Friday, 28 April 2017

Cancer Radiation Treatments

>> Good morning. I'd like first to thank the organizers for giving me this opportunity to share with all of you the challenges that we face at RTOG with bioinformatics technical issues, and the projects we have undertaken to try to resolve those issues. For those of you who are not familiar, RTOG, which stands for Radiation Therapy Oncology Group, is an institution that was founded in 1968 and is funded through NCI to conduct clinical trials for adult cancers, with the objectives to improve survival outcomes and quality of life, to evaluate new forms

of radiotherapy delivery techniques, to test new systemic therapies in conjunction with radiotherapy, and to employ translational research strategies. The bioinformatics working group was formed within RTOG to facilitate the development of personalized predictive models for radiation therapy guidance, using specific patient and treatment characteristics together with integrated clinical trial databases, and to bridge clinical science, physics, biology, information technology and mathematics.

Now, the two major components of the bioinformatics effort at RTOG, as with all bioinformatics efforts, are databases, or database integration, and data analysis. For the databases, we have rich collections of RT dose, RT images and clinical data, as well as genomic and proteomic information from biobanks and biomarker information. By mining and analyzing data from these databases, we can help protocol development and protocol operation, and facilitate trial outcome

and secondary analysis, as well as other related research. In the following slides I will present a number of examples of projects that are ongoing, or that we are trying to start, at the RTOG bioinformatics group. The first two concern data and data integration. As you see from this table, we have vast clinical data from a number of clinical trials covering multiple disease sites, including head and neck, lung, and prostate. This data can be used to model the tumor control probability and to model toxicity

such as salivary function and late or acute GU/GI toxicities. A number of such projects by Dr. Deasy and Dr. Tucker were quite successful and were funded through NCI. The next example is a project that RTOG has started in collaboration with MAASTRO, an institution in the Netherlands where advanced radiotherapy research has been going on. Dr. Andre Dekker, the head of MAASTRO Knowledge Engineering, spent half a year with RTOG late last year, and we have established this collaboration, setting up the system

of rapid learning and computer assisted theragnostics (CAT) between RTOG and MAASTRO. The following slides were used by Dr. Andre Dekker at the end of his visit to RTOG to report the progress of this project. So why do we need rapid learning and computer assisted theragnostics? It is because we want to achieve personalized medicine that improves survival and quality of life. As you see from this graphic, there has been an explosion of data over the years: in addition to the general clinical information, we have structural genetics information, for example,

and functional genetics information, proteomics and other molecular information, in addition to diagnostic imaging of both functional and anatomical types. How we can use this explosion of data to make clinical decisions is essential for moving forward with personalized medicine. One example that was presented: an experiment was conducted in which eight radiation oncologists were presented with 30 patients with non-small-cell lung cancer and asked to predict two-year survival from the information

on these patients' characteristics. You can see that the performance of these eight radiation oncologists had an area under the curve of 0.57. Recall that 0.5 is essentially random prediction, 1.0 is a perfect model, and 0.85 to 0.9 would be clinically acceptable; so this is not far from a random prediction. Now, how do we get data for rapid learning? The problems are not just technical; rather, they are ethical,

political, and administrative: administrative in terms of the time required to get the data together, political in terms of who owns the data, and ethical in terms of how we maintain patient privacy. The CAT approach is that an IT infrastructure is developed to make the radiotherapy centers semantically interoperable, which addresses the administrative issue; the data actually stays within the institution, which addresses the ethical issue; and it remains under the full control of the institution, which resolves the political issue. The components of this CAT system are the data exported from the CTMS

and PACS systems, which pass through ETL (extract, transform, load) to be de-identified and then filtered into an oncology database. The user simply queries and retrieves from such a database to obtain outcomes in a standard format of XML or DICOM. Applications can be shared to analyze this data, or distributed learning algorithms can be run against it. The key features of the system are that there is no sharing of data and it is truly federated, and both community data and clinical trial data can be connected together

and we use an extended NCI oncology ontology, with formal additions to this library. Five languages, five countries and five legal systems have been tested with this system. The major focus now is on radiotherapy, and we have a lot of help from industry involvement. This is the network as it stands so far, and we are actively talking to Chinese centers to see whether we can extend the operation of this CAT system to China, as well as to India and other countries.
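The distributed learning idea behind CAT, in which only model parameters and gradients travel while patient records stay inside each institution, can be sketched in a few lines. This is a minimal illustration with made-up data, not the actual CAT implementation: a two-site logistic regression where each hypothetical institution computes a gradient on its own records and a central server averages them.

```python
import math
import random

def local_gradient(w, data):
    """One institution's contribution: the gradient of the logistic loss on its
    OWN records. Only these two numbers ever leave the institution."""
    gw = gb = 0.0
    for x, y in data:
        p = 1.0 / (1.0 + math.exp(-(w[0] * x + w[1])))
        gw += (p - y) * x
        gb += p - y
    return gw / len(data), gb / len(data)

def federated_fit(sites, lr=0.5, rounds=200):
    """Central server loop: broadcast the current model, average the site
    gradients, update, repeat. No patient record is ever transmitted."""
    w = [0.0, 0.0]
    for _ in range(rounds):
        grads = [local_gradient(w, site) for site in sites]
        w[0] -= lr * sum(g[0] for g in grads) / len(grads)
        w[1] -= lr * sum(g[1] for g in grads) / len(grads)
    return w

# Two hypothetical "institutions" drawn from the same dose-response relation:
# a single prognostic feature x, outcome 1 when x > 0.
random.seed(0)
make_site = lambda n: [(x, 1 if x > 0 else 0)
                       for x in (random.uniform(-2, 2) for _ in range(n))]
site_a, site_b = make_site(50), make_site(50)

w = federated_fit([site_a, site_b])
predict = lambda x: 1 if w[0] * x + w[1] > 0 else 0
accuracy = sum(predict(x) == y for x, y in site_a + site_b) / 100
```

The pooled model separates the two outcome classes even though neither site ever shared a record, which is the essence of the federated design described above.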

One example shown here to demonstrate how this system works is that we connected the database from RTOG 0522 to test a model of laryngeal carcinoma that was developed from the MAASTRO group of patients. These are the input parameters that went into the modeling, and the outcome we studied was overall survival. This just shows how we query the larynx oncology database, and this is the result that we obtained from our research together, showing the area under the curve plot as well

as the stratified survival curve. Dr. Andre Dekker then went ahead and tested a distributed learning architecture in which, instead of sending the data out, the parameters of the model are sent to a centralized server, aggregated into an updated model, and sent back to the individual model servers. This iterative process produces a final optimized model. Here you can see that the performance from the distributed learning operation is somewhat better than the model

that we obtained from individual databases. There are a number of obstacles to moving this forward so that more institutions can adopt this system. One of them is the associated cost, and we are trying to obtain funding so that we can use all open-source components, making it more readily accessible for individual institutions. Now, the second project that we have just started to explore, under guidance from NCI, is the possibility of clinical trials comparing carbon, proton and photon radiotherapies.

We invited Dr. Stephanie Combs to our RTOG bioinformatics meeting in June, where she presented the database system that is used at the particle therapy center in Germany, called ULICE. Particle therapy is a very new and promising technique in radiation therapy for cancer treatment; we now have carbon-ion accelerators and protons. The advantages of particle therapy are that there is more precise dose delivery to the target, thereby offering the advantage of sparing normal tissue

and organs at risk, and also an enhanced radiobiological effectiveness from carbon ions, with the potential to increase local tumor control. Now, to demonstrate from clinical outcomes the advantages that have been explored theoretically in radiobiological and physics research, we need to organize randomized trials to establish the clinical advantage of these particle therapies. The Heidelberg Ion-Beam Therapy Center started to treat patients at the end of 2009, and its main focus is on clinical studies to evaluate the benefits

of ion therapy for several indications. The ULICE project is the Union of Light Ion Centres in Europe; these centers got together to develop a database with transnational access to perform international multicenter clinical studies, accessible by both external and internal oncology students and researchers. They are trying to establish a common database for hadrontherapy, to exchange clinical experience, to set up standards and harmonize study and treatment concepts, and to transfer know-how. Their paper has just been published in Radiation Oncology, in the July issue,

which appeared a few days ago. Their approach is to establish a centralized web-based system with interfaces to the existing information systems of the hospital, so as to avoid redundant entry, and to offer study-specific modules; they have implemented security and data protection measures to fulfill legal requirements. Their database is a rich SQL database with the capability to be dynamically extended, with interfaces to DICOM and HL7, and with a Java applet for manual import of data or receiving

and sending data with DICOM. The underlying components are compliant with the IHE framework, so they have the capability to exchange, store, process and visualize both text data and DICOM data. This is the diagram of their structure: you can see that their documentation system can be interfaced with standard hospital or other information systems, using either HL7 or DICOM standards, and is secured through gateways.

The security concept they adopted uses the HTTPS protocol, with tiers of user authority controlled by account name and password. Patient data can be pseudonymized, and depending on the authority level, the viewer can see either the real name or the de-identified information. They have been using this system for a few months now and have documented 900 patients, and they have been able to exchange and store various DICOM-RT data, viewed with a DICOM-RT ion viewer.
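A minimal sketch of what such pseudonymization with tiered viewing might look like; the secret key, field names and authority levels here are hypothetical, not taken from the ULICE system.

```python
import hashlib
import hmac

# Hypothetical site-local secret: it never leaves the hospital.
SITE_SECRET = b"institution-local-key"

def pseudonymize(patient_id: str) -> str:
    """Deterministic pseudonym: the same patient always maps to the same token,
    but the real ID cannot be recovered without the site secret."""
    digest = hmac.new(SITE_SECRET, patient_id.encode(), hashlib.sha256)
    return "PSN-" + digest.hexdigest()[:12].upper()

def view_record(record: dict, authority: str) -> dict:
    """Tiered viewing: a high-authority user sees the real ID, everyone else
    sees only the pseudonym. The authority levels are made up for illustration."""
    shown = dict(record)
    if authority != "treating-physician":
        shown["patient_id"] = pseudonymize(record["patient_id"])
    return shown

record = {"patient_id": "MRN-004217", "dose_gy": 74.0}
external = view_record(record, "external-researcher")
internal = view_record(record, "treating-physician")
```

Because the pseudonym is deterministic within the site, records for the same patient can still be linked for analysis, while an external researcher never sees the real identifier.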

Their future effort includes extending automatic and electronic study analyses. This is a very nice system that can perhaps resolve the data issue for the European light ion centers. If we start to conduct clinical trials between the US and Europe, and perhaps Japan, what would be the optimal data integration method, centralized or federated? We still need to work on these issues. Moving forward, in addition to the clinical data that was used for the modeling, what can we do to improve the area-under-the-curve performance of these models?

Obviously, we need a larger database that contains more patient information and more diversified parameters. There is simple geometrical information from CT that one could incorporate into the modeling. Also, biomarker information and radiomics, and the genetic information, can be combined with biomarkers and the clinical data to hopefully improve the performance of the modeling to a clinically acceptable level. [ pause ]
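Since the area under the ROC curve is the figure of merit quoted throughout (0.57 for the oncologists' predictions, 0.85 to 0.9 for clinical acceptability), here is a minimal rank-based computation of it, on made-up survival scores rather than any trial data.

```python
def auc(scores, labels):
    """Area under the ROC curve via its rank interpretation: the probability
    that a randomly chosen positive case is scored above a randomly chosen
    negative one (ties count half). 0.5 is random guessing, 1.0 a perfect
    ranking."""
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Hypothetical two-year-survival scores for six patients (label 1 = survived).
labels = [1, 1, 1, 0, 0, 0]
perfect = auc([0.9, 0.8, 0.7, 0.3, 0.2, 0.1], labels)     # ranks every pair correctly
random_ish = auc([0.5, 0.2, 0.8, 0.6, 0.4, 0.7], labels)  # near-random ordering
```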

Now we come to the second component of the bioinformatics effort: data analysis. In the following examples, I'll present the data mining that we have undertaken so that we can move towards evidence-based radiation therapy quality assurance. The first example concerns clinical target definition. Why is it important to perform radiotherapy quality assurance? There are two examples that I have included here. One is from TROG trial 02.02, a head and neck trial, where the outcome was actually not governed by the technique

that the clinical trial set out to compare, but by the quality of the therapy that was given to the patient. As you can see from the separation of the survival curves, compliant patients had a much better survival curve compared with patients who received therapy that was not compliant with the quality specifications of the protocol. One of the major quality violations is actually in target definition: missing targets, either in the target definition or in the radiotherapy planning, or wrong prescription,

and in some cases the duration of the treatment was also extended. Another similar example is from RTOG 9704, a pancreatic cancer trial, where we can see a similar pattern: a separation in survival arising not from the techniques the clinical trial set out to compare, but from differences in the quality of the therapy given to the patient. Again, the quality evaluation concerned target definition and missing parts of the targets in the treatment. Learning from this past experience, we set out to perform a study before we activated RTOG 1106,

the adaptive protocol for the treatment of lung cancer. We collected three dry-run cases and sent them to about 12 institutions, asking experts to contour the targets as well as the critical structures for these three lung cancer patients. You can see the distribution of these contours from the 12 experts: the mean sensitivity is 0.81, with a large standard deviation of 0.16. And this is the variation of the OARs, where you can see the differences between the contours of these OARs.
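The sensitivity quoted above measures how much of a reference contour a test contour covers; the Dice coefficient is a closely related overlap measure. A toy sketch on voxel-coordinate sets, with hypothetical rectangular contours rather than trial data:

```python
def sensitivity(reference: set, test: set) -> float:
    """Fraction of the reference (e.g. consensus) voxels covered by the
    test contour."""
    return len(reference & test) / len(reference)

def dice(a: set, b: set) -> float:
    """Dice similarity coefficient: 2|A n B| / (|A| + |B|)."""
    return 2 * len(a & b) / (len(a) + len(b))

# Toy 2-D "contours" as sets of voxel coordinates: a 10x10 square and the
# same square shifted by 2 voxels, standing in for two experts' contours.
consensus = {(x, y) for x in range(10) for y in range(10)}
expert = {(x, y) for x in range(2, 12) for y in range(10)}

sens = sensitivity(consensus, expert)
dsc = dice(consensus, expert)
```

Even a modest 2-voxel shift drops both measures to 0.8, which gives a feel for how a mean sensitivity of 0.81 with standard deviation 0.16 reflects real disagreement between experts.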

The consensus contour, in the thick line, is plotted against all the individual contours from the roughly 12 experts, and you can see a pretty substantial spread. Now, what is the impact of this variation in the contours? We evaluated the tumor control probability using the consensus contour, and we found that by doing so, the tumor control probability can be reduced by up to 100 percent compared with what the institution submitted in terms of the dose matrix. That is a substantial finding.
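The talk does not specify which TCP model was used, but a standard choice is the Poisson model, under which a cold spot in a missed part of the target collapses the control probability. A sketch with illustrative radiobiological parameters, not values fitted to any trial:

```python
import math

def poisson_tcp(voxel_doses, clonogens_per_voxel=1e7, alpha=0.25):
    """Poisson TCP: tumor control means no clonogenic cell survives anywhere.
    Expected survivors in a voxel ~ N * exp(-alpha * dose), and
    TCP = exp(-total expected survivors). Both alpha and the clonogen
    density are illustrative assumptions."""
    expected_survivors = sum(clonogens_per_voxel * math.exp(-alpha * d)
                             for d in voxel_doses)
    return math.exp(-expected_survivors)

# A 10-voxel toy target: fully covered at 74 Gy, versus two voxels underdosed
# because part of the tumor fell outside the submitted contour.
full = poisson_tcp([74.0] * 10)
cold = poisson_tcp([74.0] * 8 + [20.0] * 2)
```

Even a small underdosed region drives the control probability toward zero, which is why a contouring miss can erase the benefit the prescribed dose was supposed to deliver.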

Can we use that to perhaps explain the unexpected result from RTOG 0617, in which we were comparing outcomes with an RT dose of 60 Gy versus an escalated 74 Gy? From the interim analysis presented at last year's ASTRO, the high-dose arm had to be stopped because of the futility of continuing the trial: we had not demonstrated an advantage with 74 Gy and would not be able to with the rest of the accrual. Whether our prior investigations point to one of the possible reasons for this unexpected outcome is one of the projects currently undertaken

by the scientists at RTOG. We have also used the data collected for clinical trial quality assurance for image-guided radiotherapy to establish evidence-based quality assurance criteria. For image-guided radiation therapy (IGRT) credentialing, we asked institutions to submit DICOM data, along with the shift information that accompanies this DICOM data. There are a number of steps through which we established the current quality assurance criteria.

First, we set out to evaluate the differing performance of multiple systems and obtained the uncertainty associated with different image registration systems; that was incorporated into the passing criteria we use to review IGRT credentialing. We then set out to credential IGRT for a number of disease sites, beginning with head and neck, reported the outcomes from this IGRT credentialing, and published the results. From this investigation, we found out which items of IGRT are the most impactful, and we have adapted our credentialing process accordingly.
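A pass/fail check of this kind might look like the following sketch, where the 3 mm tolerance standing in for the measured inter-system registration uncertainty is an illustrative number, not an actual RTOG criterion:

```python
def igrt_pass(reported_shift_mm, reference_shift_mm, tolerance_mm=3.0):
    """Pass if the institution's reported couch shift agrees with the reference
    registration on every axis to within the tolerance. In practice the
    tolerance would be derived from the measured spread across registration
    systems, as described in the talk."""
    return all(abs(r - ref) <= tolerance_mm
               for r, ref in zip(reported_shift_mm, reference_shift_mm))

# Hypothetical (x, y, z) shifts in millimetres.
ok = igrt_pass((1.2, -0.5, 2.0), (0.8, 0.1, 1.5))
fail = igrt_pass((1.2, -0.5, 6.0), (0.8, 0.1, 1.5))
```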

The following two examples concern evidence-based quality assurance of radiotherapy planning, especially for intensity modulated radiotherapy (IMRT). One group from Duke, Jackie Wu and Yaorong Ge, were invited to present their research at the January bioinformatics working group meeting. They took head and neck IMRT cases, used anatomical and physiological factors, and quantified their individual influence through mathematical modeling and machine learning. They also codified treatment planning

and experience guidelines using knowledge engineering, and they established a model that uses these factors to offer peer-review-type guidance. This is an example of their results: the red lines are the modeled upper and lower bounds of the DVH, and the blue line is the actual DVH obtained from the clinic. You can see that the performance is relatively good. Moving forward, we hope to strengthen the collaboration with them so that they can test their model on the bigger RTOG database.
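The curves being compared here are cumulative dose-volume histograms; for reference, a minimal computation of a cumulative DVH from per-voxel doses, using made-up numbers:

```python
def cumulative_dvh(voxel_doses, dose_levels):
    """Cumulative DVH: for each dose level, the fraction of the structure's
    volume receiving at least that dose. It always starts at 1.0 and is
    non-increasing."""
    n = len(voxel_doses)
    return [sum(1 for d in voxel_doses if d >= level) / n
            for level in dose_levels]

# Eight hypothetical voxel doses (Gy) for one structure.
doses = [10.0, 20.0, 30.0, 40.0, 50.0, 60.0, 70.0, 70.0]
dvh = cumulative_dvh(doses, [0.0, 25.0, 50.0, 65.0])
```

A model such as the Duke group's predicts upper and lower bounds on curves like this one, against which a clinic's actual plan can be checked.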

They also plan to incorporate all types of knowledge sources into their predictive models, and to use an extended ontology framework for decision support. One example is that they want to incorporate the QUANTEC guidelines into their decision-making support. Another similar project is by Kevin Moore, who was invited to present at the June RTOG bioinformatics meeting. They take a similar approach: they identified the need, namely that IMRT plans are not always optimal, and asked how we can predict what kind

of quality is achievable given a new patient's characteristics. What they did is model the geometrical relationship of the target and critical structures, and model these quantities so that they can actually predict the DVH outcome. In this graph, the red lines are the clinically approved DVHs, the blue lines are from the average rectum model, and the black lines are from the refined rectum model. You can see the close resemblance of the model's performance to the clinically approved DVHs.

So this provides a prediction for a new patient. Going forward, again, we intend to establish collaboration with this group of researchers to test their model on the RTOG database, and perhaps we can use the results of this investigation to help us with plan quality assurance for clinical trials in future endeavors. Now, with databases and database integration in place, we are in great need of analytical methods to extract information from this data. So, a group of researchers associated

with RTOG has undertaken a project to study algorithms that can resolve some of the challenges we face in data analysis. One of those challenges is that there is tremendous uncertainty associated with all the data we are analyzing. With conventional frequentist inference methods, for example maximum likelihood estimation, confidence intervals and p-values, the uncertainty is not taken into account in a systematic manner. With Bayesian inference methods, however, we have worked together with mathematicians to introduce the concepts

of belief and plausibility, using Dempster-Shafer theory, so that we can actually take the uncertainty into account within the analysis process. This research has just been accepted by Physics in Medicine and Biology. The belief and plausibility predictions are plotted against the uncertainty range of the data points for radiation pneumonitis data that we extracted from the QUANTEC publication. We tested this against the conventional NTCP model, and it shows that our result is very much in line with the conventional NTCP parameters such as TD50

and m. We hope to apply this to more data and databases, to offer a new way of visualizing the data for making clinical decisions. As for future directions, we just saw the new funding opportunity announcement early this week: we are going to regroup into new cooperative groups, and the quality assurance for radiation therapy and imaging is being consolidated into what is now called the IROC group. How to consolidate and integrate all the data together, along with tissue bank and statistics data, is a challenge that all of us are facing,

and it is going to be an exciting period over the coming months and years. Thank you for your attention.
