Wednesday, 7 June 2017

Clinical Trials For Prostate Cancer

Welcome to the after-lunch session. We're covering data and technology, and we're going from ultrasound scan machines to international databases and electronic medical records, so quite a broad series of talks from our five speakers. So again, just a reminder, all you Twitterati, it's #cancerinnovations if you want to keep the conversation going, as Tony Jones says.

So what we'll do, we'll kick off with the first speaker, which is Vanessa Connors from the radiation oncology department at Coffs Harbour. She's going to talk about the use of bladder scanners at simulation to achieve consistent full bladder volumes. Thank you, Steve, and thank you to the Cancer Institute of New South Wales for this opportunity to present the work we've done at the North Coast Cancer

Institute. The North Coast Cancer Institute operates across three sites on the mid-north coast. We're located at Lismore, Coffs Harbour, and Port Macquarie. 25% of our workload is made up of prostate radiation therapy patients that require a full bladder and an empty rectum, and this is to help decrease toxicities to the bladder, rectum, and small bowel. Noncompliance with the required bladder preparation

was having a large impact at all centers, and it was becoming apparent that patients were often struggling to fill their bladder to the required volumes or were overfilling and becoming uncomfortable. So during treatment, if a patient does not have the required bladder volume, we would take the patients off the couch to resolve their bladder problems before we could continue to deliver their treatment.

The emphasis on bladder filling was becoming very stressful for the patients and for the staff. Sorry. So an in-house study was conducted at the North Coast Cancer Institute to increase bladder volume reproducibility for prostate radiation therapy patients through the development of a method of assessing bladder volumes at CT simulation using a Verathon 9400 bladder volume instrument.

Filling techniques and time delay variations were explored in order to establish a procedure that would increase consistency and compliance of patients. 524 bladder volumes were analyzed from our prostate planning assessment data. They were collected between November 2008 and November 2011. The data show that bladder volume sizes ranged from 14.7 milliliters to 1.5 liters,

with the average bladder volume being 321.2 mils. As seen on the graph, these bladder volumes were compared to the V50 constraint-- that is, less than 50% of the bladder receiving 50 gray. If we can keep below this constraint, we can reduce the toxicity to the bladder and the small bowel. So we concluded, therefore, that by aiming for a bladder volume between 250 and 350 mils, the chances

of exceeding this constraint were reduced to less than 5%. So initially, we conducted a pilot study with five patients who were asked to arrive at the CT simulation appointment one hour beforehand without filling their bladder. They were then given instructions to empty their bowel and bladder in the bathrooms provided and given 600 mils of water. A bladder scan was performed 30 minutes post drinking,

45 minutes post drinking, and an hour post drinking. If at any one of these scanning points their bladder volume was greater than 250 mils, we would proceed with the setup and CT scan. Using an Excel spreadsheet, we recorded the bladder volumes at each of those time periods, the approximate time between the last bladder scan and the CT scan, the volume of the bladder outlined on our planning scans, and a comment related to their general hydration.

We also recorded the weekly cone beam CT scans that were taken on the machine, with a comment for each of these. So the aim of the pilot study was just to identify any issues with the proposed bladder scan process before we went to a larger group. One of the first obstacles we discovered was 100 mil variations seen between the bladder scanner volume and our Focal contour volume. The average time between the last bladder scan and the CT

scan was approximately 10 minutes. This time delay did not significantly contribute to the variation that was being found. This led us to invite the product representative to all three departments in order to recalibrate the bladder scanners. It also had been a number of years since formal training was conducted on the bladder scanner, and during this time, many new staff

had started working in the department. This hands-on in-service held by the product representative decreased variations seen from human error. And we were also notified that a discrepancy of approximately plus or minus 15% is to be expected when using the bladder scanner. So even after the calibration and staff training, there was still a discrepancy. Taking this into account, we changed our minimum bladder

volume when scanning to equal to or greater than 150 mils. The main study was conducted with 17 patients. We asked them to follow the same instructions as given to those in the pilot study. Patients deemed ineligible for the study were prostate bed patients who had not had bloods taken prior to their simulation appointment, as it's hard to stick to a one-hour time slot when waiting for blood results, and patients

with restricted fluid intakes, such as those on certain medications, maybe heart medication, and those with renal failure and on daily dialysis. So we've changed our practice, and bloods are now taken and tested the day before the CT appointment just to ensure there are no delays. Part of our staff training included a generalized CT scan comment generated at treatment time. Staff were asked to use the following template.

Say the bladder equals a 1/4, 1/2, 3/4, more, or equal to our planned volume; the rectum is good, too much gas, or too much matter; our shifts were either greater or less than three millimeters; and whether we had to re-educate the patient on either bladder filling or emptying their rectum. The resulting comments were then categorized into five groups. So we had bladder equal to plan, bladder greater than plan, or bladder greater

than or equal to half. And these are all in the blue, which equals a pass. This means the patient could continue on to treatment without being taken off the bed. The red group, which is bladder less than half or too large, and rectal issues, equals a fail, which means the patient's taken off the bed to either fill their bladder, empty their bladder, or sort out their bowel issues.

So the data collected from the 17 patients in the bladder scanning group were compared to 17 patients who did not follow the procedure but were still having treatment at the same time. The bladder scanning cohort had bladder volumes from 221.2 to 588.1 ccs. The data retrospectively collected from the non-bladder-scanning group illustrated a much larger range, from 184.2 to 756.5 cc.

The maximum V50 from the bladder scanning group was 46.4%, with the average being 24%, and the maximum V50 for the non-bladder-scanning group was higher at 50.9%, with the average being 27.3%. When looking at the weekly cone beam CT scan comments, the non-bladder-scanning group were able to proceed to treatment on the basis of a pass only 75% of the time, whereas the bladder scanning group were evaluated as a pass 92.7% of the time.

So this is an increase in compliance of about 17.7 percentage points. Overall, the results have been positive and have shown the bladder scanner is a useful tool in helping achieve consistent and appropriately sized bladder volumes in prostate patients. Because of these findings, we have revised our current patient bladder preparation letters and bladder filling procedures. Our dietitians also refined their bowel preparation

requirements, and we've introduced the use of a daily magnesium supplement in an attempt to decrease bowel issues. For CT simulation, patients are now asked to arrive 45 minutes early, where they're instructed to drink 600 mils of water. The bladder scanner is then used to assess the bladder volume after 45 minutes, and we proceed with the procedure if the bladder is greater than or equal to 150 mils.
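To make that workflow concrete, here is a minimal sketch of the decision logic as described in the talk. The function names, and the single re-scan after clinical judgment, are illustrative assumptions, not the NCCI department's actual procedure document.

```python
# A minimal sketch of the bladder-preparation decision logic described above.
# Names and the re-scan step are illustrative, not from the NCCI system.

MIN_VOLUME_ML = 150           # minimum scanner volume to proceed (>= 150 mils)
TARGET_RANGE_ML = (250, 350)  # planning aim associated with a <5% chance of
                              # exceeding the bladder V50 constraint

def can_proceed(scanner_volume_ml: float) -> bool:
    """Return True if the bladder scan allows setup and CT to go ahead."""
    return scanner_volume_ml >= MIN_VOLUME_ML

def simulation_workflow(measure_volume) -> bool:
    """Patient drinks 600 ml, is scanned at 45 minutes, proceeds if >= 150 ml.

    `measure_volume` is a callable standing in for a Verathon scanner reading.
    """
    volume = measure_volume()  # scan at 45 minutes post drinking
    if can_proceed(volume):
        return True            # proceed with setup and CT scan
    # Otherwise clinical judgment applies: more water, or wait a further
    # 15 minutes and re-scan, as in the treatment flow chart.
    volume = measure_volume()
    return can_proceed(volume)
```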

The final bladder volume is recorded in the bladder and bowel assessment, which is then transferred to our site setup for easy reference during treatment. And the success of the bladder scanner has been extended to all bladder filling patients throughout the department. So currently on treatment, we're using this flow chart when the bladder scanner is needed to help assist patients

with bladder filling. The volume to aim for on the bladder scanner will be recorded from simulation. If the volume is not reached in 45 minutes, clinical judgment is to be used as to whether the patient may be required to drink more water or wait an additional 15 minutes before being re-scanned. Currently, we're focusing our efforts on refining the bladder scanner process on treatment

and whether it should be used routinely on days one to three, or if it should remain on an as-needs basis. In the near future, we also hope to explore the need to individualize drinking amounts and time delays. Recently, we've gone back and analyzed results from the past few months. In May, we carried out an audit on 20 Coffs Harbour patients to see if we're still achieving acceptable compliance.

620 images were analyzed, and of these, only 22 failed: 11 were due to bowel, and 11 were due to bladder. So this is a further increase in compliance of about 3.8 percentage points compared to last time, and it gives us an overall compliance of 96.5%. And I'd like to take this opportunity to thank Leah Cramp and Maree Woods for all their hard work on this study, and the team at NCCI for all their data recording.

Thank you. [applause] Thank you, Vanessa. Anyone got any questions? Uncross your legs and come to the front of the auditorium. Extra cup of coffee. Just a quick question: have you had much difficulty with inter-observer variability from one RT to another? Do you want to just comment on if you've

looked at that at all? Initially, when we had our training, it really highlighted the different ways that many of us used it. Even after the training, there were probably two or three who took charge, myself being one, and there was another lady, and we really tried to put the training into practice. But people just won't seem to follow. So usually, one RT will try, and if they don't actually

get a volume, they'll come and find someone who's a bit more experienced to see if they can get more of a volume, so we can continue on to treatment. At this stage, we just can't seem to get the staff as excited as we are about our bladder scanner. OK, thank you, Vanessa. Our next speaker is Professor Tim Shaw

from the University of Sydney. He's going to be talking about a new Qstream e-learning method to look at best practice cancer care through knowledge retention, following on from a lot of the primary care talks we've had this morning. Thanks very much. I was trying to think of a segue between that talk and mine, and Qstream was actually developed by a urologist.

Perhaps hence the name. So I think there is a connection there. Anyway, before I start, I'd like to acknowledge that this is a project that was funded by Cancer Australia. And I know Lauren Deutsch is in the audience, and Rob Sutherland. And then, I also worked with Vivian Milch, Sue Sinclaire, and Kathleen Mahoney. And obviously, my team at WEDG, which

is Bobby Moore and James Nicholson. So I guess before I start, I want to just lift a little bit out from education and look at the challenge we face. Really, if you look at a theater like this, it's been known for a long time that this is the worst possible place to go to actually learn and change your practice. That's not why we're here. There's a whole lot of reasons to go to conferences,

and it's networking and those types of things, but we know that it's bad at changing practice. If you look at e-learning, the evidence is pretty poor around that as well. If you really dig around, there's not a lot of evidence. There was a big paper that came out of the US last year looking at studies in high schools, actually, and medical CPD, and the evidence is pretty weak. I think the other key thing with online learning

is that it's pretty damn boring a lot of the time. There's a lot of really boring stuff out there. How many people have done an online course that just blew their socks off? One. That's fantastic. I'll have to talk to you later. So I guess the other challenge, as well, when you talk about education-- and I have to say,

I snuck off and slipped this in after John's talk earlier on-- but education is just part of the jigsaw puzzle of implementation science. And so many educational products are developed and deployed totally out of the context of the broader implementation. I now call myself an implementation scientist rather than an educationist. It's better for funding. But also, it is a reflection that I

think that education needs to be considered, if you want to actually have change in practice, in a broader context of practice and service delivery. So what did we try and achieve with this project? We were trying to develop a program for general practitioners that's evidence based, appeals to busy GPs, and, as I said, impacts on their practice. So I was lucky enough to spend a year at Harvard Medical

School about four years ago, and I met Price Kerfoot, who's the urologist that created Qstream. And it just really captured my attention. I kind of call it-- and he does as well-- anti-online learning, or perhaps micro-learning. It's kind of deconstructing online learning down to some very small, bite-sized pieces. And he's a surgeon, so that's what surgeons do. It kind of fits in almost

with incidental learning. So it's the kind of learning you do as you move through life. To give a bit of background-- I've called it spaced education; it was a company spun out of Harvard, and it was originally called Spaced Ed. Now it's called Qstream, so I mix that up sometimes. It was a highly evidence-based program.

It has impressive published results, and I'll talk about a couple of those in a moment. What grabbed me was that it's been very well received by clinicians on the whole. It takes little time to complete. It's very scalable. And you get instant feedback on performance. It's based on two effects in education, one of which I think everybody knows.

If you test somebody on something, you tend to capture their attention. They remember it better than if you send them a whole pile of passive material. But the second trick that Price combined with this is the psychological finding that if you repeat information over time, you get much, much better retention of memory. And we're now starting to show actual significant behavior change.

Advertisers have known that forever. That's why your kids' adverts are repeated over and over again, so they force you to go out and buy something. It's the repetitive nature of that, not in case you missed it. So just a little bit about the data on spaced education, or Qstream. This is the slide that I first saw Price present. He's a VA, Veterans Affairs, surgeon,

and so he has the luxury of a full electronic record to do experiments with. And the square box, the small square box: he ran a small Qstream program over about a half-year period, a bit over a year, in primary care doctors. And then he tracked them out over a period of time. And this was looking at their inappropriate PSA screening. He did inappropriate because there's not a lot of argument

about what's inappropriate. And he managed to show really quite a significant drop in their practice, just from doing a small online course. And you can see, it's actually sustained out to 108 weeks there. The line I really like there is that dotted line you can see, which is the USPSTF. I'm not sure what that stands for,

but it's basically where the government did a massive campaign, purely coincidentally, on inappropriate PSA screening right in the middle of his study. And it had no impact on what the primary care doctors were doing. It was just quite an interesting little side product. In terms of the work I've done with Qstream since then, I did a study with all the interns at Harvard Medical

School about three years ago, and it showed a significant impact on safety and quality. We've just done a study with Jane Phillips at Sacred Heart where we're getting some really nice pilot data around actually impacting on nurses' assessment of pain, and we're also actually getting kicks in the actual self-reported pain scores from people doing this program. The last point: I'm running a program

as we speak across about 600 nurses at Brigham and Women's Hospital in the US. And it's just really hit a note. I've actually got competition there. I've got all the wards competing with each other, and it's become quite a furious battle. But I got this lovely quote the other night that it was about the most talked-about education activity in the memory of hospital leaders.

As an educationist, that's just beautiful feedback. I could go away and die now and be a happy man. So what does it look like? It's terribly simple. These are just four screenshots that Bobby did up of one of the questions we have in the Qstream program. All you do is you're presented with a case, at the top left. We put a lot of work into developing really sharp cases.

Many have four or five short-answer questions, and it's not rocket science. You submit that. Then, on the top right, you're given feedback on your performance versus your peers. And then down the bottom, which you won't be able to read, you're given really succinct feedback and links out to resources and data around that. What we're actually starting to do

is put actual hospital data in that feedback as an audit and feedback tool, which I think is reinforcing again. And in the Boston programs, I actually went and shot a pilot-- well, I got the chief nurses to shoot, just with their iPhones, pictures of the nurses. And we've actually got the nurses in the cases. So again, it's reinforcing this really tight learning. You just get a couple of cases a day.

You answer them. And then, over time, they repeat. And you've got to answer them twice to retire them. And that's it. It's that simple.
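As a rough illustration of those mechanics-- a couple of cases a day, repeated over time, retired after two correct answers-- here is a minimal sketch in Python. The interval lengths and names are assumptions for illustration, not Qstream's actual scheduling algorithm.

```python
from datetime import date, timedelta

# Illustrative intervals only; the talk doesn't specify Qstream's spacing.
REPEAT_AFTER_CORRECT = timedelta(days=12)
REPEAT_AFTER_WRONG = timedelta(days=5)
RETIRE_AFTER = 2  # a case retires once answered correctly twice

class Case:
    def __init__(self, question: str):
        self.question = question
        self.correct_streak = 0
        self.due = date.today()

    @property
    def retired(self) -> bool:
        return self.correct_streak >= RETIRE_AFTER

    def record_answer(self, correct: bool) -> None:
        """Update the streak and schedule the next repetition."""
        if correct:
            self.correct_streak += 1
            self.due = date.today() + REPEAT_AFTER_CORRECT
        else:
            self.correct_streak = 0
            self.due = date.today() + REPEAT_AFTER_WRONG

def todays_cases(cases, limit=2):
    """A couple of cases a day: whatever is due and not yet retired."""
    due = [c for c in cases if not c.retired and c.due <= date.today()]
    return due[:limit]
```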

What we're finding is we're having a significant impact, certainly on knowledge. There's no doubt about that. Price has done a dozen randomized trials around knowledge; it's a no-brainer with that. But we're starting to see impacts on behavior as well, as I said. So what we did was we developed two programs aimed at GPs around management of breast cancer and diagnosis and referral for lung cancer. It's a small pilot that we've done, I have to say. We've only got about 20 people in each pilot at this moment,

but it's getting much larger. We're rolling it out to a larger cohort at the moment. What we found, which surprised me-- we ran this over Christmas last year-- was we got over 80% completion rates amongst both groups. Exactly the same completion rate, actually. I felt it was a disaster running it at Christmas. Never run anything at Christmas. But they kept up with it, because it runs for about six weeks.

You have maybe about a dozen questions, and by the time you answer them, they roll out over a six-week period. The feedback we got was interesting. In the breast one, on the right-hand side, we had very strong positive feedback: they wanted to do more, and they thought the scenarios were realistic. In the lung cancer one, I think because we were really much

more specific about diagnosis, they didn't enjoy it as much, which was interesting feedback as well. But the last point, which I think was really interesting, was they still found it a really neat way of getting information out around guidelines and change in practice, which I think is one of its key strengths. Qualitative feedback from surveys that we ran-- it was a mix of GPs and nurses.

This is the typical kind of feedback you get with Qstream. Some people write back and say, I hated it. They don't like the repetition. You're always going to get that in any course. But generally, we get very strong positive feedback like this. I won't bother reading those out. Most of the critical feedback was actually built around the lung cancer cases,

where they just found them too simplistic. And so we're going back and revising that at the moment. So in terms of the findings, I think I was really pleased. This was one of the first studies I've done in Australia with GPs, and it was well accepted by the pilot group. Obviously, that's a small group. We've got to do it with more people. I think it was interesting-- and this is something again

as an educator-- that you can have the same program, drop it into different environments, change things slightly, and get massively different feedback results. So small differences in the program matter; you've got to get the cases right to get the program running well. I think people preferred breast cancer, as well. I think it was just a more engaging area

than the lung cancer. And I think one of the key things is I'm just starting the analysis now on what questions people got wrong. So you can start to look and see-- and it's quite consistent. People tended to get certain questions wrong. We've got to work out whether it's just a bad question. Assuming it's not a bad question,

then you're getting data around what kind of education we can develop to focus on further. So just in conclusion, I think what we're looking at now is how can products like Qstream, this micro-learning-- how can we get them out to broader groups of GPs? Again, how can we make this part of a broader program? So we're looking at lung cancer across a number of sites at the moment.

In my role in the Catalyst translational research center, it'd be nice to put something like this in alongside the kind of systems change and other things that John and others were referring to this morning. And that last point I've just put in there: in the Boston study I'm doing at the moment, I threw competition amongst nurses in as an afterthought, and that's what's been totally driving it. I've been having nurses-- talking to nurses.

We don't identify them. We give them all rock band names. So they'll be like Bow Wow Wow, and Pink, and things like that. And actually, we're getting feedback from the floor that they're all going up to each other and saying, are you Pink? You're beating me! And so they're actually having this big competition now about how they're answering the questions on leader boards

and things like that. So I think we need to think how we use different methods to engage people in educational initiatives and programs. I'll stop there. Thank you, Tim. Any questions from the audience? Just a quick one, Tim. We're working with the Cancer Institute

at the moment developing an e-health system which will try and encourage patient self-management, as well as patient-reported outcomes driving care, et cetera. Can you see-- my mind started ticking over-- whether this might be adaptable for patients in the survivorship stage of that? I think it's a really interesting idea. When I was actually in the States-- Karen, my wife,

was on a fellowship there, you know. And there was another person on a fellowship with us who's a diabetologist. And we actually created a program, for fun, for patients about that initial period of diagnosis with diabetes and how you handle that. I think we didn't-- we ran out of time to really test it properly. I think it has huge potential, because it also

pops up on your iPhone. It's all the kind of things that patients use. Yeah, and it's that reminder function. I think it's a very interesting point. Great. Thanks, Tim. So our next speaker is from St George Hospital. It's Roslyn Ristuccia, and Admir, who's going to come up and help.

So this is going to be a Prezi presentation. So again, anybody who's got vertigo, hang on tight. And this is, again, just following on around now bringing information to patients and also to referring clinicians. So this is ClinTrial Refer, a mobile application to connect patients with local clinical trials. Hi. So I'm hoping everyone's paying attention,

because there's going to be a test at the end of this, and I'm going to be repeating myself constantly. OK, and we're also being a bit innovative here: we haven't got a PowerPoint presentation for you. I'm Roslyn from the hematology clinical research unit at St George Hospital, and this is Admir from the Concord research unit. And our passion is clinical research. It's one of the most effective ways,

I think we all know, to improve patient outcomes and offer patients a broader range of treatments. But there are significant challenges with recruiting patients to hematology malignancy studies, because there's often a low incidence of these cancers, and there are geographical challenges in New South Wales. And we can't run all trials at all locations; it's just too expensive. If we want to offer our patients all available treatment

options, then we need to refer patients between hospitals. But keeping hematologists up to date with statewide listings of research trials is really difficult. Trials open and close and change status. And we found that patients are still not being offered all the options for clinical research that they could be. And the work involved for a hematologist to transfer a patient, refer a patient

from one site to another, is really difficult, and it was proving a barrier at some sites. A cultural change was needed. And so technology is providing us with many new opportunities for innovation, making knowledge management so much more accessible to anyone. Ask someone under 25 a question; they'll pull out their phone. So smartphone apps have the potential

to revolutionize and improve the delivery of patient care. So the hematology clinical research network of New South Wales and the ACT decided to develop an app so that we could create a current, comprehensive list of hematology trials, simply at your fingertips. Called ClinTrial Refer, it's free to download from all the app stores, from iTunes, from Google Play. And it went live in the stores in May, just a few months ago.

And hematologists can even use it during patient consultations and find out if there are any suitable trials open for their patient. For example, if your patient has non-Hodgkin's lymphoma, you can search the list of currently available lymphoma studies in New South Wales. And the patient might be interested in participating in the Australian Leukaemia and Lymphoma Group study called Remark.

The clinician can even email the patient with the trial details using the communication tool that's built into the app. You can copy the trial registry number and go directly to the ANZ Clinical Trials Registry for more extensive information. Technical impact. The investigator can also pre-screen the patient to see if they meet the eligibility

criteria, the inclusion criteria and the exclusion criteria. And they can identify which hospital is offering the study. And then they can go and contact the study site coordinator using the email or the phone link that's built into the app. So this is enhancing our ability to transfer, to refer, patients between hospitals easily and quickly. And we have outcome data since May.
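As a rough sketch of the kind of lookup the app performs-- not ClinTrial Refer's actual code or data model-- filtering a statewide trial list by disease and status might look like this; the record fields and contact details are hypothetical.

```python
from dataclasses import dataclass

# Hypothetical records; the app's real data model isn't described in the talk.
@dataclass
class Trial:
    name: str
    disease: str
    status: str        # "open" or "closed"
    hospital: str
    coordinator_email: str

trials = [
    Trial("Remark", "lymphoma", "open", "St George", "coordinator@example.org"),
    Trial("DAWN", "lymphoma", "open", "Concord", "coordinator@example.org"),
]

def open_trials(trials, disease):
    """List currently open studies for a diagnosis, as in the app's search."""
    return [t for t in trials if t.status == "open" and t.disease == disease]

for trial in open_trials(trials, "lymphoma"):
    print(f"{trial.name} at {trial.hospital} -- contact {trial.coordinator_email}")
```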

In fact, previous referral data showed that the state was averaging around three patient referrals between hospitals per month. That's now climbed to over nine per month in just this short time-- the referral rate has tripled. And locally, clinicians can search for open trials at their own hospital, which is a handy reference tool that is always current.

Anecdotally, we're seeing a rise in recruitment within the hospitals as well. And Google Analytics is a wonderful thing. ClinTrial Refer has been downloaded by over 600 users, with over 15,000 screen views. And importantly, 90% of the app users are repeat users, demonstrating engagement with the app. That is, once it's downloaded, they're using it over and over and over.

The feedback from the hematologists using it is resounding. And despite its local application, the app has been accessed in over 46 countries. Probably we'll get some copyright problems there. But really, what I want to do is give you some examples of the local referral system working in just these last few months. This is just a small snapshot of a few of the referrals.

We can't fit them all on. A patient referred from Westmead to St George Hospital with a rare mantle cell lymphoma to be treated with [inaudible] on a trial; very few other options for that person. A 67-year-old woman went from Gosford to Concord to participate in the DAWN study, to be treated with ibrutinib for a refractory follicular lymphoma. And the list is becoming endless.

And we're also seeing other different uses and flow-on benefits from this app. The app is being used in MDT meetings to make sure that all the trial options are discussed for those patients. The app is being used by registrars and residents, extending and reinforcing the trial culture into that level of their career. And we're seeing the app being used by private hematologists,

and we're seeing referrals from this source, which has only ever been sporadic in the past. But now that they have access, we're seeing referrals to our hospital. And now we're in the process of duplicating this technology for two other research networks: hematology in Victoria, and the adolescent and young adult network in New South Wales. And they've kindly agreed to let me show their apps.

They're not live yet, but look for them next month. And we have interest from other research groups as well, since the app is applicable to any research group, not just hematology. It'd be suitable for breast cancer, lung studies, really anything. And finally, our wonderful cancer consumer groups are really enthusiastic about the app. The Lymphoma Australia website recommends the app to its consumers.

And Cancer Voices Australia's representative has actually publicly applauded the app for its benefit in informing patients. So it's free. It's easy to access and local to the hematology network. And to use the app, you don't need to be technology savvy. You just need to take the phone out of your pocket. Thanks, Roslyn and Admir. Any comments or questions from the audience?

Looks less frustrating than Candy Crush and Angry Birds. So I guess just a comment, and Professor Currow might want to talk. Certainly one of the key aspects for all LHDs is to increase the referral of patients onto clinical trials. And I think that's a terrific opportunity for people to access them, and you're already seeing it working for people crossing great distances. So congratulations on that.

All right, now we'll move towards collaboration, and international collaboration, with Associate Professor Shalini Vinod from the department of radiation oncology at the Liverpool cancer therapy center, who's going to be talking about a rapid learning health care network for prediction of outcomes in lung cancer patients. Thanks, Steven. OK, so I guess in this day and age, as time goes on,

the amount of data being generated is increasing exponentially, and we really have an explosion of evidence to sift through to try and guide our decisions on management. Although clinical practice guidelines seek to summarize the highest quality data, these are largely based on clinical trials performed in highly selected populations. And the question is whether their outcomes

are generalizable to the general lung cancer population that you would see in clinic. So the idea of rapid learning is learning from each patient treated in the local clinical environment. Patient, tumor, and treatment data are collected and then analyzed to evaluate outcomes, which can then be used to guide future treatment practice. As we've heard today, oncology systems are becoming increasingly electronic and paper-free,

and much of this data is routinely collected in these oncology systems. Certainly in terms of radiotherapy, all the data can be retrieved electronically through radiotherapy planning systems, which have been around for many years. So this rapid learning environment can be used to generate prognostic tools to guide both clinicians and patients in making treatment

decisions specific to their clinical scenario. It can be used to compare the outcomes of different treatment approaches, and also to assess the cost-effectiveness of treatment. So the aims of the study were to take a decision support tool developed at the MAASTRO clinic in the Netherlands and apply it to patients treated at the Liverpool and Macarthur cancer therapy centers. The questions we had were whether this tool could be commissioned

in our setting, and whether rapid learning could be deployed at the center with minimal resources. So this is just showing the decision support tool that was developed at MAASTRO. This was based on data collected on 322 patients who were treated with radical radiotherapy for non-small cell lung cancer. A complete set of variables which could potentially affect survival was collected and was

analyzed to see which remained significant in predicting survival. Any variables that were not predictive were excluded, and the final result was a nomogram containing the variables gender; WHO performance status; FEV1, which is the forced expiratory volume, a pulmonary function test; PET lymph node stations; and the GTV, or gross tumor volume. And based on the scores, three risk categories

were identified with different survivals. Just wait for that little thing to go. So for this study, we used our local electronic oncology system, MOSAIQ, and treatment planning system, XiO, as the data sources. Open source tools were used to extract and analyze the data. And the inclusion criteria were those with non-metastatic non-small cell lung cancer planned for a minimum dose of 45 gray radiotherapy,

and patients who had at least one fraction administered. So there was no missing data allowed for the outcome variable, which was two-year survival, and also for the tumor volume, because the model was quite sensitive to this. Missing data was allowed for the other variables of gender, performance status, FEV1, and lymph node stations. Now, we assumed that the data found in the clinical data sources was correct and complete.

It's really not feasible to go back and review this large amount of data to check it for quality. The underlying hypothesis is that the amount of clinical data will compensate for any data quality issues, which may or may not be present. Having said that, we do have to think of a way to impute missing variables. And it was done in a couple of ways. One was using the initial MAASTRO data

and using mean values of variables to apply to those that were missing. But the other way, which is slightly more sophisticated, is using Bayesian network imputation, whereby you use what you do know to guess what you don't know. So for example, someone with N0 stage disease would be assumed to have no PET lymph node stations positive. And this imputation does bring up some interesting features,

because the patients with lower stage or early stage disease who were referred to radiotherapy were actually found to have a lower FEV1, or poorer pulmonary function, and poorer performance status than those with a more advanced stage. And the reason for this is because these are selected patients. They are the patients who are rejected for surgery, who are not good enough for surgery,

and therefore are referred to radiotherapy. So just under 4,000 lung cancer patients were identified, and this is showing you how we reduced the numbers. The green refers to those in whom we know the data element, or they fit the inclusion criteria. The red are those who are excluded on the basis of the criteria. And the gray is really the unknowns. So we start off with quite a large cohort,

and then we limit it to those who had stage one to three disease. And then we limit it to those who had a radical course of radiotherapy, then again to those in whom two-year survival is known and a volume is known. And out of these just under 4,000, only 174 were actually eligible to test the model. Interestingly, what we found is about half

of our localized lung cancer population who come for radiotherapy are receiving palliative radiotherapy rather than radical radiotherapy. And that's being quite conservative with the minimum dose we chose. So in terms of the study population, this table is showing the comparison between the training cohort from MAASTRO and the Liverpool population: similar median age and gender distribution.

The Liverpool patients had slightly larger tumors, by 50 cc. The Liverpool patients had slightly fewer stage 3B and more 3A patients. And highlighted in red are the missing data elements: 15% missing ECOG performance status, 59% missing FEV1, and we had no systematic way of recording the PET lymph node stations, even though PET was performed. If we don't record it, we can't pick it up in this.
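As a rough illustration of the simpler of the two imputation approaches described above-- filling gaps with mean values from the training data-- here is a minimal sketch. The column names and values are hypothetical, and the study also used the more sophisticated Bayesian network imputation, which isn't shown here.

```python
import pandas as pd

# Hypothetical patient extract; the real study pulled these from MOSAIQ/XiO.
patients = pd.DataFrame({
    "gender": ["m", "f", "m"],
    "who_ps": [1, None, 2],        # WHO/ECOG performance status, some missing
    "fev1": [2.1, None, None],     # FEV1 in litres; 59% missing at Liverpool
    "gtv_cc": [85.0, 120.0, 40.0], # tumor volume: no missing values allowed
})

# Mean imputation from a training cohort (stand-in MAASTRO-like means).
training_means = {"who_ps": 1.0, "fev1": 2.3}
imputed = patients.fillna(value=training_means)

# The outcome variable and GTV must be complete, so eligibility filtering
# drops rows where they are missing rather than imputing them.
eligible = imputed.dropna(subset=["gtv_cc"])
print(eligible)
```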

So just moving on to the results: when we applied the MAASTRO model to our local data, we found that the model did work, however only to identify two separate risk groups. One was a low risk group with a five-year survival of about 40%, which correlates to the blue curve from MAASTRO, and then between what MAASTRO identified as the medium and high risk groups here, we didn't find a survival difference-- five-year survival probably 5% to 10%,

and I think it actually corresponds more to the red group of the MAASTRO clinic. And just also for comparison, this is the survival as per TNM stage. And I guess there's quite a bit of overlap here in terms of survival curves, and the prognostic model seems to be doing a better job in displaying or separating out the survival curves. So although the TNM stage is very

good at predicting overall lung cancer survival, when you home in on those patients who are getting radical radiotherapy, it may not be as discriminatory, perhaps, as a model which contains variables such as the patient's performance status and respiratory function. So just in terms of discussion, we found that it was feasible to deploy

what we called a rapid learning tool with minimal resources in a busy cancer clinic. And despite the fact that there were missing data, and that we did not check the data for quality, we were able to apply a decision tool which did stratify patients into different risk groups. And this can allow learning in the real clinic environment. And potentially, it could be used to identify patients or select patients for treatment,

but also, perhaps, for treatment intensification, or even de-intensification if we thought the prognosis was poor based on those other factors. So this is still very much a work in progress. We plan to use the Liverpool data to improve the model and also apply it to other centers, commencing at Wollongong. There are models in other cancer sites which we'll also pilot, and that'll start off with larynx cancer.

And it could also be applied to other endpoints, such as toxicities and cost. Any questions? Shalini, is there some missing data that you think is the reason why those middle and low groups don't separate? OK, yeah, we have thought about this. And I think what's happening is that we are more likely to treat some of these patients

with palliative radiotherapy than MAASTRO were. So I think our poor prognosis group is more likely to be treated palliatively and be out of this cohort. So we're choosing the very best of the poor prognosis patients, who end up in the medium risk group. I think about 15% of our patients fell into that category, but for MAASTRO, it was about 25%.

So it's probably practice differences. Yeah. So again, important: looking at benchmarking between centers and also giving patients some real-world experience on their likely outcomes from a treatment that we recommend. So we're going from around the world back to what [inaudible] would probably call God's country, the south coast of New South Wales, near to where I live.

And so we've heard already from Illawarra Shoalhaven about their EMR. And so now we're going to look at some of the quick wins that they've got from implementing their local information system to rapidly learn and inform care at Illawarra Shoalhaven LHD. Good afternoon. So in January 2011, Illawarra Shoalhaven cancer services outpatients went live with an oncology information system.

This, combined with the recent rollout of the oncology information system into the inpatient unit, provides the service with a complete electronic record for all cancer treatment within the district. Today, I'd like to talk about the use of data generated by this electronic record as a sort of micro rapid learning system. I'd like you to think of the entire rapid learning system process as a tree, with rapid learning concepts at the foundation of the process.

And then we're sort of watering it, feeding it data, and then, hopefully, picking the fruit of this process, the sort of improved quality of care. So last year at this conference, Amy Abernethy presented on the development of rapid learning systems in the oncology setting-- a pretty exciting talk. One of her colleagues, Lynn Etheredge, defined rapid learning health care as a model, one that generates, as rapidly as possible, the evidence needed

to deliver quality patient care. In this model, we use as much data as possible, as soon as possible, from the collection of data at the point of care. It can then be used to inform clinical care and service delivery. ASCO has recently developed a proof-of-concept rapid learning system called CancerLinQ. Its vision is to assemble and analyze

point-of-care information in a central knowledge base, which will grow smarter over time. Specifically, the system aims to do the following points. And ASCO has been able to show proof of concept in the breast cancer setting, but it's still early days, with many issues left to iron out. This slide is just highlighting, visually, the concept of CancerLinQ. And I guess, in addition to CancerLinQ,

there are the exciting prospects that were just discussed by Dr Vinod today, in collaboration with Dr Andre Dekker and the MAASTRO team. They're actually showing the power that a large amount of data can deliver and are already performing within a rapid learning system. On top of these great examples, we've got the use of the IBM supercomputer Watson.

I'm not sure if anyone's familiar with Watson. It won the American game show Jeopardy against two of their champions in 2011. Watson is now being used as a clinical decision tool at Memorial Sloan Kettering and has ingested more than 600,000 pages of medical evidence, and more than 2 million pages from medical journals. This is combined with an ability to aggregate all this information with over 1.5 million patient records.

Applying this sort of power to a rapid learning framework provides hope and excitement for the future. Whilst this rapid learning model is being developed on the concept of big data, it's also possible to apply these concepts at a localized level, with the aim of achieving similar outcomes. So what is at the core of our district's use of a rapid learning system at a local level? Fundamentally, we're mining data routinely generated

in the course of routine clinical practice and held within our system. Clinicians within cancer services identified a selection of projects to be investigated. The aim of these projects is to inform patient care and service delivery. And whilst this use of data is far from a grand-scheme rapid learning system, data stored in the oncology information system

provides opportunities for data integration in support of hypothesis generation. And it also enables us to measure the effect of changes in real time, speeding up the process of change management and practice improvement. How do we gain this data, and how do we ensure its quality? As my bank tells me: Mr Bell, you can only pull out what you've put in. We need to ensure the quality

and completeness of the data. We can do this in a number of ways, including the use of manual and automated QA. For example, an automated report runs at the end of a clinic to inform the clinicians of uncompleted data fields in a follow-up assessment and provide status for overall numbers. Additionally, there's regular staff training, education, and support.
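A minimal sketch of that kind of end-of-clinic completeness check, assuming a simple record layout-- the field names here are illustrative, drawn from the mandatory intent and dose reduction fields mentioned later, not the OMS schema itself:

```python
REQUIRED_FIELDS = ["treatment_intent", "dose_reduction", "follow_up_status"]

def incomplete_records(assessments):
    """Flag assessments with empty required fields, as an end-of-clinic QA report."""
    report = []
    for record in assessments:
        missing = [f for f in REQUIRED_FIELDS if not record.get(f)]
        if missing:
            report.append((record["patient_id"], missing))
    return report

# Example run over two hypothetical follow-up assessments.
clinic = [
    {"patient_id": "A001", "treatment_intent": "curative",
     "dose_reduction": "none", "follow_up_status": "booked"},
    {"patient_id": "A002", "treatment_intent": "", "dose_reduction": "none",
     "follow_up_status": ""},
]
for patient_id, missing in incomplete_records(clinic):
    print(f"{patient_id}: missing {', '.join(missing)}")
```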

We're also working towards making it easier to enter the data, as it can be perceived as time consuming. Our current software is mouse-based, and we need to change this software to be less mouse-centric-- a more interactive product that's easily used from mobile devices, as over half of our data entry is now from mobile devices. Automation of parts of the clinician's process has reduced the time spent transcribing treatment

information, enabled more time to pay attention to data entry, and rewarded complete data entry. Just an example: the nurses used to have to transcribe what treatment the patient had into a sort of purple book, which the patient took with them everywhere. It was time consuming and risk-laden as well. Now, with the vendor having rolled the data into the OMS, it's an automated report that gives a much better picture

for the patient and for the people involved in their care. We've also embedded data entry into the workflow and made data points mandatory to ensure completion of data-- the intent field and dose reduction fields are all now mandatory. So we've really tried to lock down our data entry. So now we have our data, or nutrients; what fruit are we able to harvest from our rapid learning tree? First, I'll just touch on the rate of phlebitis

due to dacarbazine within our service. Whilst there is discussion of phlebitis being caused by the infusion of dacarbazine in the clinical trial literature, there are little to no published data on the incidence rates of phlebitis in practice. Despite the lack of reported frequency, many units have either increased infusion time lengths and/or run concurrent fluids with the drug as, anecdotally, they have noted increased

phlebitis rates. Through the data entered within our system, we are now easily able to identify all patients who have had either ABVD or single-agent dacarbazine, and whether there are any reports of acute or delayed phlebitis. Our service was able to report that acute phlebitis occurred in 4.5% of all their infusions, and that delayed phlebitis was reported in 6% of infusions. Of the 14 reported instances, only one instance

was related to single-agent dacarbazine. Females and patients under the age of 65 were most likely to report phlebitis. So how do we act on this? Our current infusion time for dacarbazine is two hours. Due to these results, we're looking to reduce the infusion time of single-agent dacarbazine to the eviQ recommended time length of one hour. Now, this will reduce the time spent

by the patient in the unit and free up treatment chair time. We're also now more alert to monitoring young women receiving ABVD for signs and symptoms of phlebitis. A further project embarked upon in this rapid time frame, especially when we're comparing it to going through the old paper version, was a cost comparison between concurrent infusional 5FU and the oral alternative, capecitabine, for locally advanced rectal cancer patients.

We noted anecdotally that a number of complications are related to central lines, and postulated that the overall cost of 5FU would be greater than that of capecitabine, despite the higher unit cost of the capecitabine itself. Again, we were able to easily and swiftly identify all suitable patients and identify rates of central line complications experienced by patients. This data was combined with a cost analysis

of the resources required for central line insertion and maintenance. And here, you can see the calculated costs for the three alternate pathways: the total cost of capecitabine is about $700; for 5FU with a PICC line, about $1,000; and then with a port, you start getting to over $2,000. So again, we're able to project a total cost saving for the service of around $12,000 a year

for the district, whilst also reducing morbidity related to central line complications. Indeed, the service was able to demonstrate that if the majority of suitable rectal cancer patients were commenced on capecitabine instead of 5FU across the state, it would result in a projected cost saving of $224,000 for the state of New South Wales. A project that demonstrates, most comprehensively, the application of a rapid learning system in a localized

setting is our current project on scheduling within the oncology day care unit. This project, unfortunately, wasn't funded by the Cancer Institute. I'm not bitter about it. [laughter] The unit noted that patients' scheduled appointment times consistently ran over. The scheduled times were based on historical values and those suggested in eviQ.

The inconsistent scheduling appeared to be greater with certain protocols, certain cycles, and, possibly, certain types of patients. The end result of the inefficient, incorrect scheduling was extended wait times for patients on their day of treatment, and increased stress for staff-- just like it is on me at the moment. So the use of an oncology information system to its full functionality provided us with such intricate data

that statistically significant conclusions were able to be reached. We have over 6,500 occasions of service a year to analyze. Timestamps within the OMS provide granularity that can be mined to provide a thorough picture of the treatment time journey for a patient through the cancer care center. So let's have a look at it. This is a report that we've developed.

You can see that the patient's booked in for a one and a half hour appointment for weekly paclitaxel. They were due in at 2:00 pm. They arrived at 10 to 2:00 and then came in at 12 past 2:00. The first observation by the nurse was attended at 18 minutes past 2:00. And they finally checked out at 4:10. So you can see, the patient had their first drug at 2:10 but then didn't have the next drug until 2:37.
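A minimal sketch of how intervals like these can be mined from OMS timestamps follows; the event names and log format are illustrative, not the actual MOSAIQ fields, though the times mirror the example just given.

```python
from datetime import datetime

# Hypothetical event log for one occasion of service, mirroring the example.
events = {
    "scheduled":   "14:00",
    "arrived":     "13:50",
    "checked_in":  "14:12",
    "first_obs":   "14:18",
    "first_drug":  "14:10",
    "second_drug": "14:37",
    "checked_out": "16:10",
}
times = {k: datetime.strptime(v, "%H:%M") for k, v in events.items()}

def minutes_between(start: str, end: str) -> float:
    """Elapsed minutes between two logged events."""
    return (times[end] - times[start]).total_seconds() / 60

# Bottleneck candidates: the gaps the talk highlights.
print("arrival to first observation:",
      minutes_between("arrived", "first_obs"), "min")
print("gap between first and second drug:",
      minutes_between("first_drug", "second_drug"), "min")
print("overrun vs the 90-minute booking:",
      minutes_between("scheduled", "checked_out") - 90, "min")
```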

So there's quite a delay between the [inaudible] and starting. I mean, there's cannulation and observations to be done there. But here is an area that we can look to start to improve upon and, hopefully, measure very rapidly, if we do make an adjustment, whether it has an effect. So you can just get a good overall picture of when the patient has their final treatment, when the treatment finishes, and then when they are kicked out

the door at 10 past 4:00. So it's just a really good overall picture, with a lot of granularity there to start to analyze. So analysis of the data collected identified a discrepancy between current appointment time lengths, again referencing eviQ suggested times, and real appointment timelines. On average, the scheduled appointment times were out by about 43 minutes per care plan.

This becomes a significant issue if you're seeing 25 to 30 patients in a day. All this data that we're able to gain will personalize scheduled patient treatment times, aiming to improve patient satisfaction, their treatment experience, patient safety, and staff satisfaction. Additionally, we will be able to identify the major bottlenecks in the patient journey, implement quality initiatives, improve inefficiencies, and measure

the effect of the initiatives with real-time data. So in conclusion, all these projects need not be limited to the localized setting. The same data can be used to contribute to much larger research projects, should we share the same reports with Liverpool or Coffs, if we're all using the system in the same way. And we can also benchmark. So, on average, for us to be 43 minutes over time,

is that normal, or are Liverpool and Campbelltown performing much better and actually on time? The beauty of having the data collected at a localized level enables us to identify patterns and trends rapidly in order to generate and test new hypotheses. Through a retrospective analysis of real-time data in a rapid learning model of care, we've been able to ensure a focus on continuous innovation, quality improvement, and patient safety within our service.

Thank you, Bryan. We've got time for any questions anyone's got. So I guess the good thing about that is that you've got real-life, real-time experiences, as opposed to some of the guidelines that we've got. Are you in a position to sort of ask eviQ to change the timings of some of those protocols yet? Or is a bit more data needed? No, I think we've got enough data.

I guess it would be nice for eviQ to have data from more than solely one unit, perhaps two units or something, so that we could start to change those things. But it is definitely out. If you look at an eviQ care plan and how much time they want to assign to it, it's very much: the infusion is going to take exactly one hour, and the other stuff is only going to take half an hour. And sure, the infusion is going to take probably

an hour and five minutes instead of an hour. And then the antiemetics take an extra two minutes. And you start adding them up, and it starts to be a big disparity. So yeah, I think we've got the data. We're now going to change all of our appointment time lengths to reflect that. And then we're going to measure the effect of that. Yep.

Question in the front? [inaudible] Badu from St George Hospital. That was very interesting. Is the software adaptable to a tablet format for nurses to keep by the bedside? Yeah, so that's the idea. The software is mouse-based. Our idea-- we've just had a proof of concept done with a mobile device for patient-reported outcomes.

But transferring it to clinicians like nurses, for them to be able to have an app on the front end of MOSAIQ to actually enter the data in-- and then it dumps across into [inaudible] into the system-- it will make it a lot easier to enter the data. So yeah, far more usable, I guess. All the systems were developed to be used at a desk, and we're not using them at a desk anymore.

We're using them at the patient bedside. So we do have that ability, and that's what we're looking to do in the next couple of years: really try and make it a lot easier for the clinicians to use. The other thing I found very interesting is your economic analysis tool. I think we tend to underestimate human capital. That integrates that much better.

Is that fully integrated into your software? No, you have to pull that out. The easy part is that you're able to pull it out rapidly. Like, I can sit and get that same data that would've probably taken me weeks to get-- I can do that in an afternoon now. So we can look at all those patient records in an afternoon, meaning a nurse could then have a look and say, OK, here are our complication rates,

and this is how much it costs. So it just enables you to do it a lot quicker, and the turnover is a lot quicker. Thanks, Bryan. Thanks. So I think we've heard five excellent talks that cover a fair spectrum of the use of data and technology, from individual patient care, to ways of improving teamwork and efficiencies in centers, and then

bringing services out to populations and communities. So I'd like to thank all of the five speakers for their excellent talks and keeping to time. I'm sure they'll be open to individual questions over afternoon tea. Don't forget you've got your evaluation forms to fill out before the end of the day. And we're back in half an hour for the great debate. So thanks for your attention, and thank the speakers again.
