>> coordinator: welcome and thank you for standing by. i'd like to inform all parties that your lines have been placed in a listen-only mode for the duration of today's conference. today's conference is also being recorded. if you have any objections you may disconnect at this time. i would now like to turn the meeting over to ms. diane st. germain. ma'am, you may begin.
>> diane st. germain: thank you.
hello, i'm diane st. germain, a nurse consultant in the division of cancer prevention at the national cancer institute. on behalf of the nci symptom management and quality of life steering committee and the international society for quality of life research, i'd like to welcome you to today's webinar, symptom management concept design. before we begin i'd like to go over a few housekeeping details with you. to see the slides you will need to have microsoft's live meeting installed on your computer. if you have not done this yet, please refer to the confirmation email or calendar invite for the links to download. live meeting will work best if you close all other applications. you will hear the audio part of the presentation by telephone at the number listed in the webinar invitation; you cannot hear it through your computer. if you need technical assistance please dial star zero. the webinar is being recorded, and all lines are muted except for the speakers. the presentations and the slides will be posted and archived on several nci and isoqol websites. information will be sent once they are posted, which will be approximately a month from today's event.
you may use the question box on your screen to ask questions. you may type your questions at any time during the presentation and we will read them aloud during the q and a period at the end of today's presentation. you may open the q and a box by clicking on the q and a option in the top navigation on your screen and dragging it to the right-hand side of the screen. the objectives for today's webinar include: to identify the required elements included in a symptom management concept, to identify key statistical considerations for a symptom management concept, and lastly to describe issues to consider during concept development from a community physician's perspective. each speaker will introduce him or herself prior to their presentation. i will now turn it over to our first speaker, dr. michael fisch.
>> dr. michael fisch: thank you diane. good afternoon.
my name is dr. michael fisch. i'm the chair of the department of general oncology at md anderson cancer center, and i'm co-chair of the nci symptom management and quality of life steering committee. my job, to get us started, is to take you through an overview of this concept design. i'd like to start by providing some overall context by going over the organizational structure of the nci community oncology research program, called ncorp. so in the yellow box in the middle you'll see that there are two basic components to ncorp: the community sites and the research bases. the community sites are involved in facilitating enrollment to cancer prevention and control trials and working closely with the research bases, which are focused on the design and conduct of those trials as well as the data management, biostatistics, and dissemination pieces. also involved are several divisions of the national cancer institute, including the division of cancer prevention, the division of cancer control and population sciences, and the division of cancer treatment and diagnosis, and also one center, the center to reduce cancer health disparities. closely linked is the national clinical trials network, where the treatment trials are conducted, and you can see the cirb and (unintelligible) are involved as well. this program is intended to include extramural investigators from nci-designated cancer centers and other academic centers, including other kinds of research organizations. the concepts that are put forth through the research bases are evaluated and prioritized through the steering committees, and in this case there is a symptom management and quality of life steering committee that is in charge of this aspect. these steering committees are coordinated through what we call ccct, or the coordinating center for clinical trials.
the responsibilities of the steering committee are to prioritize these trial concepts, and also to do some other things, including convening state of the science meetings where critical questions are identified and prioritized, developing concepts for certain new trials utilizing task forces when appropriate, and periodically reviewing accrual and unforeseen implementation issues. the major goals of the steering committee are to evaluate and prioritize the trials - that's number one - but in addition, to provide expertise for the disease-specific steering committees. so for example, there is a breast steering committee, and a gi tumor steering committee, and other steering committees where quality of life expertise is sought and is delivered through the symptom management and quality of life steering committee. another major goal is to increase the availability of biologically plausible and feasible interventions to reduce cancer treatment toxicity and disease-related symptoms. the work of the committee is organized through a constitution, or standard operating procedures. this is something that was developed by the committee in the spring of 2010, although the committee itself first got started in 2007. the logistics of the committee are that there are a chair and co-chair, there is involvement of the different divisions of the nci, there are representatives of the funded research bases, and there are liaisons both for the disease-specific committees and also for the investigational drug steering committee. and again, this whole steering committee process is logistically supported through the ccct. it involves teleconferences for concept reviews; periodically there are face to face meetings as well, and from time to time there are organized clinical trial planning meetings. these are generally meetings organized around specific topics such as fatigue, or chemotherapy-induced neuropathy, or that sort of thing. when the nctn reviews the work of the symptom management and quality of life steering committee - and the other steering committees, for that matter - there are certain criteria as they look through the whole body of trials that were approved.
they're looking to see that approved and conducted trials are feasible, that they're clinically important, and that they're making a scientific contribution. they're looking at the cost and resource commitment involved, and in particular the unique suitability for the nctn. so sometimes there is research that is considered appropriate and strong research, but it may or may not be considered uniquely suitable for this landscape. when investigators are putting together trials in symptom management and quality of life, there are really eight broad, what i call key, choices that investigators make. first, targeting: deciding what they're going after. second, deciding whether the goal is to treat that symptom, or to prevent it, or some combination of the two. third would be deciding about the population involved and whether the case mix of enrolled patients is to be something fairly broad or more narrow; an example of broad would be patients with advanced cancer, and something more narrow might be post-treatment survivors after breast cancer treatment. also, a decision about whether a drug or nondrug intervention is to be used, and among the drug interventions, whether something like a natural product would be used, or a repurposed fda-approved drug, or some other new targeted agent specifically going after the symptom or toxicity in mind. choices need to be made about the use of patient reported outcome measures, among the validated measures that might be available, and other validated or objective tests. the timing of the assessment is an important choice. there is a decision about whether or not to add correlative science elements to the study and whether they would be integral or integrated into the design. and finally, the overall study design, which might range from a feasibility study to a placebo-controlled phase 2 or even a phase 3 design.
so a concept is basically a phase 2 or phase 3 study of at least 100 subjects. again, it's developed through the research bases, but many times the lead investigators are coming from other academic centers, or ideas are coming from ncorp sites, what have you, so collaborations outside of these research bases are encouraged. the page limit is ten pages, but it is recommended that these concepts be five to seven pages. things like consent forms and case report forms do not need to be included with the concept; however, copies of all patient reported outcome instruments do need to be reviewed by the steering committee in order to appropriately judge the concept. the rationale involves stating what the condition to be studied is and that it's a common condition of great enough importance. focusing the background rationale on mechanisms that are likely to be relevant is very important, and comments about the feasibility and how likely it is for an intervention to be adopted by patients need to be supported in the rationale - really making the case for why this is an intervention uniquely suited to this particular landscape. the concept involves a literature review that needs to be up to date, and it needs to be highly relevant specifically to the proposed intervention in the proposed population. results from previous studies need to be reported accurately, and the scope of all relevant data should be included, including some data which might not fully support the intervention. logistical details, like matching the references to the text properly, are important so that the steering committee understands and properly reviews the concept. the objectives and the hypotheses are critical. they need to be clearly stated, and all primary and secondary objectives and the hypotheses need to be outlined. these primary and secondary objective statements tend to foreshadow the patient reported outcomes that are going to be used and the degree of change that is going to be considered meaningful. there needs to be fidelity to these objectives in other sections of the concept so that the whole document is internally consistent and clear to the reviewers. so now i will introduce dr. watkins bruner.
>> dr. deborah watkins bruner: good afternoon. my name is deb bruner. i'm a professor at emory and associate director for outcomes research in the winship cancer institute. along with dr. fisch, i co-chaired the symptom management and health related quality of life steering committee for the last six years, and have recently stepped down from that position.
i'm going to be talking about -- i don't seem to have control, diane.
>> diane st. germain: okay, i can advance your slides for you.
>> dr. deborah watkins bruner: okay, next slide please. today i'll be talking about study design and some of the successful approaches we've seen in getting concepts approved through the steering committee. so successful concepts clearly have common elements, and dr. fisch has already begun to describe some of these in terms of consistency. the concept will have a clear and consistent focus beginning with the title, through the background rationale, and the objectives. the measurements or metrics must clearly map to the objectives, and as rational as that sounds, we find a lot of issues in that regard. the intervention has to be clearly explained, including the length of the intervention. and the statistical design must also, like the metrics, map to the objectives, and it must be appropriate for the metrics involved. the evidence to support the study needs pilot data to give us the prevalence of the event, symptom, or toxicity which is the target. now there are two things that need to be stated when we're talking about the metrics of the outcomes. we generally see that investigators do a very good job on the first one you see listed here: we generally have investigators tell us by how much an intervention will be statistically significantly different from the control arm. however, on recent review by the national clinical trials network, what we found is that our symptom management trials have largely been lacking the second step. and the bar is being set higher, so this is an extremely important topic and very timely. not only is it important that investigators tell us the statistical difference on some metric - so, for instance, that a patient reported outcome will have a five point difference between arm one and arm two and that they propose that that will be statistically significantly different - we also have to be told, or understand, how that five point difference will decrease the symptom, event, or toxicity, and by how much. then all of this is needed, obviously, for the sample size, and dr. naughton will speak more to that. other issues that have to be addressed are issues of treatment noncompliance or non-adherence, and previous problems with accrual in related or similar studies. if we recall that the definition of insanity is doing the same thing over and over expecting different results, then we can propose to you that we frequently have an insane world that proposes the same accrual plans that ran into trouble in similar, related studies.
frequently we see that investigators will tell us in the cooperative group that yes, there were accrual problems, they couldn't meet their accrual in a timely way or even had to close the study early, and that the plan is that they will have the same accrual process, sending out the study to their members and maybe emailing them a little more frequently. we propose that that is not enough, especially at the national level; the bar is higher here as well. we expect a significant plan to overcome non-compliance or treatment non-adherence and also accrual problems. the study needs to account for other treatments patients may be taking that will affect the endpoint of interest. blinding is always important, especially when we have patient reported outcomes and subjective endpoints. now for historic controls: we don't usually see this in phase 3 trials, obviously, but it is an issue especially in randomized phase 2s. this has been common in the past but is becoming more difficult to justify with the frequency of changes in treatments. so time lag is a problem when you look at historical controls for comparison to the experimental arm, and there are usually differences in population these days for many reasons. so it's not to say that a historic control arm would never be approved, but again there would be a significantly higher bar and significant rationale required to use that in a randomized phase 2. now, successful concept designs across the phases. we don't do a lot of phase 1 trials in symptom management, but we do some, and i'm not going to spend a lot of time on this. the endpoints would be similar to treatment trials, where we would be looking at toxicity and safety issues, feasibility, and dose finding for the phase 2. but if we do a phase 1, it is incumbent upon the investigators to clearly walk us through how it would lead to a phase 2. i'm going to spend a little more time on phase 2s. those of us investigators on the phone who have been at this a long time frequently feel that we know the landscape and have been doing this and have great expertise. the issue with that is that, based on the nci portfolio review, we can say the landscape is changing even for those of us who have been at this a long time and are experienced. some of the ways it's changing involve resource use for phase 2s. it is extremely important that every single endpoint and every single metric on a phase 2 and a phase 3 be justified. in days gone by we used to collect a great deal of data, and sometimes phase 2s were treated almost like a phase 3 in the amount of data that was collected. given limited resources, we can tell you that that cannot be the case moving forward. phase 2s must be well justified in every single endpoint, every single metric, and every timing of the endpoint. so gathering data over 15 time points but only using the change score between baseline and one year does not fly anymore in review; you have to walk us through why all those other time points are necessary and statistically how they would be used. not only that, in a phase 2 it has to be clearly defined how each one of those data points will help in a go/no-go decision for the planned phase 3. so to reiterate: the days of collecting lots of data and hoping we will analyze them somewhere down the road, when we have a primary endpoint of, say, pre to post, are difficult to justify these days. each endpoint, each metric, and the statistical design for all must mesh, be consistent, and be justified. now, phase 3s. for phase 3s, again we see a bar that's climbing.
we have in the past done symptom management trials where we have not had good preclinical data on why an intervention would work. we have anecdotal evidence that it makes people feel better or seems to modify certain symptoms or toxicities. but previously we frequently did not have preclinical data on the biological plausibility of why this intervention would work in this setting and in this symptom. that is getting harder to justify as well. we understand the challenges of obtaining preclinical data in the symptom management setting, and yet this is more of a concern at the national cooperative group level. so we may be using other portions of our network, including our cancer centers and the r01 mechanisms, to get some of this biological data, but at the national level biological plausibility is becoming a requirement in talking about study design. we also need to understand the choice of dose and the duration of the intervention. again, pre-pilot data is required for phase 3 studies. the proposed intensity of an intervention is also required when we're talking about things like exercise or yoga. a side effect profile for our intervention is extremely important; toxicity and safety never take a backseat. interventions also may be more beneficial to a subset of patients. we have frequently all been there and understand the issues of trying to obtain ample sample sizes, but we need good rationale for a broader sample, specifically if we see in the rationale that we think the intervention benefits a subset of patients. so when we design a phase 3 it is extremely important, again, to point out the difference between national cooperative group trials and r01s. in r01s, in our own programs of research, our phase 3 trials may lead to future research. but at the national level, phase 3 trials are more and more being looked at not for how they lead to future research but for how they will improve patient outcomes, how national resources are being used, and whether the phase 3 will change clinical practice. again the bar is higher. issues of eligibility: when treating a condition, the population should actually have the condition, and at a level of the condition that would allow us to see a variance in response. i know that, again, it sounds like common sense. but i have to tell you, at least in my own research and my own research group, just by studying a condition we have had the good fortune of seeming to cure it. once we make a good rationale for a study and talk about the prevalence of the condition, we go to study things and then it seems that none of the patients have it anymore - by magic we have cured it. we've had to change the eligibility criteria, lower the level of the condition to get patients to accrue, and then we're in trouble with variance in response. we find that this has happened mostly because we have based the prevalence of the condition on literature reviews. it's becoming more incumbent upon national cooperative group trials to have preliminary data not based on the literature or aggregates of multiple studies but based on prior work within our own cooperatives with the patients to which we have access. and that gives us a more realistic rationale for the condition and the prevalence and the level of the condition in the patients to which we have access. again, similar to what we're saying with the condition: if we are preventing a condition, then it is clear that the population should not have the condition if we're trying to do prevention studies. and again, we just talked about limiting the population to those who would likely have benefit.
so we have an issue again with accruals: we frequently see heterogeneous populations used to boost accruals. however, if the benefit is really in a homogeneous, smaller population, we need to attend to that in the rationale. in choice of endpoints, again we want endpoints with high prevalence in the populations to which we have access, high severity, or that are highly important to patients' wellbeing. we have to clearly define the endpoints and how they will be measured, and we need to choose the best instruments for measuring the endpoints. this isn't always as easy as it seems, and all of us on the phone are probably quite aware of that. we look at whether all the questions map to the objectives and to the condition the population is supposed to have, and whether the measurements can realistically be standardized across sites - for instance, neurocognitive testing might require credentialing. we also clearly look at the timing of the endpoints. again, back to the point i made earlier, the days where we would have 15 endpoints but a statistical design that did not include all of those endpoints are gone. we must have a statistical design that uses every measured endpoint and makes it relevant to the decision making, either in clinical trial guidelines or future guidelines for patients, or for moving on to the next study. we need to take into account the natural history of the condition and the mechanism of action, and we clearly need to realistically account for potential loss to follow up. further endpoint considerations include making sure that the timing of study endpoints is clear. usually we like to see a baseline in relation to diagnosis or treatment, and then the timing of assessments at follow-up should be in relation to a fixed point. patient reported outcomes - although we're probably speaking to the choir on the phone, and we all believe in them - may not always be the best, or at least the only, relevant outcome. for instance, in pain it may be a dual outcome, where you're looking at a patient reported outcome of pain with a co-primary that looks at whether the pain medication on the intervention has increased, decreased, or remained stable. there are significant problems with the ctc as a primary endpoint, although i think we have mostly gotten away from this issue. the nci does not encourage this: it is a good measurement of adverse events but not a good primary endpoint, because it is not sensitive, has not been validated, and also has double-barreled questions, meaning there are multiple components to one question.
we need to explain the use of all data. i've said this over and over, and obviously i'm saying it because it's one of the common issues we see in trials that come forward. all measures must be included in the analysis plan. the measurement tools themselves need to be explained and well defined. they need to be validated, and validated in a similar population. so that raises issues that may need pre-pilot work, because we still occasionally see metrics coming forward that were not validated in the population of interest. and we need to ensure that the primary endpoint maps well to the metric. so for instance, if we are proposing to look at chemotherapy-induced peripheral neuropathy and we propose that the primary endpoint is pain, but then the metric is a mixture of things like pain, numbness, tingling, quality of life, and social wellbeing, we would find that a problem on review. the endpoints must clearly map to each objective. and my final slide talks about this mapping. years ago when i was in doctoral school we actually had to go through actual visuals like this to map our conceptual framework to our objectives, to our metrics, and to our analysis plan. what we find is that investigators who are not doing this, either mentally or actually on paper, end up with a disconnect between these elements. so to simplify this, i'm going back to how important this mapping is, using the example of pain. here we're just talking about mapping the objective to the metrics. if we have an appropriate metric for pain on the left, we may include things like frequency, severity, duration, intensity. we may look at it as a co-primary with or without the stabilization of pain medication. but if we say that the primary objective is pain, but then include a metric that may be more appropriate for secondary endpoints - one that includes all the other things you see listed - that would give us pause at the review. and much of this can be discussed further in the statistical analysis section. and i will turn this over to dr. naughton for that.
thank you.
>> dr. michelle naughton: hi, this is michelle naughton. i'm a professor at wake forest university. i am one of the co-vice chairs for the health outcomes committee for the alliance, and i hold a similar role in the wake forest comprehensive cancer center research base in winston-salem. i am joined here by my colleague dr. dodd, who is in the division of cancer prevention. say hello, dr. dodd.
>> dr. kevin dodd: hi everybody.
>> dr. michelle naughton: dr. dodd and i are going to take you through some of the statistical aspects of symptom management and quality of life concept proposals. i'll run you through the slides, and dr. dodd and i will both answer your questions at the end. at the concept stage it's important to realize that the purpose is to describe the statistical methods to be used to address the primary aims. and basically, in our view, the statistical section is where everything comes together - all the things that dr. fisch and dr. bruner have talked about before all come together here. hallmarks of a successful concept are really well defined and justified specific aims; they're measurable and, very importantly, they have clinical relevance. you have a study design and measures that are appropriate to those aims. it's feasible to conduct the study in a community setting, and this is very important. your power calculations are included, and they're accurate and aligned with the primary aims. you have statistical analyses for your primary aims detailed in the concept, and you also have some discussion of the data challenges that you might encounter in your study and with the data you've proposed to collect. all successful concepts and proposals really are the result of a well-defined study team. you may have teams of clinical investigators, symptom management and quality of life experts, and so on, but statisticians in particular really should be included right at the very beginning of the concept development phase. unfortunately, what happens typically is that statisticians might be brought on midstream or toward the end of the development of the concept. you really should avail yourself of the statistical expertise early on, right at the beginning of concept development. this particular slide is meant to be kind of a checklist or a guideline for you to use in writing the statistical section. we're not going to go through every single bullet point on here, because that will follow in the subsequent slides.
but you can use this more as a checklist to go back and refer to when you're putting together your concepts, for what to include in the statistical section. so let's go on and look at some helpful hints for your specific aims. some of the more successful concepts that we see are those that limit the study to one to three primary aims that are focused, justified, and, importantly, measurable. you want to make sure that you power your study for all of your primary aims - not just one, not just two, but all of your primary aims. similarly, you want to have well focused and a limited number, perhaps one to three, secondary aims. it's also important to indicate why your chosen secondary aims are secondary versus primary aims, and also why they're important to include at all. and from a statistical point of view it's very important to make sure that, unless these secondary aims are exploratory, you have some study power left to test them after adjusting for multiple comparisons. in terms of your data analysis, you are going to need an analysis plan in the concept proposal for each aim; for your secondary aims, however, you only need to describe your general analysis plans, which might be covered in maybe one to two paragraphs. the primary aims are the focus in the concept proposal. the statistical reviewers are going to look for certain things when they're reviewing these sections, and one of the primary focuses would be to see if investigators have an understanding of the complexity of the study data that they're going to be collecting. they're going to be looking for such words or phrases as how you're going to handle missing data, multiplicity, blinding, data transformations, and so on. and just a couple of words about data collection. most likely, how you're going to collect the data and the metrics to be used will be covered in the methods section. but what we need to know for the statistical section is the feasibility of collecting the data using the methodology you proposed. and the reason we need that is that we have to have estimates of how many completed surveys, samples, whatever it is, you are expecting to complete at each time point.
so we need to know what the attrition might be across the data points. this would influence your analysis plan, but it also has relevance for your sample size in terms of what your expected attrition may be during the course of the study.
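as a rough illustration of that arithmetic, here is a minimal sketch in python; the per-arm target and the attrition rate are hypothetical numbers, not figures from the webinar:

```python
# hypothetical sketch: inflate planned enrollment for expected attrition so the
# completed-case sample still meets the number required by the power calculation.
analyzable_per_arm = 100    # n per arm needed at the primary endpoint (assumed)
attrition_rate = 0.20       # expected dropout by that time point (assumed)

enroll_per_arm = analyzable_per_arm / (1 - attrition_rate)
print(round(enroll_per_arm))  # -> 125 patients per arm to be enrolled
```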
dr. dodd and his colleagues at dcp and (ctav) put together a list of some of the common problems they see in the concepts they review. some of the most common are power calculations that are missing or not included, or that are included but inaccurate. you might have a power calculation that is included for only one versus all of your primary aims, and it's very important to make sure that you have sufficient power for all of your primary aims. another problem might be that you list multiple secondary aims but you really don't discuss whether you have study power that's sufficient to study those aims, particularly after adjustment for multiple comparisons. you might also see kind of a laundry list of secondary aims included in a concept, but with very limited justification. and as dr. bruner just mentioned, you really now need to make sure that you are not proposing to collect any kind of data or have any type of aim unless you can tie it back to clinical relevance and good study justification, and have analysis plans for what you are going to be doing and how you're going to be analyzing those data. other problems could be things related to data analytic techniques that are not appropriate, or not really optimal, for the specific aims and data to be collected. you may also have no discussion at all of any of the potential data challenges and how they will be handled. in any study you're going to have potential data challenges, and so it is important to indicate how those will be handled during analysis. and one of the pet peeves among the statisticians is when investigators take a continuous measure and turn it into a categorical variable, which really just truncates the range and eliminates data or cases. an example of this might be if you have a single item measure that asks the patient to rate their overall quality of life on a scale from zero to ten,
and you just decide for analysis purposes that you're only going to look at people who score, say, a four and under or a five and under, and so on, instead of using the entire continuous measure - so just something to keep in mind. the next kind of common problem is looking at what's a clinically meaningful difference, and this is really a nemesis for a lot of us who do symptom management and other types of clinical trials. oftentimes what is seen in a particular trial is that investigators will put in that they expect to see a half of a standard deviation unit change and that that will be clinically significant. and the issue is, why? it really isn't sufficient to just put the half of a standard deviation in the analysis plan with no justification. for example, did a prior study use this unit change based on some empirical work? did you have some pilot data from which you got this half of a standard deviation? now, there are certainly many patient reported outcome measures that have been calibrated to a half a standard deviation. if that is the case, then you need to cite references and other support for that. it could be the case, too, that you might have a secondary aim where you're actually looking to explore what the clinical significance might be, and you might be collecting data to calibrate a measure with some clinical outcome in the study you proposed. indicate that in your study and also how you plan to analyze those data. but more often than not, many of us don't actually know what a clinically meaningful difference might be for certain types of outcomes. and if you don't know, be honest and provide a rationale for the units of change that you have chosen, but do not leave it out. you need some type of justification for this.
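to make the point above concrete, here is a minimal sketch, with hypothetical numbers, of what "half a standard deviation" implies in instrument units and in approximate per-arm sample size; the scale sd, alpha, and power values are assumptions, and the normal-approximation formula is only one standard way to do this calculation:

```python
# hypothetical sketch: translate an effect size of 0.5 sd into instrument points
# and an approximate per-arm sample size (two-sided alpha, normal approximation).
from scipy.stats import norm

sd_points = 10.0             # assumed sd of the pro scale in the target population
effect_size = 0.5            # proposed effect in sd units
delta_points = effect_size * sd_points   # 5 points -- this is the number that
                                         # still needs a clinical justification

alpha, power = 0.05, 0.80
z_a, z_b = norm.ppf(1 - alpha / 2), norm.ppf(power)
n_per_arm = 2 * ((z_a + z_b) / effect_size) ** 2
print(delta_points, round(n_per_arm))    # -> 5.0 points, about 63 per arm
```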
and just a note: in general, err on the side of explaining more about the choice of the primary and secondary aims, the study, the overall rationale, and the planned analyses. you need to think about this in terms of who your audience might be. you only have ten pages for the concepts, but we believe that these types of points are very important and that you should spend some time on them - that's also been mentioned by dr. fisch and dr. bruner earlier. you need to make your case for your specific aims: you know, you need to start out with what is known in the intervention area and what your study would do to further science and impact clinical care, and then funnel this down to perhaps one to three very specific and targeted primary aims. in general, know your audience. statistical reviewers, like many of us who review studies, often do not have a thorough understanding of the patient population, or the intervention, or the disease, or the treatment that's being proposed. and just as if you were writing an r01 or an r21 or any other type of grant, it's your job to be clear in your study background and justification so that everyone who is reviewing the study can follow your rationale and your study design. so in general, all reviewers really want to be convinced that you have relevant and testable study aims, your sample size and power estimates are correct, it's feasible to recruit and retain your participants and, very importantly, to collect data in a community setting, you have well thought out patient safety and confidentiality, your data analytic techniques are optimal and appropriate, and you have an understanding of the strengths and weaknesses of the study design and have outlined how the major weaknesses will be addressed. so our major take home message to you is, number one, find a statistician to assist you - do it early, do it often. and number two, go back to number one if you haven't [laughter] gotten a statistician yet. i'm now going to turn you over to dr. judy hopkins, who is going to give us some insight and perspective from the community sites.
>> dr. judy hopkins: good afternoon. my name is judy hopkins, and jeff kirshner and i are the community oncologists on the nci symptom management and quality of life steering committee. i am in practice in kernersville, north carolina with novant health oncology specialists, and i am the co-principal investigator of the southeast cancer control consortium. it is critically important to have an experienced community oncology researcher on your study team from the beginning. they can help you determine if the equipment or procedures necessary for your clinical trial are available in the community setting, and they can help you avoid financial hardship for the patient and the practice. innovative studies will often require tailor-made recruitment strategies, instructional webinars, or face to face training of study coordinators, and these need to be planned and budgeted for. remember, most community oncology practices are only open between 8:00 and 5:00, monday through friday. studies that require minimal office staff and oncologist involvement, have study visits aligned with standard treatment guidelines, and have short follow-up accrue the best and are the easiest to complete. timely protocol completion occurs when the trial answers important clinical questions or solves vexing quality of life issues and captures the interest of both the oncologist and the patient. jeff will next describe examples of successful and challenging protocols in the community setting.
>> jeff kirshner: thank you judy. i'm the director of research at hematology oncology associates of central new york, in syracuse, new york.
i'm the principal investigator for our ccop, and as judy mentioned, i'm one of the two community representatives for this committee. i'm going to mention two of many protocols, one that was successful and one that was unsuccessful. the first one, which was very successful, was urcc 07079; a disclosure is that i was the study chair and judy was a co-investigator. it was designed to see if we could prevent (unintelligible)-induced bone pain. it was a randomized control study of (unintelligible), a very frequent problem that patients experience. it's clinically important, and it's of great interest to oncologists, nurses, patients, and families. it's easy to identify and randomize patients with really minimal physician involvement; the nurses and the research associates do most of the work. there was straightforward randomization, the treatment was relatively easy to do, there was relatively short follow-up, and we accrued over 500 patients in one year. one of the reasons this protocol was successful, as prior speakers mentioned, is that we had both community oncologists and statisticians involved from the get-go, with help from academicians, people who had phds and knew the scientific process even better than we community oncologists. unfortunately, there have been protocols that have had to close early and never came to completion because it turned out they were really not practical to do. one such example is on the next slide: this is rtog 0123. radiation-induced pulmonary toxicities are really an important clinical problem, and when this study was designed it was felt that it was doable. it was a phase 2 randomized trial with relatively low numbers of patients required, studying (unintelligible) in patients receiving radiation with or without chemo for limited small cell or stage one non-small cell lung cancer. as you can see, there was a lack of prospective feasibility assessments, eligibility was more difficult than was thought ahead of time, and the patients and physicians weren't as interested as the investigators thought. but the main reason for its failure was that there was really a lot of work required by both the investigators and staff. and i'm not trying to suggest that we not do protocols because of the excessive work required, because there are some very important scientific questions that need to be answered, but perhaps investigators need to be innovative and get outside funding or other help, because community oncologists have limited time to devote to this type of research. and anything that's really complicated is going to be difficult to carry out.
so that's basically what i was going to say about the two sample protocols, and i'm going to turn it back over to, i believe, diane.
>> dr. michael fisch: thank you very much. this is mike fisch; i'll go ahead and take it from here to begin the question and answer session. of course, everybody's audio is muted, so people are not calling in. but the way you can submit questions is by going to that q and a box on the right-hand side of your screen - or maybe, in my case, it's on the left-hand side of the screen - but find that q and a box, expand it, and type in your question, and then we can see the questions and go from there. so i'm going to start with a question and address it to dr. bruner, and that'll give dr. bruner a chance to come off mute there.
>> dr. deborah watkins bruner: i'm here.
>> dr. michael fisch: and the question is this: it's very frequent that we want to measure an outcome for which an ideal instrument doesn't really exist - for example, a pro which has perhaps been well validated and has an established minimum important difference, but for which there are limited data in the precise population that you want to study. and the comment also describes the fact that sometimes the timing of the studies or the exact nature of the patient population is opportunistic and driven by the disease committees, and the quality of life investigators are trying to assign that pro. any advice about that?
>> dr. deborah watkins bruner: well, you know, we've done a couple of different things. one is that some of the cooperatives have used the pros as secondary endpoints and then have actually done a validation study within that trial, which works for larger phase 3 trials when you have enough of a sample size to do the validation. we've also looked at this very rationally. we do have a trial open currently in women who are getting pelvic radiation therapy, and we're using a patient reported instrument validated in men for bowel and bladder issues. when the symptom management committee reviewed this, it was not validated in women. but we looked carefully at the fact that the symptoms mapped really well to the objectives, and we had no plausible reason to think that the questions would have been different in men or women for some of the bladder and bowel issues they were looking at. so it's one of two ways: there can be concurrent validation if the sample size is large enough, or we can have really rational discussions about whether the endpoints really map well, and then, you know, it may be okay.
>> dr. michael fisch: thank you, dr. bruner. so another question has come up, and this one's for dr. kirshner. dr. kirshner, can you give an example of what caused the excess work for oncologists in that second example that you described in the webinar?
>> jeff kirshner: well, i wasn't participating in that study per se, so i can't really address that. but examples in other studies have been really complicated consent forms, or excessively screening patients that may or may not be eligible and then finding out that they weren't eligible. some protocols require five to ten forms to be filled out at each visit, and sometimes the visits are excessive - and as a member of the committee we've pointed that out to investigators. sometimes blood draws are excessive. so there's a lot of factors that go into making a protocol unsuccessful, and in addition, long follow-up; long follow-up is often a deterrent. but again, it's just one factor, and i don't think any of us turned down a really clinically important study based on any one factor. but when the factors combine to make it not feasible, sometimes we don't even activate the protocol.
>> dr. michael fisch: thank you.
>> melissa glim: so that --
>> dr. michael fisch: -- so the next question i have is - oh, does somebody else want to comment on that?
>> melissa glim: yes, this is melissa; i'm helping to moderate this event. i just wanted to repeat the instructions for asking questions. the q and a box - you have to open it first before you can drag it to one side or the other. it's on the top navigation bar and it says q and a. so if you click on that you can type your question in, and when you type your question in, click on the word ask rather than the hand, because that will publish your question for us.
>> dr. michael fisch: that's helpful. so the next question i have is for dr. naughton. you mentioned in your part about the statistics the idea of calibrating an outcome to one half standard deviation.
can you explain a bit more about what you mean here and provide an example of what calibration means in this context?
>> dr. michelle naughton: i may have kevin answer this question, if he will.
>> dr. michael fisch: sure. kevin, do you want to join in and explain this notion of calibration to the half standard deviation change, what that refers to?
>> dr. kevin dodd: well, mainly the idea is that even though it's possible to do a power calculation using nothing but what we call an effect size - and that's just a fraction of the standard deviation of the outcome that you're looking at - and although a lot of the literature that cohen puts out there says that half a standard deviation is usually clinically meaningful, i think the important thing is that we like to see, as a (unintelligible), what that really turns out to be. so, if possible, we'd like to see some sort of literature review or some sort of other study that says this effect size of half a standard deviation is this many units. it's not enough to say it had an effect of half a standard deviation, or an effect size of a half, but actually to say, from other studies or something else, we think that this means this much. so having a mapping between, you know, some actual differences in the actual metric that you're using, rather than just sort of an overall effect size, is really, i think, the important thing.
>> dr. michael fisch: so are you saying that it might be something like: a five point change in this scale is roughly equivalent to a one level change in performance status?
>> dr. kevin dodd: yes, for example, something like that, where you actually kind of hang it on a real life change in the metric as opposed to, you know, just saying we don't know what the change is going to be, we don't know what the actual change is in the actual metric, but we know that if there is a change that's big enough we'll find it in our power calculations. that leaves us wanting a little more.
>> dr. michael fisch: okay.
>> dr. kevin dodd: do you see the distinction there?
>> dr. michael fisch: kevin, while i have you, let me ask you another question.
dr. naughton's slides referred to a couple of terms that are commonly seen in protocols and statistical reviews. one is the word multicollinearity and the other is clustering. i wonder if you could just give a non-statistician's brief description of what these things really mean and how investigators should recognize when they need to think about them?
>> dr. kevin dodd: okay, was it multicollinearity that was the thing that you flagged, or multi- oh gosh, not multi...
>> dr. michael fisch: multicollinearity and clustering - two different words, statistical jargon.
>> dr. kevin dodd: okay. well, multicollinearity just means that if you have several different outcomes or things that you want to control for - if you're going to say, i'm going to look at quality of life but i want to adjust for all these different things - and all the things you want to adjust for are related to one another, it's really hard to tease apart what's important. so especially if you have, for example, two quality of life measures that are highly correlated and you put them in the same model together, it's going to be hard to figure out which one is driving the change in the metric, the outcome. so that's basically it - it's something that is common if you're going to try to adjust for, or test the independent effects of, lots of variables. if they're related to one another, that causes some statistical problems, and so that needs to be more fleshed out. and the clustering of responses generally refers to situations where you're not going to randomize individuals, but instead you're going to randomize at, say, the clinic level, where everyone at a given clinic might be getting the same sort of treatment. then you have to take that into account in your power calculations, because the thing that drives your power is not how many people you get but, in this case, how many clinics you get; because the people in a clinic are all getting the exact same treatment, you're really looking at a clinic-level observational unit instead of a person-level one.
and there are ways to do that. there are ways to do power analysis for what they call cluster randomized trials and things like that. so there's stuff out there to do that.
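a minimal sketch of the design-effect arithmetic dr. dodd is describing, assuming an illustrative cluster size and intraclass correlation (both hypothetical):

```python
# hypothetical sketch: inflate an individually-randomized sample size for a
# cluster-randomized design using the usual design effect 1 + (m - 1) * icc.
n_individual = 200         # total n from a standard two-arm power calculation (assumed)
patients_per_clinic = 20   # average cluster size m (assumed)
icc = 0.05                 # intraclass correlation within clinics (assumed)

design_effect = 1 + (patients_per_clinic - 1) * icc   # 1.95
n_cluster_design = n_individual * design_effect       # 390 patients
n_clinics = n_cluster_design / patients_per_clinic    # about 20 clinics
print(design_effect, round(n_cluster_design), round(n_clinics))
```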
there is one more thing i do want to mention, a keyword that we've been wishing we'd see more of, and that's discussion of multiplicity. that's when you have multiple outcomes. it would be great if there were some attention paid to the idea that if you've got three primary outcomes, or two primary outcomes, you don't power your study to test just one of them at an alpha, or a p level, of .05. at the very least you need to do something like say, okay, i want to spend some of my power, or some of my alpha level, on this outcome and some on this other outcome. so doing things like a bonferroni adjustment, for example, with one or two things goes a long way toward, you know, sort of putting our fears at ease that you're actually trying to power the study to answer all of your aims at once, which came up several times in the different presentations. so that's another important word to kind of ask your statistician about - the one that you've gone and gotten, right? you've all got one by now, right? so...
>> dr. michael fisch: well, thank you. i think that's an interesting way of thinking about it, almost like you have a budget of money and you're having to sort of discount, or sort of spend your money a little bit differently, if you actually intend to equally weigh two primary outcomes.
>> dr. kevin dodd: yes, exactly.
>> dr. michael fisch: it's a great way of explaining it.
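as a rough sketch of that "alpha budget" idea, here is what a bonferroni split does to the per-endpoint alpha and to the approximate per-arm sample size; the effect size, power, and overall alpha are hypothetical, and the same normal-approximation formula from the earlier sketch is used:

```python
# hypothetical sketch: splitting the overall alpha across co-primary endpoints
# (bonferroni) and seeing how the per-arm sample size grows.
from scipy.stats import norm

alpha_total, power, effect_size = 0.05, 0.80, 0.5
for k in (1, 2, 3):                          # number of co-primary endpoints
    alpha_each = alpha_total / k             # bonferroni share for each endpoint
    z_a, z_b = norm.ppf(1 - alpha_each / 2), norm.ppf(power)
    n_per_arm = 2 * ((z_a + z_b) / effect_size) ** 2
    print(k, round(alpha_each, 4), round(n_per_arm))   # n rises as alpha is split
```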
so here's another question. this one is for judith. so judith, can you explain how you participate in steering committee concept reviews? what role do you often play, or how do you come into play in the review process as a community oncologist?
>> dr. judy hopkins: well, i have the option of reviewing the protocols, primarily, which comes about as a rotation with other members of the committee. but then i also review the protocols with an eye toward whether or not they would work in my center. and as jeff mentioned in his talk earlier, i look at whether or not it can easily be done in the clinic setting: is it going to interrupt office flow, is it going to require a lot of time from my staff, or can it all be done by my clinical coordinator? i look at whether or not the patients are going to have to come back for multiple visits that are not coordinated with their standard of care. but the key thing that we look at is whether or not it is going to improve the quality of our patients' symptom management.
>> dr. michael fisch: great. so another sort of logistical question, and i'll address this to dr. bruner. does the study team or lead investigators actually talk to the review committee at any point in the process?
>> dr. deborah watkins bruner: yes, there is a few minutes, five to ten minutes, prior to the review where the study team may come on. what happens is that the symptom management review committee obviously reviews the concept in advance, and at least one day prior sends questions to the investigators that have arisen from the review. and it's just a few questions - the high priority questions, the first questions that come up most commonly. they are sent to the investigator and the investigative team so they have at least 24 hours to prepare a response. they're asked on the phone to the symptom management committee; the committee focuses on those high priority questions, gives them a few minutes to answer, and then they are recused as the rest of the review continues.
>> dr. michael fisch: that's very helpful. so i have another question that i think i will address to the whole group, maybe dr. bruner to start with, but maybe others will have a point of view.
this has to do with the importance emphasized during the webinar on discussing physiologic mechanisms. the question says some of the symptom management interventions may work by changing, quote, cognitive mechanisms - how individuals, say, think about their symptoms. is this acceptable or appropriate content to include in the concept write-up? and do any of the panel have any advice about that?
>> dr. deborah watkins bruner: i'm not clear on the question. if they're saying that the intervention is proposed not to change a symptom but to change a perception of a symptom, i cannot say, on review of the national portfolio, that that would have been a high priority study with the cooperative groups. it may indeed be an important and rigorous question, something that would be good for an r01. but the national portfolio is looking at actual changes in mechanisms, decrease of prevalence, or minimizing severity of symptoms.
>> dr. michael fisch: yes, the whole question is tricky, but i agree with your response, dr. bruner. in fact it sort of made me think about this instruction that maybe some of us got in medical school when we were prescribing, let's say, a tricyclic agent for depression back in the old days. somebody might explain to the patient that when your mouth starts to get dry, then you'll know that the drug is getting into your system and beginning to help you - you know, a way of framing a side effect. so i can imagine, you know, some sorts of interventions that reframe or otherwise help patients think differently about their lives or their symptoms in a way that could be impactful. and, you know, is that a mechanism, and how would you demonstrate the mechanisms? what you usually see with concepts like that are conceptual models and conceptual underpinnings for the approach. but, you know, how that may be reviewed and prioritized in the system is, i think, hard to say. i think a lot of the mechanisms of symptoms and toxicities tend to be thought about as molecular or biological mechanisms, but that may or may not be the only thing that gets prioritized; it depends. anybody else want to take a stab at that?
>> female speaker: yes, i was just a little bit confused about it. is it to look at, for example, the neurocognitive pathways, changes in pathways, or is it to look at such things as cognitive restructuring, which might be more of a behavioral outcome? my first thought on listening to the question was, were they talking about behavioral change models? but it sounds like they're also looking for physiological changes as well that might occur with those. and i do recall seeing some protocols where we've at least been looking at maybe some predictors of behavior change or other types of physiological measures that might be related to that. but i don't recall any types of studies that i think are addressing what the individual was asking.
>> dr. michael fisch: let me get to another question, and this is about statistical analyses; maybe dr. naughton you can address this also.
>> dr. michelle naughton: okay.
>> dr. michael fisch: are statistical analysis plans required for exploratory objectives or endpoints?
>> dr. michelle naughton: well, it was one thing that was specifically covered in this slide set that you need to have some general analysis plan that would be put into the concept. an exploratory objective or endpoint would be a secondary outcome in the studies, and so you need much more detail for your primary aims but not for your secondary or exploratory endpoints. but you can't ignore them; you need to give some indication of maybe what statistical techniques and methods you might be using to explore those aims or objectives. and i don't know if dr. dodd has some other comments on that?
>> dr. kevin dodd: i'll just add another couple of things. i think, to go along with one of the earlier slides, err on the side of giving more rather than less in your concepts. and it's great to have statistical analysis plans for secondary endpoints and for exploratory aims. i think usually what we're looking for is, say in a phase 2 study, where you are trying to think about how you're going to move forward to a phase 3 - there, if the exploratory endpoints are trying to get at, you know, something that you're going to use, you should give some thought to how you're going to analyze it. give things like a confidence interval around what you think you can find, so that you can say, okay, if i do this with this many patients - it may not necessarily need a formal power analysis, but you can say, i think i'll be able to get my answer to this exploratory aim, or this exploratory measure, to within this much, this many units, you know, within a bound of that. just to give kind of a sense of whether it is going to really help in going forward to the next phase of the trial. i mean, phase 3 trials - all those things are supposed to kind of end up saying, at the end of my phase 3 trial i'm going to know everything there is to know about this particular setup. so i think having lots of exploratory aims in a phase 3 is not really what a phase 3 is for, but exploratory aims in a phase 2 are, i think, more acceptable.
>> dr. michael fisch: i think that's helpful. so some sort of statistical statement provides a sense of the quality of the exploratory look, it sounds like. so here's a sort of follow-up question that i'll maybe try to address. the following question was: the perception of the symptom could be considered self-report, such as pain or cognitive changes; would this not be of value?
and here i think that this question might be getting at our discussion about mechanisms. i don't want to confuse people into thinking that self-report, you know, patient reported outcomes, are not considered valuable. but here might be an example of a mechanisms issue. so let's - and i'm just going to make something up totally out of the blue - but let's say that people observe that patients who are getting metformin for their diabetes or hyperglycemia, and who also had ringing in their ears, had resolution of that symptom: the tinnitus disappeared in the people who were taking metformin. and based on that observational data, somebody wants to propose to actually prescribe metformin to people who are bothered by ringing in the ears, tinnitus. the self-report of ringing in the ears would be valid. but the committee would be highly interested to know what the proposed mechanisms might be, so that, you know, if the trial were negative, it's not a completely valueless trial. and if there is no proposed rationale for why metformin would reduce ringing in the ears, then that trial might have trouble going forward. i think that might be at least a hint about what proposed-mechanisms questions are really getting at. and dr. bruner, what do you think? does that explanation make sense from your experience on the committee?
>> dr. deborah watkins bruner: i agree, mike, i don't really have anything to add to that. i think that example is good.
>> dr. michael fisch: so i have a question for jeff. so jeff, how do you interact with research bases when it comes to designing and thinking through these trials? what role do you have as a community oncologist at that level?
>> jeff kirshner: that's a good question, and i can describe my own experiences with the alliance. can you hear me? yes? anyway, i can describe my own experiences with the alliance and several research bases such as urcc. community oncologists are involved from the onset. we have members on most of the disease committees, such as symptom intervention, and prevention, and other cancer control committees. there's a member from the community and from the ccops on the executive committees and on the concept review committees within the cooperative groups. at least in the alliance, we have a community oncology member as a protocol co-chair of every study that's done. so there's input from the onset in terms of what needs are out there, whether the study can be done promptly, and whether there are barriers that can be overcome. in addition, some of the community members actually write the protocols or get involved in concepts based on their clinical experience. judy and i are examples of that: we've both recognized some problems that our patients were having and developed protocols with the help of the research bases and carried them forward.
>> dr. michael fisch: i think that's such an important point about the role of community oncologists, nurses, and other kinds of providers who are involved in cancer care at different levels. they all have the opportunity not only to interact with the research bases but to provide leadership on protocols, and authorship, and sometimes first authorship of publications, et cetera. so it really is an integrated system of participation. so i think it's about time to wrap up, and we've run out of questions. one of the questions that commonly came through was a question about how to access the slides.
so i'll turn it back over to diane, and diane maybe you can address where people can review the webinar, recommend it to others, or get access to the slide materials, as well as other housekeeping issues?
>> diane st. germain: several nci websites, including the division of cancer prevention. and they'll also be posted on the isoqol, or the international society for quality of life research, website. it'll take --
>> dr. michael fisch: -- diane.
>> diane st. germain: about a month before they're posted. and these slides will be posted along with the audio version of today's webinar. so if you have colleagues that didn't have the opportunity to listen in, you can pass that information on to them. and if you'd like to have this as a reference for yourself in the future, they'll be available.
>> dr. michael fisch: diane, can you back up just for a moment, because --
>> diane st. germain: sure.
>> dr. michael fisch: i think i heard you sort of midstream, as if you came off mute or your sound picked up a bit late. what was the very first portion of what you described there about the slides?
>> diane st. germain: thanks mike. so the slides, along with an audio recording of today's webinar, will be available on a variety of nci websites, including nci's division of cancer prevention. and they will also be on the isoqol website, the international society for quality of life research. it'll take about a month before those are posted, and we will send out a reminder email to you all to let you know when they are indeed posted. and you can share those with other colleagues and refer to them at your convenience. the other thing you'll be receiving is a short survey regarding today's webinar. if you could please take a few moments to complete that, it'll give us an opportunity to plan future webinars, get a sense of what you found helpful, and ideas for content for future webinars. so in closing, i'd like to very much thank the speakers today for their wonderful presentations. i'd also like to thank several people: sue rossi from ccct, yvette ortiz, and also melissa glim and katherine jenkins, who are from the office of communication and education; they have worked very hard and diligently behind the scenes to make all of this happen today. and lastly, dr. worta mccaskill-stevens for her support from the division of cancer prevention. so i will close and thank you, the participants, very much for joining us today and for your participation. thank you.