>>>Dr. Steven Clauser: Okay. We'd like to reconvene and get going with the last moderated discussion session of the day today, where you'll have an opportunity to ask questions of our panel. And we're very fortunate to have leading our moderation session Ellen Gritz. She's a professor and chair of the Department of Behavioral Science and the Olla S. Stribling Distinguished Chair for Cancer Research at the University of Texas M.D. Anderson Cancer Center. She's an established leader in cancer prevention and control research and an internationally known investigator; she sits on numerous advisory boards, and she holds a Ph.D. in psychology from the University of California at San Diego. Okay. Ellen, take it away.

>>>Ellen Gritz: Thanks a lot, Steve, for that lovely introduction. So I just passed by those dancers again. I just imagine us all in our sequin gowns and the gentlemen in tuxedos. We're going to have a rousing 30 minutes here. We had five wonderful papers, very challenging. We heard all about the importance of causal modeling, statistical designs, things I've never dreamt of, all kinds of simulations, very sophisticated. And then the issues about measurement and appropriate designs, and RCTs versus other designs as well. So I know that there's going to be lots of discussion, pushback, encouragement, and facilitation from the audience. You have five, five papers to draw upon. So who is my first person?

>>>FS: My name is (inaud.). I am representing table number 7.
Maybe this was already addressed in the papers, but since we're not sure, we think it's important to highlight (inaud.). We think that the rates (inaud.) should be taken into consideration (inaud.) interact at all levels. We also think that it's very important that (inaud.) we recognize as an important tool for (inaud.) when you look at the other side of our outcome (inaud.) mortality rates also in these populations. So for example, at the (inaud.) level there may be some differences, but at the same (inaud.) levels are going to (inaud.), and we think (inaud.).

>>>Ellen Gritz: So that's another very important point that came up in these papers: that we can go below the skin, so to speak, to the biological level, down to the molecular and the genetic levels. And those may need to be modeled in interactions as well.

>>>FS: (inaud.) from the University of Washington, and I am representing table 17. (inaud.) detailed discussion about data, and how the availability of good-quality data seems to be so key to being able to do good (inaud.). However, the collection of this would clearly be very, very expensive and a lot of us (inaud.) challenging. So the question that we would (inaud.) to consider is perhaps also factoring in whether there may be some other mechanisms or agencies that would be able to do these excellent-quality data collections, which those of us who are doing research could then have access to. And on a related level, our table was very intrigued with the computer simulation as well. However, for those of us who are not engineers and have not had prior experience with computer simulation, we thought it would be very interesting to get some real concrete examples and then potentially even a how-to list so that we could actually get started being able to use this important tool.

>>>Ellen Gritz: So now we've had two sets of questions. Do the panelists have comments?

>>>Dr. Martin Charns: So, around the data question. Table 20
actually had some discussion about that also, and I guess there are two kinds of things. One is an example, well, both are examples. One's an NCI example, but I left my notes on table 20. What was the thing called, with the wiki about the data? About the data? GEM. So one of the issues is, see, I remembered some of it: do people even know of the availability of some data and measures that they could use? And this seemed to be a vehicle for, I'll use the word disseminating, discussing information about data and data availability, so that's one example. And maybe we're lucky in the VA; I see Becky standing up in the back. Becky's done a primary care survey in the VA, and other people use the data from that. It's a way of characterizing our medical centers and primary care clinics. And similarly, we have since 2004 administered an all-employee survey, and so each year we get like 175,000 respondents and measure organizational factors, organizational workgroup and medical center concepts. Now, the good news about that is it's a common measure, similarly for Becky's. It's a common measure across all 140 sites. The bad news about it is, you may not agree with the conceptual model that we used to decide what to measure. And because you're always limited in the number of things you can measure, where you come from with your conceptual model may not fit the data that we've collected over these years. But at least those are a couple of examples of data being available about some of the concepts that we're talking about in terms of these multi-level measures.

>>>Dr. Paul Cleary: Just to cover, I apologize if someone else mentioned the emergence of EMRs, but I think that's going to be a sea change. I also recommend to people: Gary King had an article, I don't know if it was the last issue, but a recent issue of Science, talking about the wealth of information that is now becoming available. And it's not traditional information; it's not the surveys, it's not EMRs. But, like, one example would be that Google detected the influenza outbreak about a month before our surveillance systems did. So if we think creatively about a lot of these issues, there are many, many more opportunities than have ever existed before. And there are challenges that Gary identifies, but it's worth thinking outside the conventional data, you know, the CMS kind of data, which is incredibly valuable. But the amount of information out there is almost limitless.

>>>Ellen Gritz: Any other panelist comments?
Okay, table two.

>>>FS: (inaud.) previous comment about the need for concrete examples. We thought that all the papers were excellent but felt that they would all be strengthened through some concrete examples tied to the different types of issues. So I just wanted to reiterate what was expressed before. The second piece that we talked about, and spent a fair amount of time on, was the issue of efficiency and the issue of looking at what are the different measures, what are the things that we should be employing, and what are the recommendations from the authors of the papers, in terms of their multiple expertise, on what's doable, what's actually something that can't be done in the scope of the work that we're doing in terms of measurement, and what are their recommendations. So we thought all of the papers would be really strengthened and helped by having some real concrete examples about where the field is now and your understandings of the field, where we should be going and (inaud.). And what is your feedback on that?

>>>Ellen Gritz: Feedback? No one wants to comment?

>>>FS: (inaud.) and I represent table 16, and we were all very intrigued by the notion of time as the third dimension, especially when we saw the flowchart that looked at family support being critical at some points, position being critical at other points, and so forth. What we want to know is how in the world you capture those kinds of differences from the logical kind of point of view. Specifically, developmentally, what do we do? How do you know when to talk to organizations, when to talk to individuals, when to talk to the physicians and the providers and so forth? So any light you could shed on that would be much appreciated.

>>>Dr. Jeff Alexander: My answer is probably not going to be entirely to your liking. I think one of the points in my paper is that we should start treating time as an analytic variable, as opposed to something we measure T1 to T2, end of story. And I think there is a lot of information, a lot of work that's already out there, that would help inform the sort of questions you're asking. So for example, in social epidemiology there is a lot of work that's been done that tracks growth trajectories in certain clinical outcomes, in certain behaviors among patients, that would help inform, for example, when an intervention might be applied, to whom it might be applied, and how that intervention might be likely to change in terms of its impact over time. So again, not a direct answer to your question, but I think that there is a body of work out there that would get us closer to answering that question.

>>>Dr. Martin Charns: I'd like to just piggyback. It's a little bit of an answer to the prior question as well as this one. But one interpretation of time is history, which I think we also have to think about. And all too often, I think, when we do studies
that look at organizations or even people, we forget about the fact that there's a history that got them up to the point where we finally got involved to study them. So I'll use the organization, since that's where my work is. And a very quick story: we're studying eight children's hospitals in the state of Ohio as they're trying to implement some evidence-based practices around patient safety. And we discovered, in two of these hospitals where the practice they're trying to implement is the use of alcohol-based skin preps, that these two hospitals weren't getting anywhere. And they're both in the same city, and the surgeons practice in both hospitals. Well, we would never know to measure the fact that a number of years ago they had a fire in the operating room because of the alcohol-based skin preps: the prep pooled and ignited. So nobody there wants to go near the stuff. And if you were to use measures of organizational culture or leadership or any other variable I could name, you're not going to find that. We didn't find that until we talked to the people about what's going on here. And so my conclusion from that is, while there are some things that we do know about the effect of context in organizational change, there are a lot of things that we don't know, and there are a lot of special causes sometimes that really mess up our work. But the way to understand, and to help build our models, because I think we're at a stage where we're still building conceptual models as well as testing the pieces, is to do some qualitative work. And we found what we found through interviews. And we also found the strength of the feeling around that particular issue. So in summary: the importance of history, and the importance of doing qualitative work, to be able to identify some very important things that affect the intervention.

>>>Dr. Brian Weiner: So, two time issues that come up, that Jeff mentioned and that I think would relate well to Joe's paper on simulation, are sequencing and duration. It could very well be that multi-level interventions are more effective if sequenced in a particular order, such that some of them come in earlier than others. And we strongly suspect that interventions take time to have an effect, and then their effect decays over time. And those can vary significantly from one intervention to the next, or even across different levels. This is something that we probably need to attend to more in our research, rather than just measuring effect sizes: looking at duration, buildup, or decay effects. But it is possible, I think, with simulation models to play with time. You can expand time, dilate time, accelerate time, and so forth. So it is possible at least to test out what your various assumptions are. If you sequenced interventions in a particular way, do they have to start one after the other? Could you stack them? Could they be staggered, and how much should the staggering be? And if you make certain assumptions about how long it takes for the effect to build and how long it's going to decay, how do those affect the various outcomes that you can get? And I think simulation offers you a nice vehicle for testing that out; it's essentially doing sensitivity kinds of analyses to see what those assumptions might mean in the real world. I don't know, Joe, if you wanted to add anything to that.

>>>Dr. Joseph Morrissey: I agree wholeheartedly. I just had a couple of other comments reflecting back on a couple of the points that were raised.
In many respects I think we need to build some new research infrastructure that doesn't exist now, and that's why we're struggling with this, I think. And so there are several things that need to be thought of. With regard to databases, for example: recently the Agency for Healthcare Research and Quality, which is tasked with one of the major responsibilities for comparative effectiveness research, recognized early on that databases to do comparative effectiveness research didn't really exist. So they had to provide stimulus to the community to begin developing databases, then make them available for the research community so that certain issues of comparative effectiveness could be addressed. So I think a program, and trying to think about ways in which grant mechanisms and other things could be targeted to database development and collaborations, would be a strategy. With regard to the concreteness, I agree. Again, I think the strategy there, though, might be more of a workshop orientation, and I think workshops could be created as another way of bringing people with the technical expertise as well as the kind of clinical and policy issues together to be working on things. It would be a different format than what we're using today, which is more general and orientational. But I think that would clearly be another kind of follow-up step to really demonstrate how to do this. And perhaps in our smaller workgroup sessions we could be focusing on actually considering particular variables for particular issues. The other strategy, which is also an infrastructure one, is, you know, the role that research centers have served. And I think one of the best examples of that, going back a number of years, is what the Cancer Institute did with research centers to really elevate community oncology research and to move it forward. Research centers are sort of falling out of favor. When Harold Varmus was director of NIH he was very much looking at the R01, not single investigative, but independent investigator model, and really wanting to invest money in research centers. I recently heard, just the other day, Tom Insel, who is the director of NIMH, talking about those continuing kinds of discussions. In the current economy and so forth, NIH is really looking in other directions, away from research centers. But it's very hard, in my mind and in my way of thinking, to get multi-level research going when you're funding R01 applications. I just don't think the mechanism aligns with the challenge, given the fact that you've got to be pulling multiple people together and multiple databases and so forth. I don't think you can do that with R01-type research. So I think there are some of these bigger infrastructure kinds of things; if we're really going to make some progress here, I think we've got to have an effort that's commensurate with the challenge.

>>>Ellen Gritz: Good observation. We've only got about ten minutes left, and I want to make sure that all
of you get to speak. So go ahead.

>>>MS: (inaud.) representing the combined forces of tables 10 and 1. We had an interesting discussion about adaptive design and (inaud.) comments on that. I think we saw adaptive design as having perhaps a spectrum, all the way from a small RCT in which perhaps you were proposing an (inaud.) approach to intervention (inaud.), all the way to saying that the adaptive design is the intervention: that you are allowed to, and expected to, use a lot of resources and creativity to accomplish your end, and the intervention itself is that. That might be more applicable in an implementation study where you're deep into the politics of the organization and you're using various means and methods to try to get your intervention implemented within that setting. So I'm curious about definitions and understanding of (inaud.).

>>>Dr. Paul Cleary: One of the things that we talked about at our table in terms of adaptive design: I think one model of it is the IHI collaborative approach. It's very, very adaptive. Each site decides on its (inaud.), and the rapid-cycle improvement approach to change is inherently adaptive. You're supposed to just regularly, rapidly change your focus. The challenge in those, and Don's a big advocate of rapid-cycle improvement, and he often says our approach, the traditional approach to research, is antithetical to quality improvement. It's a little bit of a stark contrast. But the challenge is, if it's not standardized or generalizable, you're never quite sure what worked or didn't work, or how to generalize it. It may be good for an internal process, but in terms of external validity it's always a challenge to know what it is that you would export, under what circumstances, to what places.

>>>Ellen Gritz: Okay.

>>>MS: (inaud.) I represent the smart table 14. (inaud.) the generalized validity (inaud.) intervention and how
does one go about trying to build that up. And so that led us to the discussion about (inaud.) did happen to have (inaud.) looking at MLIs (inaud.) criteria where we would know how to assess (inaud.) should be funded and which were not. And as we talked about that even more, we wondered: did any of the papers really talk about how we got to be here? Why did we think (inaud.)? What's the science behind it? What were the assumptions that were made in order to bring it here today, (inaud.)?

>>>Dr. Paul Cleary: One theme that I think emerged today is that we have emphasized internal validity at the expense of external validity. I think speaker after speaker said we need to, or context, however you want to frame it, we need to know about external validity. I think several speakers also opined on how we got here. And one answer was that we haven't been as successful as we would have liked with the single-level interventions, and there's a premise that maybe by combining either the additive effects or the synergistic effects we'll have the kind of impact that we would like to have. So if a patient intervention is contingent on changing community norms and changes in the care setting, then the individual intervention is going to have zero net sustainable impact if we don't operate in the context and the different levels. Now, that's untested, but a lot of smart people think that's a potential way to get more traction on some of the issues we want to get more traction on.
>>>Ellen Gritz: Or we realize that what we implemented did not happen because of the implication of what we thought the provider was going to do. Something of that sort.

>>>Russ Glasgow: Russ Glasgow from the very engaged table 8, with a few of my own editorial comments put in. Just two quick points. One about simulation modeling: just that we are very bullish on that, probably largely because only one or two of us at the table really understand it; the rest just know enough to be dangerous. But we're quite excited about it for the following reasons. One, we think it has the potential to increase transparency of reporting and to make explicit the assumptions that are going in there, which we think would be good for all types of research. Particularly around, this is probably an incorrect term, but what I would call the impact of starting assumptions, or sensitivity analysis; in economics-speak, I'm not quite sure what the modeling term for that is. Secondly, the potential for replication, which I think is really underemphasized in all types of science, but which you show through cross-validation or other types of replication. And finally, something that we didn't hear today, it might be in the paper, but that we felt was really important about it, is the potential to identify, particularly when you look for interactions, (inaud.) effects. And to predict some things in advance which I think we could have well known about, even simple things that are conceptualized at one or two levels, like the ACCORD trial results or things like that, if we had really modeled those beforehand based on what we know.

Second point: we were also on the time bandwagon. We feel time has really been the Rodney Dangerfield of research; it deserves much more credit, and we don't know quite why it doesn't get it. Back to (inaud.), before some of the people in this room were born, about the importance of time as one of the key concepts of generalizability. But in particular, I think the notion that today it's more and more possible, and tomorrow and next year it will be even more possible. Mark made the great point about history, but we had a lot of discussion about all of these different levels, and really a need to do that over time. Particularly thinking about organizations, because the organization on day one, when you start your study, is not the same organization that you have six months or a year later. And that's true for most of the other levels there. But we usually don't do that; we just default to whatever we called time one, despite the history and everything else. And the final point is just that I think it's particularly imperative, and probably one of the biggest challenges of time, and the most neglected, is, of course, sustainability: the long-term impacts. And we could have a whole meeting, and probably will, on that. But one thing that I think is really possible now that wasn't in the past, with all the work being done trying to harmonize community-level health indicators, is that those could be used to at least partially characterize contexts. That is being done now; we could maybe have another discussion about that sometime. But these geospatial
databases and things that we now have can summarize some of the important social, physical, and environmental characteristics.

>>>Ellen Gritz: Thank you. I'm going to let our last person at the microphone speak, and then any summary comments from the table.

>>>FS: Thanks for the opportunity. (inaud.) University of North Carolina, and we've been talking a lot about context today, and we're hitting on timing and history. But one thing I'm not hearing, something we've been working on, is readiness. And we've just completed a model of partnership readiness for which we defined three dimensions. One being (inaud.) that involves history, time, interest, benefits, whether the benefits are mutual, and value. Effective (inaud.) to do the work: resources, people, finances, (inaud.). And the third dimension is operations: communication systems, (inaud.) and leadership. And so I think in the application of multi-level interventions, the readiness of the organization, the provider practices, and of course the patient, we have (inaud.), and there are some readiness problems with organizations. But I think we really need to look at readiness as a multi-level (inaud.), and I really haven't heard that today. I think it's bigger than time and history.

>>>Ellen Gritz: Very interesting. Okay, final comments from the panel.

>>>Dr. Martin Charns: I just can't forget the comment that Ernie made this morning citing Machiavelli.
And as I think about readiness.

>>>Ellen Gritz: Jeff, did you want to say something?

>>>Dr. Jeff Alexander: Just to reflect some of the comments that were made at table 20, since they didn't get a full voice for their concerns. One of the things we talked about quite extensively, and I became a clear convert as the discussion proceeded, is how we engage in cumulative knowledge building in this area. These studies are likely to be very expensive, and we can't afford, I think, in an era of resource scarcity, to engage in what amounts to a series of idiosyncratic, unconnected studies. It seems to me we can approach this from several vantage points. One is figuring out how to take more of an incremental approach to multi-level interventions and evaluations, without trying to embrace all of the moving parts simultaneously. But the second is more of a top-down strategy, and I may be speaking about NCI here: to figure out how we can categorize, accumulate, and disseminate the information that we're learning, so that the learning is accelerated more than it has been in the past. So I see a two-pronged approach that would be great if we could implement.

>>>Dr. Brian Weiner: In addition to the infrastructure that needs to be built, I think there's a lot of pre-work that needs to be done. I think before the NCI goes off and spends several million dollars and people's careers doing multi-level interventions, some of which has been talked about already, there's a lot of work that needs to be done on measurement. There's no point in doing multi-level interventions if we can't measure, with some degree of confidence, the things that we think are important. We need a better understanding of the causal mechanisms of the processes we're interested in studying and intervening upon. So I think the need for more multi-level and cross-level kinds of research is there. And I think there's a training aspect to it as well, around simulation modeling and these other techniques; I think we just need to build up our skill set. And there's a fair amount of conceptual work that I think needs to be done. So I would say we're not ready for prime time, actually, on multi-level modeling. But I think there are some very concrete steps that could be taken to lay the groundwork, so that if and when we're actually ready to make some big bets with the budgets that are available and so forth, we're more likely to succeed. I don't think we're ready yet.