my name is dr. laura lee johnson, i'm the associate director of the division of biometrics 3 at the united states food and drug administration, and also one of the codirectors for ippcr. so i want to welcome you for taking this course. i'm going to give several biostatistics clinical design lectures. and i also want to warn you there are over 6,000 people in the course. for those in front of me you may say wow, that does not look like
how many people are sitting in this room. but because we have 6,000 people i don't take questions during the lecture, unless i just said something really stupid, in which case wave me down; i probably accidentally misspoke. the reason i don't take questions during the lecture is you all have to stop and get to the mics, and the people trying to watch this get a lot of dead air. sometimes they can't hear. unless you really think you're going to die and not be able to understand something, put all the questions online. this is what i promise you: i will log in online and we will have a really robust discussion on those discussion boards. that's where i want the questions to come.
not all the different lecturers will say the same thing, but it is something to think about as you're sitting through the lectures, that a lot of people are watching us in the middle of nowhere. this is our disclaimer, courtesy of my working at the fda. they hire me, they pay me, but they only want to take credit for anything that they decide they like and disown anything that they don't like. for those of you that are in front of me, i'm going to post all these questions on the discussion board online.
how many of you all have taken a class or have a degree in biostat, epi, research design? anybody in here? a few people. how many of you have used logistic regression in the last ten years? a few more hands. how many of you have been involved in designing some type of a public health research project outside a classroom environment? okay, some of you. how many of you have actually actively been involved in running a clinical study or an animal study? a laboratory study of some sort? how many of you have done data analysis outside the classroom? some of the hands keep coming. how many of you have actively written part of a clinical research protocol? good. how many of you have actually had to read a clinical research protocol? and how many of you have read a clinical journal article? that would be the vast majority of folks. we're going to post these
questions online tonight, tomorrow morning, and i want everybody who can to actually give an answer to that, so that way we'll also get to know each other, who is involved in the class. because part of what's really important here is there are a lot of different people who take this course, from all around the world. for some it's the first introduction; others have a pretty advanced understanding of clinical research. so part of what i aim for in my lectures is to give some of the tricks and the tips and the concepts that i have learned over the years from the various investigators i've worked with. i'm going to keep hitting these mics, i'm sorry. we have a new setup this year.
i talk a lot with my hands, even though i try to stay steady for the camera. my general objectives here: i want you to be better consumers of the medical scientific literature. because while this course is focused on principles and practice of clinical research, these principles and practices are true for non clinical research too. there is a huge push right now, also from the nih and many other groups. you'll see many journals that have written editorials saying we're no longer going to accept really bad research just because it's preclinical. and many of these study designs have laboratory components. my biggest randomization problem happened with somebody who did not randomize on her 96 well plate. it caused a huge problem for her study and her interpretation. so i want you to be better consumers. i want you to be better users. and i want to enhance the conversation inside research teams.
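the 96 well plate story above can be sketched in a few lines. this is a minimal illustration, assuming a hypothetical layout and sample names (none of these come from the lecture): the point is to randomize which sample lands in which well, so treatment group is not confounded with plate position (edge effects, row or column gradients).

```python
import random

# hypothetical sketch: randomize sample-to-well assignment on a 96 well plate
# so that treatment groups are not confounded with plate position.
rows = "ABCDEFGH"          # 8 rows
cols = range(1, 13)        # 12 columns
wells = [f"{r}{c}" for r in rows for c in cols]  # 96 wells, A1..H12

# illustrative sample names, 48 treated and 48 control
samples = [f"treated_{i}" for i in range(48)] + \
          [f"control_{i}" for i in range(48)]

rng = random.Random(2024)  # fixed seed so the layout is reproducible
shuffled = samples[:]
rng.shuffle(shuffled)

layout = dict(zip(wells, shuffled))  # well -> randomly assigned sample
```

the alternative, loading all treated samples into the left half of the plate and all controls into the right, is exactly the mistake described above.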
and it might be with your study statisticians and epidemiologists, but it will also be across and between a lot of different folks. realistically, what we really want is better science. with the information you learn, you're not going to be able to do your own statistical analyses, though some of you already can, based on who was raising your hands. if you want to learn how to do stats, or you want to learn how to be a data manager, you need to take direct course work in how to do that. but this should improve your abilities to critically evaluate grant applications, protocols, and the literature. no one is an expert in every area. we do all try to combine our
expertise. in this world everything is team science. because it's really easy to write that your study is going to use a randomized double blind controlled parallel arm design and an intent to treat analysis. easy to say subjects and participants will be consented. what you already heard from the first two lectures, and you'll learn throughout the entire course, is it's really not easy to do it. it's not easy to implement and maintain the integrity of your
randomization. pamela and other folks have written in the chapter about a lot of the threats to integrity that show up: very well meaning people trying to do good studies, and how they were undermined. it's hard to maintain blinding
and masking. one of the tricks one of my investigators taught me: she actually made badges for her study staff that said blinded, with a little person with a little mask, almost like a raccoon looking thing, and then unblinded.
and all of her staff wore these badges. because she was doing a study where it was a physical intervention. people knew that they were doing yoga, or they weren't doing yoga. but she didn't want them to talk
about it to the people who were actually taking them through all of their study measurements. that was an innovative way to try to protect the blinding. it helped her participants remember who they couldn't talk to, but then who they needed to complain to.
multiple study arms: how do you make sure your study arms aren't bleeding into each other? data collection: how do you actually make sure that you standardize your collection process? we'll talk about that throughout these next several lectures. also, how do you transfer data to regulatory and other groups? it might be that your studies don't fall under the fda, but they might. there are a lot of regulatory organizations around the world, a lot of different rules that you have to follow. but many times different people want your data, and even if none of the regulatory groups do, this is a time and world of data sharing, and how do you adequately share that data and make sure it's useful to other people? there are a lot of data standards you'll hear about later in the winter and those will be useful for you, too. but that's the long view. tonight i'll talk a bit about different study designs, epi, public health research. we'll cover most of the
epidemiology monday. i'll talk about masking, blinding, different interventions and comparison groups. if you want to know what chapter this is covered under, it's chapters 19 and 29. now, you'll notice in my outline there is something after the conclusion. so we have about 20ish slides, not quite, on confounding and effect modification. i don't know that we'll cover it, but i put it in this slide deck. we'll cover it between now and december. keep ahold of them; we'll probably cover this this week or next week. all right. cervical cancer. when i was at the university of washington, many years ago, a doctor came up to me. she was actually from peru. and at that time the government had given money and said, what
was one of the national problems they had in healthcare? it was cervical cancer, one of their number one killers of women of active reproductive age. they said, oh, you know how to look for cervical cancer: you do pap smears. we can give money; we're going to do pap smears. the decision was to do pap smears on every woman in peru every 2 to 3 years. peru is very mountainous. there are plenty of large cities. they said, we know how to work
our public health infrastructure for this. but how do we get to all of the indigenous villages, to all these far away places? they developed a plan. while this was happening in peru, similar issues were coming up in india and several other countries around the globe. they said, we will have teams and they will go to all these remote places. and the goal was basically they were going to go in and, in several days, they would screen everybody, and if they saw
dysplasia they were going to do treatment on site. for some of these places, it was like you were hiking in; you can't drive a little mobile unit someplace to do it. so they had a plan for physical access, but in many of these areas, the men said, my wife does not exist below the waist. nobody is looking at her. my daughters, you are not looking at them. so they had problems if they had any males that were supposed to be performing the pap smears. they said, fine, we'll get women to do them. but you still had to overcome the cultural part: even if you had a female who was going to be performing the procedure, were they going to be allowed to perform the procedure? again, a huge cultural element had to be dealt with in some of these groups. so they said, fine. we think we know how to do this. we're going to be able to go to every woman every two to three years.
no big deal. except if she's menstruating; you're not supposed to do a pap smear when someone is menstruating. cervical cancer is slow. well, the problem was they found out the next time they get to this remote location, the woman who was menstruating last time is menstruating again. so now i have an even longer period that i'm not screening this person. so the doctor asked us, she said, can i clean the cervix with vinegar and then do the pap smear? well, here is the problem. the pap smear is what we call a diagnostic biomarker. what she needed was something that has a high negative predictive value. what does that mean? if i test negative, there is a really, really good chance i don't have cervical cancer.
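negative predictive value can be made concrete with a few lines of arithmetic. this is a minimal sketch with made-up sensitivity, specificity, and prevalence numbers (not real pap smear performance figures), just to show how npv falls out of bayes' rule:

```python
# hypothetical numbers for illustration only, not real pap smear performance
sensitivity = 0.70   # p(test positive | disease)
specificity = 0.95   # p(test negative | no disease)
prevalence = 0.01    # p(disease) in the screened population

# bayes' rule: npv = p(no disease | test negative)
true_neg = specificity * (1 - prevalence)        # truly healthy, test negative
false_neg = (1 - sensitivity) * prevalence       # diseased, but test negative
npv = true_neg / (true_neg + false_neg)

print(f"npv = {npv:.4f}")  # close to 1: a negative result is very reassuring
```

note that npv depends on prevalence as well as on the test itself, which is why a screening program has to think about the population it is screening, not just the assay.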
i need it to be low cost; i'm trying to screen every woman within a very large age range in a country. and i need it to be fast because, again, i'm treating on site and then moving to the next village. i know a lot of information about the pap smear. i also need to do it in places where i may not have refrigeration, electricity, all these issues. but can i transfer all that information about the pap smear if i first clean the cervix with vinegar? so what we realized is we had to run a trial to figure that out. that's not the type of trial you're normally thinking about. but these are the types of issues, and toward the very end of the course we'll talk about dissemination and implementation.
these are the types of issues that come up. and it's also not really worded like a statistical hypothesis test. this was a story; this was my being stopped in the hallway and asked a question. and this, however, is where most research starts. you have to come up with the question, and a lot of times it's a situation to begin with. and there are a lot of other examples like this. i have a general story; i'll also admit every time i walk into this building my nose starts to run, i apologize. cardiovascular disease: we have these strange questions about weight and hypertension, but we have to dig down to what is the real question society needs us to answer.
sometimes when you're thinking about the mode of administration or data collection, in mobile health we say, oh, i can get all sorts of information from their facebook accounts, i can have them use an app and plug in information, i can track their gps. there is a lot of information we can get, but does it actually answer the question we need, with the integrity that we need, in the population that we care about? there are lots of these weird tests we sometimes have to do in order to actually accomplish a larger research question, and to attack a larger public health or medical problem. so what is your question of interest? are you trying to interpret work in some new population? are you trying to make a decision about an individual case? how many people in the room are actively clinically doing work? a handful, okay. so what we find out comparing two groups of people, or multiple groups of people, may be very different. when you need to make decisions
about an individual patient in front of you. for example, your dad talks to his doctor: what drug is he going to use? the doctor talks about the pluses and minuses of these therapies. what, in the end, is the decision, and how does it work for him? or we're looking at changes in a population. in diabetes management, a large portion is trying to shift the curve of a population. sometimes, classically, we look at those differences of groups in a study. but sometimes we're trying to do biomarker development. we have to figure out what type of biomarker. people love biomarkers; they sometimes forget to figure out what the biomarker is for. are you trying to develop a new outcome?
part of my job is as a patient reported outcome liaison; i help people develop endpoints that involve the patient voice. or the level of evidence: what is it that we're trying to establish? are we figuring out what the current level of evidence is? you're going to hear about meta-analyses and other types of secondary data review. is that what we're trying to do instead? regardless of what you're doing, always remember the analysis follows the design. your question will always come first. we may have to edit it, because it may not be directly answerable. but your question comes first. if at the end of the day they're answering something that does not address your question, you need to say, hold up, people designing the study and analyzing the data, that is not what we need to do. because your question is going to drive the hypotheses. we're going to design the experimental design for you in order to make sure we can test the hypothesis; we're going
to do all our sampling, all that data collection, in line with the experimental design. your data comes from the samples, we analyze the data, we draw conclusions, and that generally leads to more questions and we start over. sometimes i look at data analyses and i read papers and i'm like, who cares? you answered a question that was answerable, but it wasn't the question of interest. the reason statisticians have jobs is not just because we like to analyze data, but to say, you have a new, cool question and we don't have a method to answer it. so we need to develop the methodology to answer the pertinent question. the other problem, though: you need to take all your design information to a statistician early and often.
because part of our job is to give guidance about some of the assumptions for the method and to try to help make sure that we're going to do the best job possible, with the fewest subjects possible, to answer a question. because, of course, if you ask a statistician how they view a research study, they say everything impacts the statistical analysis. i will also say that's not just my job security; that's because sometimes at the end of the day investigators bring the information to me and i say, well, we've undermined the integrity of the study by making the following decision, so i can't help you. you collected all that data, and it's basically worthless. you don't want that to happen. it's not good for you, not good for your study team, and not good for the human beings that agreed to participate in your study. none of you want that to happen; you don't want it to happen because you're here late at night listening to me talk. so we're going to go through a little bit of vocabulary, and when i go through vocabulary,
part of this is to get us on similar footing. we will talk about arms; i do not mean my appendage. in clinical research we talk about study arms or samples, and we use these words fairly interchangeably. a lot of times we talk about wanting to demonstrate superiority. john powers talked about this a little bit last night. when we want to demonstrate superiority, we're talking about detecting a difference between groups, or between treatments, or study arms. the idea is that there is a difference in some way, shape or form. sometimes we say we want to demonstrate that the different arms are equally or similarly effective; that is an equivalence trial. sometimes we want to demonstrate
things are non inferior. you have to be careful. sometimes people chain these: i have one non inferiority study; now i have a new compound, and i show it's not inferior to the first one. but you have to make sure it's still not inferior to the original group. right? so, non inferiority. you can also think of this, while it's not exactly the same, kind of like generics. when i think about equivalence, i sometimes think about generics: i cough about the same amount, plus or minus; that's equivalence. non inferiority is it may be a little bit worse, but not enough that it matters. figuring out the margin is a big, big difficult problem. also, i'm a very bad lady: i interchange and use patient versus participant versus subject. truth be told, what you're supposed to do is say a study subject. that helps differentiate: when you're in clinical research, you're a guinea pig. i sign up for clinical trials, and i recommend anybody who works in clinical research sign up for trials.
you should understand what it means to have your data possibly out there and breached. you should understand what a pain in the neck it is to fill out all these forms; understand what the burden is. but you also sometimes see in the literature, sometimes more in the behavioral and social sciences, talk about participants. you want people to feel like they're participating in research, actively engaged. i do a lot of my work with patient medical records, so they are literally patients that we are working with. but because of that, i have a tendency to flip between all three of these words. what i should always be saying is participant or subject. so shame on me; don't make my mistake. now a little bit about study design taxonomy.
we break the world into interventional and observational studies. interventional means i do something to you. observational means i watch the film of your life, or the photograph, as the case may be. we also break the world into longitudinal versus cross sectional. longitudinal is the film: i look at you at baseline, 6, 7, 8 months. cross sectional is i give you all one survey right now and we're done, like a census: we ask something once and walk away. then prospective versus retrospective. prospective: i'm going to follow you into the future and take my data in real time. retrospective means i look back in the past. so i may be looking back at employee health records that were gathered by the department of defense or an army. retrospective is looking back at data that was already collected. it may or may not have been collected the way you wanted, or be the data you wanted, but you're using what's already there. prospective, you're moving forward and collecting data. don't get too hung up on those, but it is a little bit important. if you were prospectively collecting data and storing it, and you go back and analyze your stored data, it's still a prospective study. blinded, masked: this is when the investigator, the people running the study, the study participants, do not know what intervention they're on. sometimes we actually mask them
instead to the hypothesis that we're testing. sometimes we do not blind or mask a study, and it's called an open label study. depending on who all is blinded or not, you have single blind, double blind, unblinded. i used to do some work for the national eye institute. they do not say blinded; they prefer masked. you can understand why. depending where you train, you will see different names for each. randomized or non randomized: paul gives a great lecture on this later. the idea is, how am i allocating subjects? is there a random element to it? or can i figure out who is going to be going into which treatment group?
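that last question, whether you can figure out who is going into which group, can be made concrete. this is a minimal sketch (my own illustration, not from the lecture) comparing a predictable alternating allocation with a randomized permuted-block allocation:

```python
import random

def alternating(n):
    # predictable: anyone watching enrollment can guess the next assignment
    return ["A" if i % 2 == 0 else "B" for i in range(n)]

def permuted_blocks(n, block_size=4, seed=7):
    # randomized within blocks of 4: arms stay balanced, but the next
    # assignment cannot be deduced from the sequence so far
    rng = random.Random(seed)
    allocation = []
    while len(allocation) < n:
        block = ["A", "B"] * (block_size // 2)
        rng.shuffle(block)
        allocation.extend(block)
    return allocation[:n]

print(alternating(8))      # always A B A B A B A B
print(permuted_blocks(8))  # balanced 4 A / 4 B, order depends on the seed
```

with the alternating scheme, a clinician who knows the last assignment knows the next one, which is exactly the kind of allocation you can "figure out" and which undermines randomization.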
but remember that basic first bullet: there are kind of two types of research, observational versus this kind of experimental, interventional research. in observational research, my goal is to observe and collect data on characteristics of interest without influencing the participant, the environment or the disease course. i literally observe. i do not want to intervene in any way; i want to see the natural course. experimental is when the researcher is deliberately influencing the course of events, or at least hoping to, and investigating the effect of the intervention on some carefully selected population of subjects. when we do experimental studies on humans, we call them clinical trials or clinical studies. similarly, though, a lot of this work, all of this, applies to animals. it applies to a lot of different projects. so we're going to cover observational studies in detail
next week. but the general idea here is that you may have case reports, which is literally the doctor writing down a set of information, like, something looks weird, i'm going to write it up in a structured way to share it with other folks. several case reports make a case series. this is fundamental epidemiology 101. also, because a pharmacist noticed something looked odd and started working on a set of case series and case reports, that's how we discovered aids. the cdc now publishes it electronically, but you had the morbidity and mortality weekly report. when i was in school, every friday we went to read this report to see what looked new and weird around the country,
what we should have our eyes open for. those are usually case series. you still see them published today in a lot of journals. cross sectional or prevalence surveys. this is a snapshot picture. this might be the national
health interview survey in the united states. case control studies. we'll talk about this. usually, you get a series of disease cases and try to find some matched controls and figure out what's different between them.
if you have a really rare disease, this is a very useful type of study to do, to try to figure out a list of reasons that you might have the disease. cohort studies, which are longitudinal: a lot of times when we have had major
disasters, we will follow the healthcare workers or the people that are cleaning up the disaster sites long term, to see if they have psychological issues, if they have respiratory related issues, other problems that come up. natural history studies: you may have a group of patients, and you'll follow them, see how they age, how their disease progresses. the nih at the clinical center does quite a few of these. then the ecological studies: this is data on a population rather than the individual level. like i said, we'll talk more about these next week. then we have these kind of, some groups call them quasi-experimental studies; these are single arm non randomized interventional studies. dr. gallin talked about several
of these in his historical lecture. you don't have a control group. they tend to be early in the investigation. sometimes you may have a concurrent control group: i may decide i'm going to bathe one side of the hall in my hospital but not bathe the other side of the hall. you can do some weird, interesting things, but i'm not randomly choosing; i just kind of allocate it. then you sometimes have things called historically controlled studies. in paediatric oncology we used to do these, where we basically only had enough patients to put everybody on the therapy. so we said, well, we'll use old patient information as kind of our control group. so that is the early intervention based research spectrum.
i talk about epi; sometimes it's interventional, sometimes not. but that quasi experimental work, preclinical studies, phase 0, those are early studies. all of this is setting the foundation for trying to do a phase 1 study, or those dose finding studies, many times. in this patient population, what's tolerable? we also look to see early efficacy, or at least some change that says we might have efficacy down the line. in these early and late phase 2 studies, we're looking a lot at safety, like we are in phase 1, but starting to get a better idea of the dose, a better idea of how to deliver a medication or some type of medical product or therapy. we're trying to get an idea of who should be in these studies, and who should not. phase 3, or what we typically call pivotal trials: these are your major large efficacy studies. phase 4, for me in the fda world, is post market. so we've kind of decided there is efficacy, but if i put this
out in the general population, do i still see safety and effectiveness? you also get into these dissemination and implementation studies. great, you think that if you make this change to your hospital process you will improve, let's say, the rate of some type of hospital acquired infection. you've done this at your very rigorous, focused hospital. is that going to work in a middle of nowhere hospital? or a busy public hospital? dissemination and implementation is: can i take all of the information about how to deliver a therapy and an intervention, and actually do it everywhere in the real world? you also, then, see comparative or cost effectiveness studies. there was a large study done many years ago by the national institute of mental health; they
took several different therapies for people who had major depressive disorder and put them head to head. that's a comparative study. but your ideal study, this is a problem that comes up; we have these ideals, right? whenever anybody looks at your study design, they're going to say, i expect you to have a treatment and control arm. what about all those studies without control arms? they expect you to have parallel groups, that you're going to randomize people to two different arms of a study and you're going to watch these folks simultaneously. well, sometimes that's not feasible. they expect you to look for drug a is better than drug b. well, maybe drug a costs $30,000 a year and drug b costs 30 cents. maybe drug b is more accessible, or maybe there are a lot fewer side effects with drug b than a. prospective: they expect you to be following people into the future. but maybe there are only 34 people in the world with your disease.
you may not be able to follow them all prospectively in a randomized parallel arm study. they expect you to be double blinded and masked. well, what if you're doing surgery versus a non surgical intervention? we used to blind those studies, but you may not be able to blind, although sometimes you may say, what am i trying to look at? do i want to control for all the risks of opening somebody up, and the extra effects they may get from opening and closing them, or do i not? what is the exact question you want to answer? if you're looking at a pill versus iv in a paediatric population, you're probably not going to be able to give a fake iv. and they expect randomization. a lot of studies can be randomized if you're inventive and you're working in the right setting, but not all studies can be randomized. if i want to look at long term anti-retroviral therapy in hiv patients, it will be really hard to run a randomized trial. so we have these gold standards, and sometimes we have to explain why
we need to be a little bit [indiscernible]. now, the next two slides are a handful of studies from bmj back in 2013. it was actually easy to lift the information. you'll see that i give a lot of examples from bmj; that's because you can access it publicly, open for anybody in the world. what i show you is something i want you all to be able to access. if there are articles that are not publicly available, we will put them up as part of the course information along with my slides. in bmj, they had four articles in the research section this one week: non invasive versus invasive respiratory support, a systematic review and meta-analysis; a multicenter randomized controlled trial where the researchers were blinded; a population based cohort study; and a large scale survey. a lot of different research, and only one of those projects was a randomized, double blind
controlled trial. there are a lot of different types of research you can do that are meaningful. but as you're doing it, you still have to distinguish the observational studies from the randomized studies. a lot of times we start doing these analyses of observational data thinking that it was a randomized controlled trial. but that tacit assumption of randomness is what makes a lot of other assumptions work in statistics land. so you really have to do a lot of extra work when you're analyzing observational data. the idea is that in a non randomized study you can only show association; you're never going to know all possible confounders. in a randomized study, you can show association and causation. now, in a well done non adaptive randomization, and we'll get to that in a few weeks, the unknown confounders should not create problems. if you're doing an adaptive trial, unknown confounders can cause a lot of problems. in non adaptive studies, non adaptive randomizations, the general idea is the unknown confounders balance out across arms. remember that your questions are going to come first. so as you make all these changes, all these things that you're thinking about with your patients, what's going to work,
are you still answering the fundamental question? so for the remainder of this lecture we're going to focus on intervention studies; on the 26th of october we'll talk about non intervention studies around epidemiology. what are the types of randomized designs? parallel group is a classic; i mentioned that already a few times. we have sequential trials and a list of others; i'll go through all of these. so in a parallel group design, the idea is i'm going to
randomize patients to one of x treatments: one of two, one of four, however many. i'll look for a response. i might measure them at the end of the study and just compare how everybody is at the end of one year. i might look at a change, or a percent change, from baseline: how did they change between baseline and one year? i may look at repeated measures: maybe i actually take their blood pressure every four weeks, and i'm going to look at the change in systolic blood pressure over time, looking at a curve, which is called repeated measures. or i may look at a function of multiple measures: think of body mass index, bmi; it is a function of your height and weight. so there are a lot of variations on parallel group designs.
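the endpoints just mentioned, change and percent change from baseline, can be sketched in a few lines. the systolic blood pressure values here are made up purely for illustration:

```python
# minimal sketch with made-up systolic blood pressure values (mmhg) showing
# two parallel-group endpoints: change and percent change from baseline
baseline = [150.0, 142.0, 160.0]
one_year = [138.0, 140.0, 145.0]

change = [y - b for b, y in zip(baseline, one_year)]
pct_change = [100.0 * (y - b) / b for b, y in zip(baseline, one_year)]

print(change)                              # negative means pressure went down
print([round(p, 1) for p in pct_change])   # same direction, scaled by baseline
```

in a real parallel group trial you would compare these quantities between arms; the point here is only that the same raw measurements can feed several different prespecified endpoints.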
we sometimes, but not always, do dose titration with multiple study arms. this is becoming more popular, especially if it's not a first in human product. the idea is you want to titrate to the maximum tolerated dose within a given subject. or dose escalation studies, with a control arm that you're simultaneously randomizing to. people underestimate the importance of controls. the old way was you only put everybody on the treatment, but especially when you have subjects that a lot of bad things might happen to, you may say, one of my subjects died. if your subjects have a 50% mortality rate, it's kind of hard to tell: was it the treatment or the disease that caused them to die? if you have a control arm, even in those very early studies, you can start to tease out what are the differences in the adverse events, what are the differences in the death rates, et cetera. some of you are making ugly faces. to be honest with you, this is real life in clinical research.
if you do interventional research there is a very good chance you will kill people. not that you mean to. but you may cause harm. if we knew the answer, if we knew that people were or were not going to be harmed, or knew that something worked we didn't
need to do the research. so this is something that you have to kind of in your gut make a decision you're willing to do or not. if you have thoughts about that, steven straws' chapter in the book is a good one to read. dr. straws died several years
ago. he talks about a personal journey he had with one of the studies he did. they found out very late in the process, that they were causing harm. and some of the people that worked on those study decided
they had to leave research. leave medicine completely, because it was something that they couldn't handle. it's not an easy thing but it's something to consider. now, back to the design. i mean the whole goal of this course is try to make it that
you hopefully figure this out before you ever touch a human being, right? you do not want to cause harm. not all dose escalation and dose titration studies are randomized; some of them are, more are not.

sequential trials happen more in engineering. if you're doing device manufacturing, you may do these. you don't necessarily have a fixed sample size or a fixed period that you're running the study. this, of course, makes it scary. irbs go, what? those are your institutional review boards, the groups that approve human subjects research. the idea of the sequential trial is that it ends when one treatment shows clear superiority, or when it's unlikely any important difference is going to be seen. think computers, capacitors, et cetera. very special statistical design methods are needed when you do these trials.

one that you do see commonly in clinical research is group sequential trials. here is what we're going to talk about: type one error can be controlled. you can't do that easily in the straight-up sequential trials. these are very popular because in group sequential trials you analyze your data after a certain proportion of the results, or information, from the trial is available. there is early stopping, depending on how you set this up: if one treatment arm is clearly superior, or if it looks like there is futility, meaning if you got to the end of the trial you're still not going to have significant results, you might as well stop now.
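the reason those special methods are needed can be shown with a small simulation (my own illustration, not from the lecture): if you peek at accumulating data repeatedly and apply the ordinary 5% cutoff at every look, the chance of a false positive climbs well above 5%.

```python
import math
import random

random.seed(1)

def trial_rejects(looks=5, per_look=40):
    """one trial under the null: both arms are n(0, 1), so any 'significant'
    difference is a false positive. test after every batch of patients."""
    a, b = [], []
    for _ in range(looks):
        a += [random.gauss(0, 1) for _ in range(per_look)]
        b += [random.gauss(0, 1) for _ in range(per_look)]
        se = math.sqrt(2 / len(a))          # known variance of 1 per arm
        z = (sum(a) / len(a) - sum(b) / len(b)) / se
        if abs(z) > 1.96:                   # naive, unadjusted 5% cutoff
            return True
    return False

n_sim = 2000
false_positive_rate = sum(trial_rejects() for _ in range(n_sim)) / n_sim
print(false_positive_rate)   # well above the nominal 0.05
```

group sequential methods fix this by spending the 5% alpha across the looks with stricter interim cutoffs, which is exactly the planning work described below.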
you may also stop for adverse events. so all trials should be monitored to see if they need to be stopped; we'll talk about that in the second part of the course. this takes a lot of really careful planning and statistical design work, and it will impact your sample size. so you have to build the fact that you're going to be analyzing your data before the study is done into the planning of the study initially.

but here is an example that we had from a trial from niaid, a very old example. that was one of the first studies done in pregnant women. at the first interim analysis, the data safety monitoring board saw this picture. we're going to come back to this in the survival analysis lecture. this was a randomized trial. they randomized the mothers to take the drug or placebo, and then they looked at the probability of transmission of hiv to the infants. and you can see how the study arms worked out. this is a kaplan-meier curve on the screen, and there is a p value associated with it. but a lot of work went into trying to decide what the interventions should be and what the population should be. this clinical trials group protocol, 076, was looking at safety and efficacy in preventing transmission of hiv from infected women, not necessarily women with advanced disease, to their babies. so now i have a different population to figure out, not necessarily advanced.
i've got to worry about not only mom, i've got to worry about the infant. because maybe they don't get hiv, but some other horrible problem happens to them. you've got to think about the ramifications of giving a drug to somebody, especially if it's a pregnant woman, or if it's a male who might impregnate somebody. that's a little trick they forget to tell you about: everyone thinks about lactation and pregnancy; they forget the guy.

so, preventing hiv transmission: they had to power this study to detect a 33% reduction in the transmission rate. the placebo rate, the natural history rate of transmission i should say, was 30 percent. they wanted to drop it to 20 percent. so this study they planned was going to accrue over 5 years, and they expected 15% drop out. some of the folks would not be able to be followed the entire time, for a variety of reasons. infants die for a lot of reasons. sometimes moms and infants go somewhere else and you can't track them. you have to think about all of this when you're trying to design your trial. also, hiv testing was not all that great, so they had to figure out what was in fact a positive test, in order to decide that they had an event.
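as a back-of-envelope illustration of that kind of planning, here is the standard two-proportion sample size formula applied to a 30% versus 20% transmission rate, then inflated for 15% dropout. the 5% two-sided alpha and 80% power are my assumptions for the sketch, not figures from the 076 protocol.

```python
import math
from statistics import NormalDist

def n_per_arm(p1, p2, alpha=0.05, power=0.80):
    """classic two-proportion sample size (per arm)."""
    z_a = NormalDist().inv_cdf(1 - alpha / 2)
    z_b = NormalDist().inv_cdf(power)
    p_bar = (p1 + p2) / 2
    num = (z_a * math.sqrt(2 * p_bar * (1 - p_bar))
           + z_b * math.sqrt(p1 * (1 - p1) + p2 * (1 - p2))) ** 2
    return math.ceil(num / (p1 - p2) ** 2)

n = n_per_arm(0.30, 0.20)
n_with_dropout = math.ceil(n / 0.85)   # inflate for the expected 15% dropout
print(n, n_with_dropout)
```

note that dropout inflates the number you must enroll, on top of whatever the power calculation alone demands.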
another type of study is a crossover design. in a crossover study, let's say i'm going to have a two period crossover: each patient acts as their own control. the trick here is that you need to eliminate carryover effects. so if i'm going to have you all use an asthma inhaler that changes your lung structure in some way, changes you, i probably am not going to wash that out, at least not for a while. if i teach you meditation, i can't unteach it. some things you cannot do in a crossover trial.

so in the women's alcohol study, we did three 8-week dietary periods. each woman was randomized to the order in which she took the different doses of alcohol: 30 grams of alcohol a day, about 2 drinks; 15 grams a day, about one drink; or 0 grams, where they got an alcohol free beverage. basically, they got orange juice and everclear. the order of the assignment of the three alcohol levels was random; that was the randomized part. each woman got each of these three doses. why? we each have a different set of hormones, different cardiovascular risk factors, and we were trying to look at a lot of cancer risk factors. it's better to do it within a person, against all their own stuff that they bring, apart from the alcohol. we had washout periods, and because it was such a long study we varied the washout period. we had one group of people picking up lunch and dinner packed at usda. every night they had a snack that included this beverage, and they were told, take it at the end of the day, before you go to sleep, do not drive, blah, blah, blah. so you have differing washout periods, and sometimes we had the same washout period. but at least with alcohol we knew how long it took to get out of the system.

and this was double blind. the investigators who were drawing their blood and checking their blood pressure 3 times a week did not know what this person was on. the women did not know what they were on. some of them said they thought they knew; they felt they were getting drunk at night. this was actually in a washington post article. and after the study was over, the pi looked back to see whether one of these women had been taking alcohol or not. it turns out she was on the placebo at that point in time. that's another trick i learned: people who think they're having huge side effects think they know what study arm they're on. a lot of times you don't. the same goes for the clinicians trying to guess what study arm somebody is on. so then you have these factorial
designs. in factorial designs, each level of a factor, or treatment, occurs with every level of every other factor. so this was a study that my nci colleagues worked on, where they randomized people: you either got placebo, selenium alone, celecoxib alone, or the combination. so what you'll notice is this bottom box: this is celecoxib only, this is selenium only, and in the bottom corner they're getting both. how does that work? well, it works if you don't think that selenium and celecoxib interact, particularly with respect to your outcome. when you do the analysis, i compare everyone in the selenium placebo arm to everybody getting real selenium, ignoring what celecoxib they got. and i compare everybody on placebo celecoxib to real celecoxib, ignoring the selenium. the problem is, a lot of times i do these, and then the investigators come back and say, can you tell me if there was an interaction? if you care about that interaction, if you expect it might exist, you need to power a 4-arm study. you can design it to look like this, but when you power the study, you cannot power it assuming that these two interventions are independent of each other. so you've got to make a decision: two 2-arm studies, or one 4-arm study.

the msflash study also used unequal randomization. there are a lot of things i don't like about the msflash study, but here it's not evenly randomized. they said, we're not going to compare yoga to aerobic exercise; they were comparing each to usual activity. they were not going to compare those arms against each other, but they have an unequal randomization between the study groups to achieve the statistical power they needed.

then you run into things like incomplete, partial, or fractional factorial trials; depending on where you were trained, it's labeled one of these three. the nutritional intervention trial is an example. they had 4 different types of micronutrients they were looking at, and they didn't want to look at all possible combinations. this study, in the end, i want to say, had 20-plus thousand people in it. you will see groups that do this: they choose certain combinations to look at, and you have to make sure that you have the ones in there that you need in order to do the analysis you care about. the problem is, in the end people want you to look at certain interactions, certain combinations, that you don't have in there. you do have to think pretty hard in advance about what you want to leave out. i'm going to spend several minutes on adaptive designs.
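before moving on to adaptive designs, here is a toy sketch (invented numbers, not the nci data) of why factorial designs are efficient: each marginal comparison reuses all of the subjects, ignoring the other factor, which is only valid if the factors don't interact.

```python
# rows: (got_selenium, got_celecoxib, outcome), 1 = active, 0 = placebo
data = [
    (0, 0, 10.1), (0, 0, 9.8), (0, 1, 9.0), (0, 1, 8.7),
    (1, 0, 8.9),  (1, 0, 9.2), (1, 1, 7.8), (1, 1, 8.1),
]

def marginal_effect(factor):
    """mean outcome on active minus placebo for one factor
    (0 = selenium, 1 = celecoxib), ignoring the other factor."""
    active = [row[2] for row in data if row[factor] == 1]
    placebo = [row[2] for row in data if row[factor] == 0]
    return sum(active) / len(active) - sum(placebo) / len(placebo)

selenium_effect = marginal_effect(0)    # uses all 8 subjects
celecoxib_effect = marginal_effect(1)   # reuses the same 8 subjects
print(round(selenium_effect, 2), round(celecoxib_effect, 2))
```

both effects come out of the same 8 subjects, which is the appeal; but if the combination cell behaved differently than the two factors would predict, these marginal averages would be misleading, and you would need the 4-arm comparison instead.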
they're gaining a lot of popularity. maybe you have 2 to 8 different arms; sometimes there is dose ranging, sometimes not. people think, and you'll see it in the clinical journals, that if you do adaptive designs you'll have a smaller overall sample size. sometimes you have a larger overall sample size. but at least you're able to do it in one trial. a lot of these have kind of a run-in period, and then you start to analyze data continuously or at fixed points. for any adaptive study, and there are 30-plus different versions, you need to be clear: what is being adapted? the number of people in each study arm? is it something about the randomization, like the characteristics of the people? the interventions themselves? when are you going to adapt it? based on what evidence does this adaptation take place? who decides an adaptation is needed? and how is it implemented?

so this is a slide from one of paul's lectures that he got from paul gallo, who is in the pharma working group. basically, the idea with adaptive designs is this: a clinical study design that uses accumulating data from your trial to decide how to modify aspects of that same trial as it continues. but the trick is, you've got to do this in a way that doesn't undermine the validity and integrity of the trial. now, if you look at my employer's work on this, we'll also say an adaptive design is defined as a study that includes prospectively planned opportunities for modification of one or more aspects of the study design and hypotheses, based on analysis of data, usually interim data, from subjects in the study.

in one of my studies, they wanted to do an adaptive study. we thought it was a good idea. then we found out we basically had to follow patients for 3 years before we had any good data on their outcomes, and they were going to enroll over a 4 year period. so the question was, what types of adaptations made sense? maybe we could look early to decide whether in fact patients should finish the trial, but we had to actually get some of that long term data in order to make that determination. you have to think about what adaptations make sense. it might make sense to adapt the randomization; it might make sense to stop early. so you have adaptive randomizations, and adaptive dose finding, where we may turn different doses on and off based on characteristics we're seeing: drop the loser, pick the winner. you have to be careful. there are really bad examples where they made a decision to drop a study arm, but that arm only had one patient in it, and it was the patient that was the sickest of everybody.
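a prespecified drop-the-loser rule with the kind of guard that bad example was missing might look like this. it's a hypothetical sketch; real adaptive rules are worked out with statisticians and simulation, but it shows the idea of never judging an arm before it has enough patients.

```python
def arms_to_drop(arm_outcomes, min_n=10, keep=2):
    """arm_outcomes maps arm name -> list of responses (higher is better).
    drop the worst-performing arms, but never judge an arm before it has
    min_n patients, and always keep at least `keep` arms in the trial."""
    means = {arm: sum(v) / len(v)
             for arm, v in arm_outcomes.items() if len(v) >= min_n}
    if len(means) <= keep:
        return []                          # too early to drop anything
    ranked = sorted(means, key=means.get)  # worst arm first
    return ranked[:len(means) - keep]

trial = {
    "dose_a": [3, 4, 2, 3, 4, 3, 2, 4, 3, 3],  # n=10, mean 3.1
    "dose_b": [5, 6, 5, 4, 6, 5, 5, 6, 4, 5],  # n=10, mean 5.1
    "dose_c": [4] * 10,                         # n=10, mean 4.0
    "dose_d": [1],                              # n=1: the sickest patient
}
print(arms_to_drop(trial))   # -> ['dose_a']; dose_d is protected by min_n
```

without the `min_n` guard, dose_d, with its single very sick patient, would be the first arm thrown out, which is exactly the mistake described above.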
that's kind of a problem. we also do these adaptive seamless phase 2/3 trials, and biomarker-adaptive trials, where based on the biomarker we may put you in different study arms or may change your study arm. you have sequential methods; realistically, the group sequential parallel design that i talked about at first is basically an adaptive trial. we also do these sample size recalculations; we'll talk about variance and issues like that, and how you can use them, in the sample size lecture.

a lot of folks like adaptive trials. but this is not willy-nilly. the rules have to be prespecified in the protocol. changes are made by design. this is not ad hoc, because you see something and you want to make a little change. and this is not a way to fix a badly designed trial. if your trial is going down the tubes and you want to fix it, that is a salvage operation, not an adaptive design. adaptive designs require a lot of understanding. they are hard to do for investigators, reviewers, dsmb members, journal editors. not all statisticians know how to do all of them. there are a lot of advantages and disadvantages. while you have flexibility, it comes at a price. you need a lot more quantification of statistical risk. you have to understand more information to actually plan these adaptations. a lot of them happen based on statistical rules: you will be following the data, and the rule makes the change; you don't make an in-the-moment decision that there should be a change. that means you have to know, well enough and up front, what might happen.

you also have these covariate imbalances. i mentioned how confounders are generally not a problem in a randomized trial; with an adaptive randomization, this can become part of your problem. it's a lot more work up front, but it can be very useful if you have the information. your big negative for any trial, though, is that whenever you make a decision to continue or to make a change, information about the study may be revealed to investigators, to the public. when you have a data safety monitoring meeting and decide to continue a trial, it can change
stock prices. that is very sad but very true, and a problem that we have today.

so, enriched enrollment designs. these are a variant of crossover and n-of-1 studies. n-of-1 is when i take a single patient and randomly decide when to assign them to each treatment; it's an expanded crossover study, but within a given patient. in enriched enrollment designs, i try to identify potential responders to the treatment, then i enter the responders into a second prospective comparison. people think this is great: i have a better chance of a win. except this is not generalizable to your general patient population. sometimes, though, clinicians tell me, well, it actually is. i try my patients; if they seem to be responding, we stay on the drug, and if they don't, i switch their drug. well, again, you need to think about every clinical trial within the larger situation. how are you going to actually implement this therapy? and how can you work that implementation process into your actual trial structure?

results tend to not be generalizable, and you get this thing called regression to the mean. here is the problem with trying to enrich enrollment. let's say i have a hot flash study and i want to enroll people that are having hot flashes, to see if i can decrease their number of hot flashes. the problem is, when they enroll they will be having ten hot flashes a week. i put them in the main study, randomize them, and in my control group i'm seeing 2 hot flashes a week. that's because a lot of times we enter trials when we are fairly sick. it's a natural ebb and flow for a lot of diseases. then we go back to normal, and now the people that are trying to analyze the data say, we don't have enough events.
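regression to the mean is easy to demonstrate by simulation (made-up hot flash counts, my own illustration): select people on a high screening week, give them no treatment at all, and their counts fall anyway.

```python
import random

random.seed(2)

def weekly_count(person_mean):
    """one week's hot flash count, varying around a person's own true mean."""
    return max(0, round(random.gauss(person_mean, 3)))

# 5,000 hypothetical women with true weekly means between 2 and 8
population = [random.uniform(2, 8) for _ in range(5000)]

# enroll anyone whose single screening week hits 10 or more
screening = [(m, weekly_count(m)) for m in population]
enrolled = [(m, wk0) for m, wk0 in screening if wk0 >= 10]

week0_mean = sum(wk0 for _, wk0 in enrolled) / len(enrolled)
# weeks later, with no intervention whatsoever, measure again
week8_mean = sum(weekly_count(m) for m, _ in enrolled) / len(enrolled)

print(round(week0_mean, 1), round(week8_mean, 1))
```

everyone enrolled on an unusually bad week, so the follow-up average drifts back toward their true means even with nothing done, which is exactly why untreated control arms seem to improve.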
you see this happen in a lot of studies. herpes studies: people enroll while they're having these outbreaks, and then they have none. good for the patients, not good for my study investigator.

group or cluster randomized trials: here we have a unit of randomization that's not the individual. normally when we randomize, we randomize the individuals in the study. but if i want to randomize an entire school and give all the kids in that school an intervention, or randomize a community and vaccinate everybody in that community, or if i'm going to change practice within a clinic and then observe what happens to the individuals, the patients or providers in that clinic, then my unit of randomization is the school or the community or the clinic. it's not the individuals inside of it. now, this can be really important, because sometimes you're trying to make a change where i can't give pamphlets of information in the waiting room to one person versus another; if we're in the same waiting room, they can all pick up pamphlets. sometimes providers say it's hard to change my treatment across different people when this is an open study.

other times, like when we were looking at charges for bed nets, this was a group out of mit. everyone said you need to charge, so that people feel like they're empowered: they spent their money, they'll use the bed nets. and a couple of female economists were sitting around looking at what was going on, and they're like, no. i can't remember their names, but they gave a really great talk on it. so they actually randomized different clinics to different pricing structures. some charged nothing, some charged different prices. and then they looked to see how many bed nets got picked up, or sold, and then how many got used, and then how many got used appropriately. but overall they said, all those other economic analyses, look at all of those, but what we care about is infant malaria cases. so they looked to see what the infant malaria case count was. people who buy a bed net are more likely to use it. but so many more people picked up the free bed nets that that, in fact, was what lowered the infant malaria cases.

what they also did is they went and visited different houses. they noticed these bed nets take up the entire structure, and people were decorating them. they also heard that one of the main reasons people didn't put them together was that they still didn't understand the instructions. to their credit, those investigators took the instructions and tried to put together a bed net themselves; they couldn't figure it out. so they came up with the equivalent, if you know the store ikea, of ikea picture instructions. they tested the instructions with some folks and figured out how to help them understand, and they also made prettier bed nets. this is your central fixture in the household, so make it pretty. so pretty bed nets for free versus standard bed nets for free: it lowers infant malaria cases.
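one design consequence of cluster randomization is worth sketching: because people in the same clinic or school resemble each other, the sample size you need is inflated by the so-called design effect. the cluster size and intraclass correlation below are invented for illustration.

```python
import math

def design_effect(cluster_size, icc):
    """variance inflation from randomizing clusters instead of individuals."""
    return 1 + (cluster_size - 1) * icc

def inflated_n(n_individual, cluster_size, icc):
    """individuals needed under cluster randomization for the same power."""
    # small epsilon guards against float noise before rounding up
    return math.ceil(n_individual * design_effect(cluster_size, icc) - 1e-9)

# hypothetical numbers: 300 people would suffice with individual
# randomization; instead we randomize clinics of 20 with an icc of 0.05
print(round(design_effect(20, 0.05), 2))   # -> 1.95
print(inflated_n(300, 20, 0.05))           # -> 585
```

even a modest within-clinic correlation nearly doubles the required enrollment here, which is why the unit of randomization has to be settled before the sample size calculation, not after.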
you can test pretty much anything. you just have to find the place and make sure it's a worthwhile question.

a few more things about study design. your number one question is, how good is your primary research question? at the end of the day, when your research is done and your data is analyzed, will the answer to the primary research question, regardless of what that answer is, advance scientific knowledge or clinical practice? if it does not advance scientific knowledge or clinical practice, if it does not at least lay the necessary foundation that's missing to advance it, you have failed, and you don't have a good primary research question. i'm not saying you have to answer the question the way you hoped it would be answered. if you get the exact opposite answer, does that still advance science? it's an important thing. if you get a negative result, not the positive result you were hoping for, that should still be informative.

the second most important element is a good primary outcome measure that is clinically meaningful and simple. i now have to write these endpoints for studies into labels that the american public can read, and that the prescribers can understand. it's hard to interpret these endpoints. it may make sense to combine things together, but as you get into the discussions of measures and endpoints in this course, you'll realize it can be very hard to actually express it and explain it to somebody.
i want to give a little bit more information on how you start designing one of these lovely studies we were just talking about. you've got your study aims, your background, your rationale. then you'll talk about the endpoints, the outcome variables, any assessment you're going to take on the people in your clinical trial, the animals in your trial, whatever is in the trial. what are your assessments? be specific. don't tell me you're measuring sleep; tell me how you're measuring it. think about the specific elements: is it sensitive to change? do not take a wooden ruler and try to measure my waistline; that is not a good way to take the measurement. you want something that's reliable. you want something that's valid. you need to know these characteristics of the measures you'll use, because the measures are what you're going to use, in some combination, to get that actual final endpoint.

think about inclusion and exclusion criteria. can you measure these things on the people in your study? we'll talk about this, and wendy weber will talk more when she talks about protocols, too. you have to make sure people are eligible to be in the study. sometimes i see so many exclusion criteria, i'm like, there is nobody left in this world that can be in your study. so you've got to balance. i think about safety: can they not ethically be in your trial? and after that, you might want to let everybody else in.

how do you start designing? think about that accrual plan. think about the preparatory tests. what's the timeline for your overall study and for the individual participants? do they have to come between 9:00 a.m. and 4:00 p.m. on days they're likely to work?
that is going to be a big problem. someone is nodding their head. yes, that's right, it's a problem.

treatment: what are the implications for the participants? i was in a trial once and i couldn't take certain antibiotics, and of course i got sick and needed an antibiotic. so there was a back and forth about what drug i could take. you have to think about the implications for your participants. what is the exact product, the exact dose? what about the quality? how is it administered? whether it is a drug or whether it is yoga, can you actually reproduce that intervention? so if i say yoga, what type of yoga is it? am i doing it once a week, 6 times a week? what's going on? does it interfere with patient management? again, my being in that trial kind of interfered with my doctor trying to give me a medication.

generalizability is often lost in this quest for specificity, so you have to decide and balance. at different points in building your scientific knowledge you may decide to have more or less generalizability. specify the criteria for withdrawal from the study, or what counts as a deviation from the protocol definition. if they do not show up at exactly day 72, does it matter? or is it day 72 plus or minus 7 days? what are the windows? sometimes you have to be pretty precise, sometimes looser. sometimes someone may need to go off the intervention but does not need to leave your study; you can still follow them and take the assessments. you want to have a list of all the concurrent medications, procedures, et cetera that are prohibited or permitted, and how you're going to record them. do remember it's not just medicine: these people are drinking tea, taking supplements, taking over the counter medications. do you want them to still do tai chi or go to spinning class if you're studying yoga? you have to think about all the different things they could be doing that might interfere with what you're studying. but also, sometimes people are going to get headaches, and what are they going to take? don't tell them to abstain and
not do anything. you'll get nobody in your study, or they'll do it anyway and not tell you.

what is the dose? it might be the number of sessions, pills, or treatments. it could be social media: some of these studies are trying to, say, give you information about your cigarette smoking. well, do i give you one text message a day, or send you 6, to try to keep you from starting smoking again? what is the amount? what is the frequency? do i go to the chiropractor once a week, 7 days a week? how frequently do i go, and for how many weeks? and how much time? there was a study that looked at 30, 60, and 90 minute massage, once, twice, or three times a week.
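the way those dosing dimensions multiply is easy to see. the minutes and frequencies below come from the massage example; the weeks dimension is my own addition for illustration.

```python
from itertools import product

minutes = [30, 60, 90]      # session lengths from the massage example
per_week = [1, 2, 3]        # frequencies from the massage example
weeks = [4, 8]              # hypothetical extra dimension, my own addition

doses = list(product(minutes, per_week))
print(len(doses))                                     # -> 9 candidate doses
print(len(list(product(minutes, per_week, weeks))))   # -> 18 once weeks vary
```

every new dimension multiplies the number of candidate "doses", which is why, as the lecture says next, you can only ever test a handful of them.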
we're not going to study 90 minute massage 3 times a week; people will not go to that. there have been these very interesting dosing studies, not just of drugs but of a lot of other interventions. if you're talking about psychological interventions: how much practice, what do people need to do outside of a classroom? who is the leader? who is the surgeon? who is that person, how much contact is there, how well trained are they? there are a lot of combinations, and you can only test some of these possible doses. it's important to look at it.

as i mentioned, at the very end you have to think about your practitioner impact. there can be a lot of false negatives and positives that come up here, because we have some people where, just, good things happen. but sometimes you say, listen, i'm doing a proof of principle study. if the best massage therapist, really well trained, the one who trains others, if that person cannot give a massage that improves low back pain, probably nobody can. if my best surgeon cannot fix this, maybe nobody can be trained to do that. so sometimes we actually go for really well trained folks in a proof of concept model, and then broaden that out. but again, you probably want to choose a technique that can be generalized if you're going to broaden it. other times, such as the perlman studies that were looking at massage for osteoarthritis of the knee, he chose swedish massage. he said that's what everybody is trained in, in most of the countries he was looking at. that's the foundation that they have; we want to build on that.

so for the study analysis, you have this mechanistic proof of concept. we sometimes call these the per protocol analyses that we do
because we only use the patients that behave. the study subjects that do everything we tell them, we'll analyze them. that's along the idea of, in a perfect world, what might we expect? but then there are the intent to treat analyses: that's everybody. you tell a patient to take a drug, they don't necessarily take it. you randomize a patient to a study arm, they don't necessarily comply with your assignment. so, intent to treat versus these different completers analyses. this comes down to, what is your data analysis population? we say i.t.t., or intent to treat: once randomized, always analyzed. in an observational trial we're like, once you're in the trial, we follow you in the trial. you analyze all study participants as if they adhered to your study regimen and completed the study; you analyze them as randomized, regardless of what they actually did. most of your regulatory agencies will say, we expect you to do an intent to treat analysis. in most high quality research, regulated or not, we assume you should be doing intent to treat.

but then people kind of skirt around the edges. they do something called a modified intent to treat, or m-itt, analysis. they may only include patients who start the intervention they're assigned to. well, if i randomize you to psychotherapy versus not, and you decide you don't like what you were randomized to and you don't start, that's still telling me something. so should i really throw those people out of my analyses? it depends on the question you're trying to answer. sometimes people say, if they don't make it to the first or the second post baseline assessment, then i don't count them. well, again, you can't really compare both groups of patients then. your study subjects may no longer be comparable; you have undermined the randomization. so that's a problem with modified intent to treat analyses: you have to be very careful with those. and sometimes we do these completers analyses, where again you're only dealing with the well behaved, and that's a problem.

so, as john described last night, we have this kind of superiority and equivalence.
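before getting to that picture, the analysis populations just described can be sketched with toy records (invented data, my own illustration). notice how the completers analysis can flatter the drug arm.

```python
# each toy record: (arm, started_treatment, completed_study, responded)
subjects = [
    ("drug",    True,  True,  1), ("drug",    True,  False, 0),
    ("drug",    False, False, 0), ("placebo", True,  True,  0),
    ("placebo", True,  True,  1), ("placebo", False, False, 0),
]

def response_rate(rows, arm):
    arm_rows = [r for r in rows if r[0] == arm]
    return sum(r[3] for r in arm_rows) / len(arm_rows)

itt = subjects                               # once randomized, always analyzed
mitt = [r for r in subjects if r[1]]         # only those who started treatment
completers = [r for r in subjects if r[2]]   # only those who finished

for name, pop in [("itt", itt), ("mitt", mitt), ("completers", completers)]:
    print(name, response_rate(pop, "drug"), response_rate(pop, "placebo"))
```

in this invented dataset the intent to treat comparison shows no difference between arms, while restricting to completers makes the drug look perfect; the groups you compare are no longer the groups you randomized.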
i wanted to put this in a picture. equivalence, or no difference: everything is inside this little tiny realm, kind of in the middle, where there is no difference. this is a normal distribution curve, and the line in the middle is the 0, the middle of the bell curve. so if you're in a tight area here, there is no difference between your study arms. if you're in the orange areas on either end, you have superiority. when i test two arms, i come up with a value along this curve, and if i'm far enough out, that's saying that basically these two groups are statistically different, and it's probably not just random error. non-inferiority: the new treatment could be a little inferior, or it could be superior. that gives me a huge area to be in.

so what are the comparison groups i might use to try to get there? well, i've got experimental intervention versus control. in epidemiology, just because it's a control group doesn't mean you have a randomized control arm. sometimes you're comparing the exposed to the unexposed. so at the world trade center site, they took people who were doing cleanup there, who had been exposed to that work, and compared them to a group of people not doing cleanup, who did not have the exposure. that is a study that the centers for disease control and prevention has been running. we may have various levels of exposure that we want to compare. men versus women is a common comparison you see. the old versus the young. maybe bmi over 25 versus 25 and under. you have usual care, standard practice, standard of care; not always that standardized, by the way.
and sometimes we're doing this history. sometimes do i prepost. i take somebody's baseline measures, intervene on them and look after ward. or maybe it's natural history. i look, when i diagnose them and see how they've changed over 12
months or 2 years. when you talk about placebo, standard of care, attention controls, you might have some type of experimental treatment that you might offer support of care, or some other current so support of care is not no care have it's not a true
placebo. if you have a yoga intervention, what should the control group be. maybe exercising or stretching, maybe it's cooking class. you want people in a group, not really interacting with each maybe it's a book club because
you want them to actually go someplace and be someplace for 90 minutes. maybe you do nothing. maybe you say, again, proof of concept trial. does anything change if they go to yoga? sometimes they'll do something
called a weightless control. we say we want you to do nothing new for the next 12 weeks. and then you can do the yoga. but controls cost money. you will see in the sample size lecture, you have much higher sample sizes. when we have control arms.
you want to control everything except the one element of the intervention you want to test. be careful, however, that it's not too small of a difference. i had this one meditation study, and they really wanted to test the mindfulness aspect of meditation
and control for everything else about the meditation. that is a very small difference, and very hard to do. like, why don't you just test the meditation as a whole versus something?
but there are consequences: when you have more control imposed, you can need larger sample sizes, and you may miss a difference. if you have a less sensitive outcome measure you may not pick that difference up. so plan accordingly.
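the trade-off between tight control and sample size can be sketched numerically (an illustrative python sketch, not part of the lecture; the function name and the example proportions are made up, and it uses the usual normal approximation for comparing two proportions with a two-sided test):

```python
from statistics import NormalDist

def n_per_arm(p1, p2, alpha=0.05, power=0.80):
    """Approximate sample size per arm for comparing two proportions
    (normal approximation, two-sided test at level alpha)."""
    z = NormalDist()
    z_a = z.inv_cdf(1 - alpha / 2)   # critical value for the two-sided test
    z_b = z.inv_cdf(power)           # quantile corresponding to the desired power
    var = p1 * (1 - p1) + p2 * (1 - p2)
    return (z_a + z_b) ** 2 * var / (p1 - p2) ** 2

# a 20-point difference needs far fewer people per arm than a 5-point one
print(round(n_per_arm(0.50, 0.30)))   # roughly 90 per arm
print(round(n_per_arm(0.50, 0.45)))   # well over 1,500 per arm
```

the smaller the difference you design your arms to detect, the larger the trial has to be, which is the cost of controlling for everything except one small element.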
what are the differences? again, you've got to think about every difference between your study arms, because that is basically defining your intervention. if somebody is spending one hour a week with your study participants versus 3, that is something that is different.
if you tell people to spend 15 minutes a day at home working on that meditation versus 60 minutes a day at home working on it, again, this is defining your intervention. and so when you put just 2 study arms against each other and then ask me to figure out whether it's the
participant contact time or the time they spend at home, and one arm had one hour of contact and 15 minutes at home while the other had 3 hours of contact and 60 minutes at home, i can't tease it apart, folks. you have to plan all your interventions to figure out if
you can tease that out. so your control group might be placebo or standard treatment; always make sure that these two bullets are well defined. you'll have to record it during your study. what is the most accepted prevention? are you going to do an hiv
prevention intervention and not talk about condoms? what is it that you're going to do? again, usual care is not all that usual across sites. you have to record what it is. what are the accepted means of detection?
what is that diagnostic test? if i do that pap smear for cervical cancer, am i going to get the same results as if i swab with vinegar and then do the test? non-disease populations: sometimes we compare disease to non-disease populations, especially to try to figure out
special differences. all control groups, especially when you're doing interventional studies, need to be ethical. if you're going to assign anybody to a group, anyone meeting the study criteria has to be able to be in any study group.
if that's not the case, you need to make sure your randomization figures that out. i set up the algorithms, folks, knowing only the medicine that you have imparted to me in setting them up. inclusion and exclusion criteria
should keep people from being in study arms that you know are not right for them. if you have questions about that, chuck natanson has a whole lecture on mistakes that have been made in randomization. standard of care: is it really standard?
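the randomization algorithms mentioned above can be sketched in python (an illustrative sketch, not part of the lecture; the function name and parameters are made up). this shows permuted-block randomization, a common scheme that keeps the arms balanced as enrollment proceeds:

```python
import random

def block_randomize(n_blocks, arms=("A", "B"), block_size=4, seed=1):
    """Permuted-block randomization sketch: within each block, every arm
    appears equally often, in random order, so allocation stays balanced
    throughout enrollment rather than only at the end."""
    per_arm = block_size // len(arms)
    rng = random.Random(seed)  # seeded so the schedule is reproducible
    schedule = []
    for _ in range(n_blocks):
        block = list(arms) * per_arm  # equal counts of each arm
        rng.shuffle(block)            # random order within the block
        schedule.extend(block)
    return schedule

print(block_randomize(3))  # 12 assignments, balanced 2:2 within each block of 4
```

a real trial would layer eligibility checks and stratification on top of this, which is exactly where the mistakes that lecture covers tend to happen.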
controls cannot always be masked. somebody asks this every year, and it's now on the slide. try; you may not be able to do it. people do tend to get better after receiving any type of therapy, placebo or not. care matters, do not
underestimate that when you plan your trials. comparing to population incidence rates, as at the beginning of programs, does not take into account a lot of factors. i had someone say, just do pre-post. well, when you look at
the control group alongside the pre-post comparison, yes, the post looked worse than baseline, but everybody else looked a lot worse than that. so if i hadn't had that control arm, you wouldn't have known that they actually slowed the slide. everybody else is going like
this; they're all going downhill. the intervention group went down too, but not by as much. it's important to understand that with no control group, you have a lot of problems. researchers and participants tend to interpret findings in favor of new
treatments: investigator and participant bias. when you don't have randomization, you cannot distinguish effect and time. if i wait ten to 14 days, we will all get over our colds. so what's the right control? there is no right control group.
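that pre-post story can be sketched numerically (an illustrative python sketch with made-up numbers, not the actual study data): when everyone declines over time, a naive pre-post comparison in the treated arm alone looks like harm, while the control arm reveals a benefit.

```python
# hypothetical scores: higher is better, and everyone declines over time,
# but the treated group declines less than the control group
treated_pre, treated_post = 100.0, 95.0   # declined by 5
control_pre, control_post = 100.0, 85.0   # declined by 15

# pre-post alone: the treated arm looks worse after the intervention
naive_change = treated_post - treated_pre

# difference-in-differences: compare each arm's change to the other's
diff_in_diff = (treated_post - treated_pre) - (control_post - control_pre)

print(naive_change)    # -5.0: "the intervention made them worse"
print(diff_in_diff)    # 10.0: relative to control, they were salvaged
```

without the control arm's change, the time trend and the treatment effect are hopelessly mixed together, which is the point of the cold-recovery example.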
you just have to choose one. again, remember, control groups may happen in non-randomized studies; we'll talk more about this next week. but you have to consider all your effects: positive, negative, whether your effects are going to plateau, and when do i measure things?
are you looking at long-term differences? is the effect going to last long enough to see a change, or will it slowly trickle away? is there a delayed response, so that i have to wait 6 months before i see a difference? you've got to consider all this stuff when you're planning a trial.
so time is my favorite confounder in uncontrolled studies. you have differential dropout between study arms. bone density changes throughout the year, so you have to make sure you measure people at the 12-month
mark, not at 9 months. social support matters; empathy in talking to people really matters. exercise, we know, affects stress, cardiovascular risk factors, a lot of issues; you see immune responses with all of these.
so what's your study about? what do i need to control and not control? when in doubt, when you can mask nobody else, mask the people collecting your data to the hypothesis of the study. but you need to specify in your protocols who is masked,
why, how, and to what. if i break my ankle and i'm in a medical product study, the pi may not need to know what study arm i'm in. the safety officer might need to know, and the person doing the surgery might need to know if they
have to adjust my anesthesia. a lot of people don't need to know. everything about blinding is this idea, like playing secret spy: need-to-know information. all studies should be reproducible.
regardless of your study design, regardless of your study, you need a well-defined study population and well-defined inclusion and exclusion criteria. if i had 7 people in the room, you should all decide the same thing about whether subject a should be in or out of the study.
it needs to be well understood. the study -- someone is having fun in back. study conduct needs to be well described: how are you going to do your study? and if somebody is injured and has to leave quickly, and someone else has to walk in, are they going
to walk in and do it the same way? everyone from the nurse coordinators to the statisticians to the data managers. you have to know that the labs will be processed the same way, that you're going to get people from the same place, and that
every little step of that study will follow the same way. can i reproduce the outcome measures? how are you collecting those? how are you doing it? we'll talk about that later. and also the data analyses. there are a lot of potential
biases in clinical trials, and a lot of potential remedies. so think about these: somebody says here is a problem, well, try to find your solution. and you also have to think about the bias of who is in the study anyway. you're trying to generalize to
this bigger box over here. realistically, you have those who are interested in participating, meet the criteria, and consent, and you get to a much smaller group that may not be as representative as you'd like it to be. that's part of why it's really
hard work to be a trialist. all of our studies aren't gold, but we can sure try for it. so, some conclusions, in the next minute and a half. what is the question? you've got the population or disease, p. some people call it pico.
the intervention or variable of interest, the comparison group, the outcome, and the time. and you want to write this sentence. this is an example: in this population, how does this intervention or variable of interest, compared to
this control, influence the outcome during this time period? phrase every single study question like this. this is your study summary. if i ask you for a one-sentence summary of your study, write it like this. if you cannot fill in those
letters, you have a problem. the other part of your study question is: who cares about your study question, other than you? your question is always going to come first, but you've got to consider the questions you want to ask and the
hypotheses you are trying to test. you have to turn your question into something testable. what are the key factors? what are the ethical issues and constraints? what can be said? maybe i need multiple control groups in order
to actually answer your questions of interest. so i'm going to stop there for tonight. we'll put up the other questions. if you have articles you want us to discuss, or issues that you want us to discuss, let me know as
soon as you can on the discussion boards. we'll try to work those in over the later months of the course. we'll talk about effect modification next week. thank you very much. have a lovely week.
take care.
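the pico(t) template from the conclusions can be sketched as a tiny helper (an illustrative python sketch, not part of the lecture; the function name and the example study are made up):

```python
def pico_sentence(population, intervention, comparison, outcome, time):
    """Force a study question into the one-sentence PICO(T) template:
    population, intervention, comparison, outcome, time."""
    return (f"In {population}, how does {intervention} compared to "
            f"{comparison} influence {outcome} during {time}?")

# hypothetical yoga trial, echoing the lecture's example intervention
print(pico_sentence(
    "adults with chronic low back pain",
    "a 12-week yoga program",
    "a wait-list control",
    "pain intensity",
    "12 weeks",
))
```

if any argument is hard to fill in for your own study, that is the "you have a problem" signal the lecture describes.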