Welcome to Engaging Consumers, Providers and Community in Population Health Programs. This is lecture c. The objectives for this lecture are to evaluate the designs of individual behavior change interventions, the designs of organizational behavior change interventions, and the designs of community-level behavior change interventions. What is the importance of program evaluation? Well, this is a field of study designed to answer whether or not an intervention did what it was intended to do, and whether or not it made a difference. These evaluations can take several forms, including formative assessments, process assessments, summative evaluations, and cost-effectiveness evaluations, and we'll go over each of those in just a few moments. More and more, we see that people engaged in population health are demanding what we call evidence-based programming, and this starts with literature reviews, finding examples from other communities, and so on. We've covered that in other sections, but it's worth mentioning that you review those elements now. So what are some of the key issues when you're evaluating a behavior change model? One, how is the intervention expected to achieve its outcome? Here we can actually look at some of the process issues. Two, who is the target population? Occasionally, we'll build an intervention or program and it helps some people, but not the people we intended to benefit. That may be okay, but it may not. Three, does the evaluation focus on those enrolled in the particular program? Sometimes it's difficult for us to maintain contact with the so-called target population that we're trying to work with. Four, what studies will be used to evaluate impacts? In other words, are we going to survey the individuals? Are we going to allow them to self-report their outcomes? Are there other forms of objective measurement that we might be able to employ? And that brings us to number five: what are the measures of a program's success?
They should actually be laid out explicitly prior to the program's implementation. Otherwise, we tend to see a few things happen during a program, such as mission creep or expectation expansion. So having your measures of program success laid out prospectively is important. Also, what are the available data for answering questions about behavior change? And how can we use health information technology to inform those programs? So let's talk for a few minutes about the importance of program evaluation and what the evaluation should focus on. The smallest frame is the individual, but evaluations can also focus on organizational members, target populations, communities, or a variety of other groups defined by predetermined specific factors. And we have what we call program-based evaluations, which cover only the people actually being touched by the program, versus evaluations of a target population. This is a rather subtle difference, and one explanation is that we'll often try to enroll a large number of people from a population, but we might not get all of them engaged in the way that we hoped. In fact, one of our early indicators will be program enrollment, prior to our population-based evaluation. Oftentimes we have something akin to herd immunity in public health, where we believe that if we can positively affect even a small section of the community or population, the population as a whole will benefit. So these are important distinctions, and things you want to think about. Again, strong programs tend to draw on the theories of behavior change. We've talked in other lectures about the health belief model, social learning theory, the theory of reasoned action, and so on. We haven't covered the extended parallel process model, but this is something used to deal with fear in the community, particularly in communities that have bad relationships with their local governments, police forces, or healthcare delivery organizations.
Those can be important models to incorporate. The other skills you need at some point are those that give you the fundamental ability to map out the program's conceptual framework or logic model. Those will be discussed in other parts of the program as well as throughout the series. The most effective behavioral interventions often work at multiple levels. The social-ecological model shows that individuals are far more likely to work toward changing behavior if their social and physical environments support behavior change. Again, many programs by their very nature are designed not to do just one thing, but to do many things in order to improve the quality of the experience in that community. We've looked at the example of bike lanes. Well, while creating bike lanes is one thing, you'll actually want to do several other activities alongside that. You would want to run educational campaigns for both cyclists and motorists about how bike lanes operate. You may wish to hold community events that promote bicycle safety and awareness, to ensure that people are wearing helmets, reflective garb, and so on. So you may have several parts to your program, because you don't just want people biking, you want them biking in a safe and productive manner. You may also have to increase the number of amenities like bike racks at the end of the route. You may have to engage in structural interventions: if you're putting in bike lanes, you may have to change some of your traffic laws regarding how bike lanes and cars interact. Then there are environmental interventions, where you're hoping to change the surroundings. For example, when Vancouver, British Columbia put in their bike lanes, they actually did a lot of other things to support those lanes, including physical barriers to keep the traffic and the bikes apart. They planted nice greenery in the medians separating the cars from the bikes. And that was viewed positively by both constituencies, the bikers and the motorists.
So that's the environmental feature. Suppose a company wants to promote physical activity, and parking around the office is expensive. So they decide to give ten free parking passes a month to people who bike to work, understanding that some days it might rain, or that employees may sometimes have other obligations that require them to drive. That would be an organizational intervention. Interpersonal interventions attempt to reach clusters of people; in our example, these are people who are willing to bike. Interpersonal interventions generally involve health education. So inter means across people; intra refers to the individual level. You can remember this distinction by thinking about interstate commerce, moving goods and services from one state to another, and intrastate commerce, moving things around within one state. Now, a few other things about program evaluation for behavioral change. The gold standard for any research study is experimental design. The way most people hear about this is in clinical drug trials, where one group of participants will be given the drug, a second group will be given a placebo, and a third group may get no intervention. So you'll have multiple, what they call, arms of the study. Better still, there's an experimental design called a randomized controlled trial, where people are randomly assigned to the arms and will not know which treatment they're getting, and the people who receive no treatment won't even be aware that there is a treatment option available. A randomized controlled trial is ideal because it reduces many forms of bias. Non-experimental designs are very common. These are where you can control only some of the factors in the environment, so you can't have a true blinded study. Back to the bike lane example: you may be able to put bike lanes in a particular corridor of your city, and you'll see that people who live and work along that corridor are more likely to engage. But that won't be randomized, right?
Because those people have to be living or working along those bike lanes to have access to them. So that's really not experimental. But if you see it as successful in that part of your city, you may expand it to other areas. And there are quasi-experimental designs. By the way, if you're planning on doing a program intervention and you don't have a lot of experience with research design or statistics, you should certainly engage your local university or health department and find people with the expertise to help you with the experimental and research designs as well as with program development. Quasi-experimental designs are somewhat stronger than non-experimental designs; here you can't fully randomize people, but you try to approximate randomization, for example by finding a comparable group to serve as a control. Lastly, you have observational studies. This is where you'll do nothing in terms of trying to get people to behave a particular way; rather, you'll just change some feature in the environment and watch what happens. With these studies, the analytic techniques become more difficult, and you will likely need help, depending on your level of expertise. So why do these program evaluations matter? Well, in the social sciences, and in all science really, we're very interested in cause and effect. We want to make sure that making a change in the environment or in a program, the cause, has the intended effect, in this case, better population health. So we really want to make sure that people are getting better because of what we've done, and not because of some other factor in the environment. We often rely on self-reported measures of behavior rather than biological outcomes because, one, self-reported data are often less expensive, less time consuming, and less intrusive to collect; two, factors other than the intervention may influence the biological outcome; and three, the effect of the intervention may not manifest itself within the evaluation period. In the bike example, we might ask, quote, how many days did you ride your bike to work each week on average, unquote, and then measure the difference before and after the lanes were introduced.
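As a minimal sketch of that before-and-after comparison, consider the arithmetic below. The survey responses are entirely hypothetical, invented just to illustrate how a pre/post difference in self-reported behavior would be computed; a real evaluation would also need a significance test and, ideally, a comparison group.

```python
# Hypothetical answers to: "How many days did you ride your bike to work
# each week, on average?" from the same respondents before and after the
# bike lanes were introduced. These numbers are illustrative only.
before = [0, 1, 0, 2, 0, 1, 0, 0, 3, 1]   # days per week, pre-intervention
after  = [1, 2, 0, 3, 1, 2, 0, 1, 4, 2]   # days per week, post-intervention

# Simple pre/post comparison of mean self-reported ridership.
mean_before = sum(before) / len(before)
mean_after = sum(after) / len(after)

print(f"Mean rides/week before: {mean_before:.2f}")
print(f"Mean rides/week after:  {mean_after:.2f}")
print(f"Mean change: {mean_after - mean_before:+.2f} days/week")
```

Remember that a change in this kind of self-reported measure, by itself, cannot establish cause and effect; it only tells you whether the behavior moved in the expected direction over the evaluation period.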
Observational studies, by the way, tend to reduce the inherent bias of self-reporting. Oftentimes, if people know that they're part of a study, they will report that they're doing better, whether or not they really are. This is called the Hawthorne effect, named after the Hawthorne Works factory, where researchers found that workers' performance improved simply because they knew they were being observed, rather than because of the changes being tested. So we've talked a little bit about research design; now we're going to talk a little bit about program evaluation. Formative evaluation is very common. Formative evaluation involves primary data collection and/or secondary analysis about the target population to gather the following information: one, the epidemiology of the disease or health condition; two, the persons most affected; three, the drivers of unhealthful behaviors; four, the barriers to change; five, the persons considered to be most credible as sources of information on the topic; and six, the channels through which the population receives information. So this is where you try to do primary data collection about the target population, and you get the sort of feedback that's qualitative in nature. Quote, I feel better about this because, unquote. So when you wrote an essay in school, the teacher might have provided you with formative feedback in the form of comments or instruction on how to talk about the concepts more effectively. That would have been formative evaluation. Formative research can include both qualitative and quantitative research. Qualitative research is particularly useful in understanding the mindset of the target population, including the values, attitudes, beliefs, aspirations, and fears that strongly affect behavior. Quantitative research is useful especially where quantifying baseline levels is important. Process evaluation involves looking at how well a program is being implemented, and this is really very much in the minutiae of your early stages of success.
This corresponds to phase six or seven of the PRECEDE-PROCEED model, and it's where you're asking questions about your implementation of the program. Dose delivered: an assessment of the volume of activity or intended units of the program delivered by the implementers. Reach: the extent to which the intervention reaches the target population. Level of exposure: the extent to which the target audience has been exposed to the intervention, e.g., the number of channels on which they saw or heard a message on the intended topic, often labeled dose, and their reaction, positive or negative, to the intervention. Recruitment: the procedures used to approach and attract participants at individual or organizational levels, and the sociodemographic characteristics of participants in program activities. Context: aspects of the environment that may influence the implementation or study of the intervention, such as spillover from the treatment to the control area. As mentioned earlier, with spillover effects you may see that other communities benefit as much as or more than the target community from an implementation, and that's fine on some level. Another form of evaluation is the summative evaluation. Summative evaluations measure outcomes such as increases in knowledge, risk perception, or self-efficacy; changes in attitudes and stated intentions; and, most importantly, changes in behavior. Back to the example of when you were in school and wrote a paper: the summative evaluation was the grade the teacher gave you. In other words, how are you doing overall? Measuring ultimate changes in health status is rarely possible for interventions that last only a few months or years. Instead, evaluators measure changes in psychocognitive factors, such as knowledge and attitudes, as initial effects, and self-reported or observed behavior as an intermediate effect. Summative questions put to participants tend to take the form of, quote, very well, unquote; quote, well, unquote; quote, neutral, unquote; quote, poor, unquote; or quote, very poor, unquote.
Also, along with summative evaluation is the outcome evaluation. An outcome evaluation can be summative in nature, and this is where we're actually trying to see whether we got the long-term outcome we were hoping for; that will be necessary if we're ever going to make the link between cause and effect. Another very important type of evaluation, and again, one where we do recommend you consult an expert if your program is large enough to support that, is cost-effectiveness evaluation. Say we spend $5 million putting a bike lane in. Well, if only one person ever uses the bike lane, the cost-effectiveness is one biker per $5 million. What you would hope to see is that you've got 5 million unique bike rides per year, so you could say that the cost-effectiveness was $1 per bike ride along this corridor. And you might say, compared to the price of running a bus, putting in light rail, or having people drive, that $1 per ride is actually a cost savings. So this requires some fairly careful accounting of both the program's costs and the costs of the alternatives. In other words, what was the relative cost? This is where the bang-for-the-buck story emerges. So, here is an example of a real-world program implemented several years ago, and it had to do with hand hygiene. Good hand washing behavior is one of the most effective means of reducing the spread of illnesses, whether they be the common cold during winter or more severe illnesses, like infections in hospitals. And we know from many studies that if we can get people to wash their hands after they use the restroom, prior to eating, and so on, we'll have much better overall health status in our community. In this study, eventually published in the American Journal of Public Health, the researchers changed the way the paper towel dispensers were set up in their organization. In particular, we're talking here about automated dispensers, and their hypothesis was that if the paper towel were dispensed prior to the person's arrival.
In other words, if you didn't have to wave at the machine in order to get a towel, people might wash their hands more regularly, because they would see the wash station as ready for use. They called that a visual cue. And because they wanted to make sure they were not making a mistake, and that people weren't just ripping off that first towel and throwing it away, they also tracked soap usage. They were asking, what's the chain of causality? People will see the towel and know that the wash station is ready; they'll wash their hands using soap and then pull the towel, and not merely be the same people who would have washed their hands anyway, throwing away the first towel. So, how did it work? Well, they used eight bathrooms at the Bryan School of Business in North Carolina, four male and four female, with a total of 16 towel dispensers; in other words, two towel dispensers per room. They ran the study in alternating weeks with the towel presented or not presented. They also tracked traffic passively. They weren't taking pictures of people in the bathroom or anything; they simply counted how often the door opened and shut to see how many people went in and out, in order to compute rates of towel dispensing, and they recorded the towel and soap usage at the end of each week. The first bar graph shows how many ounces of towel were used per hand washing opportunity per 1,000 people for each of the 10 weeks in the study. Weeks with the towel displayed are represented by the solid bars, and they happen to be even-numbered weeks; the weeks when no towel was displayed, in other words, when you had to wave your hands in front of the machine to get a towel, are represented by the striped bars. The graph shows very clearly that far fewer towels were used when the towel was not displayed. The next bar graph shows the mean weekly ratio of towel consumption to restroom users for the No Towel and Towel conditions. Again, consumption in the Towel condition was significantly greater, almost 14% greater.
Meaning that 14% more of the people who walked into that bathroom were washing their hands than before. As mentioned earlier, the researchers were concerned that people were simply tearing off that first towel and throwing it away, because they feared it might be dirty or for some other reason. So they also checked soap usage. This bar graph shows the mean weekly ratio of soap consumption to restroom users for the No Towel and Towel conditions. Use of soap also went up, so the researchers were fairly confident that the visual cue stimulated more people to wash their hands. So, this is a very simple example of a program done by a small number of people over a ten-week period. If it had been a more elaborate study, it would have been interesting to see whether the students at the university had a reduced incidence of colds during that period, whether it could have been done for the whole university, and so on. That would have been a lagging indicator of overall health status. This concludes lecture c of Engaging Consumers, Providers and Community in Population Health Programs. In this lecture, you learned about the importance of program evaluation. In particular, we evaluated the designs of individual behavior change interventions, organizational behavior change interventions, and community-level behavior change interventions.