As public health professionals, when we're asked, "Why did you make this change or begin this new program?", our answer is too often, "It seemed like a good idea at the time," what we call the ISLAGIATT principle. This isn't inherently a bad thing. As professionals, we should always try new ideas that we think might improve public health. But given the finite resources at our disposal, we have a responsibility to investigate whether or not new ideas work. So when you're asked in the future, "Why did you make this change?", I want you to say, "Because it works." But how will you know?

I'm going to introduce you to evaluation: what it is and why we do it. Evaluation describes a toolbox of approaches that, if employed correctly, enable us to improve health and public health services, whether a service is new or existing. More specifically, in the definition used by the United Nations Development Program, evaluation is a rigorous and structured assessment of a completed or ongoing activity, intervention, program, or policy that determines the extent to which it's achieving its objectives and contributing to decision-making. In this way, evaluation shares characteristics with what might more loosely be called informal assessment. But evaluation is systematic and intended to inform implementation or improve service effectiveness. Conversely, informal assessment is more often opportunistic, both in what it examines and in whether or not the information is used.

Evaluation can be applied to any intervention, whether an individual project or a broader program, but I'll use the term intervention throughout this course. So you might be thinking: why is evaluation so important for public health? Lots of other health professionals and social scientists run interventions. Well, that's true, but we are stewards of the bigger picture. It's our role to ask the difficult questions, such as: does this work?
It's possible that scarce resources are being allocated to work that's ineffective, or even worse, actually harmful. More often than not, it's our responsibility to design and undertake evaluation. It's a core public health skill, and that's why this course is critical for you as a student of public health.

Evaluation comes in a variety of forms, but all of these types of evaluation seek to answer some common questions. For interventions that you're planning to implement: will it work? For interventions you're already rolling out: is it working? For interventions that you might have piloted but are looking to scale up: did it work?

In the previous e-tivity, you looked at two examples of what would commonly be referred to as evaluation. The handwashing promotion in Peru was indeed an evaluation. This evaluation sought to understand whether or not child health, as an outcome, had improved following the promotion work: it hadn't. Yet the example of the infant simulator program in Western Australia wasn't actually an evaluation; it was a trial, a type of research. So what was the difference, and why does it matter? Well, evaluation differs from research in a number of ways. While either approach may seek to answer the question "did it work?", evaluation is much more: number one, contextual. We're only really interested in whether or not it worked in this setting; whether it works elsewhere is beyond our scope. Number two, collaborative. We want to know what worked well and what didn't work well. Whether this information is objective or subjective is actually less important, as we really value stakeholder engagement, both to collect information and to deliver improvement in the future.

So what do we need to know to answer the question, did it work? Well, we have to understand what the aim of the intervention was and what its objectives were. If these weren't clear at the beginning of the intervention, then evaluation becomes more difficult.
With the Peruvian mass media campaign, the aim was to improve child health, so in this case the aim is relatively clear. In the evaluation, this was measured by a number of indicators, which included the prevalence of diarrhea among children in the 48 hours and in the week prior to measurement. Now, this is the first time we've used the word indicator. Indicators are vital in evaluation. We can define indicators as markers of accomplishment that demonstrate progress. There are three criteria that the Centers for Disease Control and Prevention, the CDC, apply to identify valid indicators: valid indicators are specific, observable, and measurable. We'll have time to look at this in more detail a little later on, so don't worry too much about it now.

But in summary: evaluation allows us to improve our interventions or refocus resources to higher-value opportunities. Evaluation is a core public health skill and responsibility, both in terms of advocacy and leadership. While evaluation can sometimes answer similar questions to research, evaluation is more contextual and more collaborative. But evaluation comes in many different forms, and that's next.