• There are organizational policies and procedures in place that fit Michael Quinn Patton’s description of a fully integrated and highly valued internal evaluation system
• Evaluation is institutionalized. Every program, department, and project has a logic model, collects its own data, and uses it.
• The logic models are made with SMART goals in mind
• Programs collect quantitative and qualitative data
• Qualitative data collection, like interviews and focus groups, is culturally competent
• Programs use formative and summative assessments to guide decision-making
• Formative assessments are woven into the program’s design – for example, an advocacy program in which youth testify to local congressmen will need to teach youth public-speaking skills, so why not design simple rubrics to assess their progress in public speaking over time? Youth are hungry for feedback and love seeing their own improvement over time.
• Staff members take the initiative to monitor data without nudging from evaluators
• Evaluation results are used to improve programs
• Staff members demonstrate critical thinking skills when thinking and talking about their program
• Staff members demonstrate depth of knowledge when thinking and talking about their program
• Capacity building takes place on a day-to-day basis
• Evaluation is conducted because everyone wants to improve programming, not because a report is due to stakeholders.
• Curiosity about outcomes. Staff members start with some research questions, you find answers in the data together, and those answers spark even more questions.
• Staff members comment, “These numbers are great, but how can we measure progress?” Staff understand that measuring outputs is not the same as measuring outcomes.
• Staff want feedback, especially individual feedback, so they can improve their work.
• Staff