
How to Measure Whether Your Extended Learning Program Is Actually Working

Most extended learning programs can tell you how many students are enrolled. Many can tell you the average attendance. Very few can tell you whether the program is actually improving student outcomes. This is a problem. Without measurement, you cannot improve. Without evidence, you cannot justify continued investment. And without data, every conversation about your program is based on anecdotes and hope.

Effective extended learning program evaluation measures outcomes at three levels: participation (are students attending?), quality (is the program well-implemented?), and impact (are student outcomes improving?). Most programs only measure participation. The strongest programs use a combination of attendance data, implementation observations, student growth metrics (academic and non-academic), and comparison to non-participants. A practical evaluation does not require a research team. It requires consistent data collection on a few key indicators, reviewed regularly and used to make program adjustments.

The three levels of measurement

Level 1: Participation

This is the foundation. If students are not attending, nothing else matters.

What to measure:

  • Attendance rate (students present ÷ students enrolled) and its week-over-week trend
  • Dosage per student (total program hours each student actually receives)

Participation data is easy to collect and should be reviewed weekly. A declining attendance trend is an early warning that requires immediate investigation.
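The weekly review can be as simple as a short script. This is a minimal sketch, assuming attendance is exported as weekly (enrolled, attended) counts; the data, field layout, and the three-week warning threshold are illustrative, not a prescribed standard.

```python
# Weekly attendance review: compute the rate each week and flag a
# sustained decline. Data below is illustrative sample data.

weekly = [
    ("Week 1", 120, 102),  # (week, students enrolled, students attended)
    ("Week 2", 120, 98),
    ("Week 3", 118, 91),
    ("Week 4", 118, 84),
]

rates = []
for week, enrolled, attended in weekly:
    rate = attended / enrolled
    rates.append(rate)
    print(f"{week}: {rate:.0%} attendance")

# Early-warning check: three consecutive week-over-week drops.
drops = [rates[i] < rates[i - 1] for i in range(1, len(rates))]
if len(drops) >= 3 and all(drops[-3:]):
    print("Warning: attendance has declined three weeks in a row -- investigate now.")
```

A spreadsheet can do the same arithmetic; the point is that the trend, not the single-week number, is what triggers action.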

Level 2: Quality

A program can have high attendance and low quality. Measuring quality means assessing whether the program is being delivered as intended and whether the experience is engaging.

What to measure:

  • Student satisfaction (short, frequent surveys)
  • Implementation quality (structured observation of engagement, activity quality, staff-student interaction, and time on task)

Quality measurement requires periodic observation, ideally once per month. Create a simple observation rubric: engagement level, activity quality, staff-student interaction, and time on task.

Level 3: Impact

This is where most programs struggle. Measuring impact means determining whether students who participate in the program show better outcomes than they would have without it.

What to measure:

  • Academic growth on existing district assessments (benchmark scores, grades)
  • Attendance and behavioral outcomes (school-day attendance, discipline incidents)

Impact measurement is more complex because it requires a comparison. The simplest approach: compare outcomes for program participants to outcomes for similar students who did not participate. Match on prior achievement and grade level.
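The matching step described above can be sketched in a few lines. This is a simplified illustration, assuming each student record carries a grade level and a prior benchmark score; the records, field names, and the 5-point match window are assumptions for the example.

```python
# Match each participant to an unused non-participant in the same grade
# with the closest prior score, within a +/- 5 point window.
# All student records below are illustrative.

participants = [
    {"id": "P1", "grade": 4, "prior_score": 210},
    {"id": "P2", "grade": 5, "prior_score": 198},
]
non_participants = [
    {"id": "N1", "grade": 4, "prior_score": 212},
    {"id": "N2", "grade": 4, "prior_score": 240},
    {"id": "N3", "grade": 5, "prior_score": 197},
]

def find_match(student, pool, window=5):
    """Closest-score non-participant in the same grade, within the window."""
    candidates = [
        c for c in pool
        if c["grade"] == student["grade"]
        and abs(c["prior_score"] - student["prior_score"]) <= window
    ]
    if not candidates:
        return None
    return min(candidates, key=lambda c: abs(c["prior_score"] - student["prior_score"]))

pool = list(non_participants)
matches = {}
for p in participants:
    m = find_match(p, pool)
    if m:
        matches[p["id"]] = m["id"]
        pool.remove(m)  # each comparison student is used at most once

print(matches)  # {'P1': 'N1', 'P2': 'N3'}
```

Matching on prior achievement and grade level does not make the comparison airtight, but it removes the most obvious source of bias: comparing program participants to students who started in a different place.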

Building a practical evaluation

Start with what you have

You do not need new assessments. Use existing district data: benchmark assessments, attendance records, discipline logs, and report card grades. Pulling this data for program participants and a comparison group gives you impact estimates without administering a single additional test.

Collect satisfaction data monthly

A three-question student survey takes two minutes and provides real-time feedback:

  1. Did you enjoy today's activities? (Yes / Somewhat / No)
  2. Did you learn something new today? (Yes / Somewhat / No)
  3. What would make this program better? (Open response)

Review responses monthly. Act on patterns. When students consistently say they want more hands-on activities and less lecture, adjust.
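Tallying the two closed-ended questions each month takes a few lines. A minimal sketch, assuming responses arrive as one record per student; the response data and field names are illustrative.

```python
# Monthly tally for the two closed-ended survey questions above.
from collections import Counter

responses = [  # illustrative sample responses
    {"enjoy": "Yes", "learned": "Somewhat"},
    {"enjoy": "Somewhat", "learned": "Yes"},
    {"enjoy": "No", "learned": "Somewhat"},
    {"enjoy": "Yes", "learned": "Yes"},
]

for question in ("enjoy", "learned"):
    counts = Counter(r[question] for r in responses)
    total = len(responses)
    summary = ", ".join(
        f"{k}: {counts.get(k, 0) / total:.0%}" for k in ("Yes", "Somewhat", "No")
    )
    print(f"{question}: {summary}")
```

Track these percentages month over month; a drop in "Yes" responses is the survey equivalent of a declining attendance trend.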

Observe quality quarterly

Four times per year, a program administrator should observe each program site using a standardized rubric. Rate: student engagement, activity quality, staff preparation, and time management. Share results with staff and discuss improvement strategies.
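Compiling the quarterly ratings per site can look like this. A sketch only, assuming a 1-4 scale on the four dimensions named above; the site names and scores are illustrative.

```python
# Summarize quarterly observation ratings per site: overall average
# and weakest dimension. Ratings below are illustrative.

observations = {
    "Site A": {"engagement": 3, "activity_quality": 2,
               "staff_preparation": 4, "time_management": 3},
    "Site B": {"engagement": 4, "activity_quality": 4,
               "staff_preparation": 3, "time_management": 4},
}

for site, ratings in observations.items():
    overall = sum(ratings.values()) / len(ratings)
    weakest = min(ratings, key=ratings.get)  # lowest-rated dimension
    print(f"{site}: overall {overall:.1f}/4, weakest area: {weakest}")
```

The weakest-dimension flag is what feeds professional development: it tells you what to coach, not just how a site scored.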

Report impact annually

At the end of each program year, compile the impact analysis. Compare participant outcomes to matched non-participants on academic, attendance, and behavioral metrics. Present the findings to district leadership and funders.

Use data to improve, not just report

The point of evaluation is improvement, not just accountability. Monthly attendance and satisfaction data should trigger immediate program adjustments. Quarterly observation data should inform professional development. Annual impact data should shape next year's program design.

What to measure

  • Attendance rate and trend (weekly review)
  • Dosage per student (total hours, target a minimum threshold)
  • Student satisfaction (monthly survey)
  • Implementation quality (quarterly observation)
  • Academic and behavioral impact (annual comparison analysis)

Common mistakes

  • Measuring only attendance. Attendance is necessary but not sufficient. Students can attend a low-quality program every day and gain nothing.
  • Not having a comparison group. Without comparison, you cannot distinguish program impact from normal student growth.
  • Surveying students once a year. Annual surveys capture a snapshot. Monthly surveys capture trends and enable real-time improvement.
  • Reporting data without using it. If evaluation data does not change program practice, it is a compliance exercise, not an improvement tool.

If you only do one thing this week: Identify the two assessments your district already administers to all students (likely a reading and math benchmark). Pull last year's results for your program participants and a matched group of non-participants. Compare the growth. That analysis, which takes about an hour, is the beginning of your impact story.
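The hour-long analysis above reduces to a growth comparison. A minimal sketch, assuming each student has a fall and spring benchmark score; all scores are illustrative.

```python
# Compare average benchmark growth: participants vs. matched comparison.
# Each pair is (fall score, spring score); data is illustrative.

participants = [(205, 221), (198, 210), (212, 224)]
comparison   = [(206, 214), (199, 206), (211, 220)]

def avg_growth(pairs):
    return sum(spring - fall for fall, spring in pairs) / len(pairs)

p_growth = avg_growth(participants)
c_growth = avg_growth(comparison)
print(f"Participants grew {p_growth:.1f} points; comparison grew {c_growth:.1f}.")
print(f"Difference: {p_growth - c_growth:+.1f} points.")
```

That difference, participant growth minus comparison growth, is the single number that starts the impact conversation with leadership and funders.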
