Evidence


I have been watching the dialogue about the efficacy of the Course Signals results with interest. I give a tremendous amount of credit to the Course Signals team, as I think they have been a positive catalyst for activity in higher ed analytics over the past seven or so years. I also think it's healthy to have discussions about the validity and efficacy of results. Done constructively, those discussions will only further the cross-institutional learning that's happening in our space.

The reason I started Blue Canary is that I wasn't seeing enough practical implementations of analytics that produced reasonably sound evidence of positive student outcomes. Hence, this discussion about Course Signals is salient. Like the e-Literate team, I have pointed to the Purdue project as one of the first and most sound examples of proven, positive outcomes.

To that end, I'd like to use this blog post as a forum to illustrate another analytics project that has strong evidence of positively impacting student success. The project was a retention analytics initiative that our team developed when I was the Director of Academic Analytics at the University of Phoenix. I presented a summary of this project at the Educause Learning Initiative conference back in February 2013 (slides can be found here). Here's an overview of the project:

Framing the Model

The predictive model addressed the question "Will a student attend class next week?" At Phoenix, classes are 5, 6, or 9 weeks long, so weekly attendance matters. We assumed that missed attendance was a proxy for attrition: if a student didn't attend class in a given week, that was a leading indicator of dropping out. Incidentally, this assumption held true for online classes at Phoenix but not for ground-based classes, which is a good example of how assumptions need to be validated not just from institution to institution but also across different segments within an institution.
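To make the framing concrete, here is a minimal sketch of what a weekly-attendance classifier along these lines might look like. The feature names and the file `student_weeks.csv` are illustrative assumptions; the actual Phoenix model's inputs and algorithm were more involved and are not described here.

```python
# Minimal sketch of a "will the student attend next week?" classifier.
# Feature names and the data file are hypothetical, not the actual Phoenix inputs.
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

# One row per (student, week); the label says whether the student attended the following week.
df = pd.read_csv("student_weeks.csv")
features = ["attended_this_week", "posts_this_week", "assignments_submitted", "weeks_enrolled"]
X, y = df[features], df["attended_next_week"]

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=0)
model = LogisticRegression(max_iter=1000).fit(X_train, y_train)

# AUC measures how well the model ranks students by their likelihood of attending.
print("AUC:", roc_auc_score(y_test, model.predict_proba(X_test)[:, 1]))
```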

The Intervention

Unlike Course Signals, the Phoenix Persistence model delivered its intervention through student success counselors rather than directly to students or faculty members. Phoenix already had a staff of Academic Counselors whose job it was to support and assist students. By giving the counselors timely information (which students might not attend next week) and supporting data (the indicators that led the model to believe a student wouldn't attend), the team hoped the counselors would become more effective at their job of retaining students.
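To illustrate the kind of counselor-facing output described above, the sketch below (continuing the hypothetical classifier from the previous section) lists the students the model thinks are least likely to attend next week, along with the features pulling each score down. The threshold, column names, and report format are all assumptions, not the actual Phoenix tooling.

```python
import numpy as np
import pandas as pd

def counselor_report(model, X_week, student_ids, top_n=3, risk_threshold=0.5):
    """List students predicted unlikely to attend next week, with their strongest indicators.
    Purely illustrative; the real workflow surfaced this information through counselor tooling."""
    p_attend = model.predict_proba(X_week)[:, 1]
    # Per-student contribution of each feature to the logit (coefficient * feature value).
    contributions = X_week.values * model.coef_[0]
    rows = []
    for i in np.where(p_attend < risk_threshold)[0]:
        # The most negative contributions are the indicators dragging attendance probability down.
        worst = np.argsort(contributions[i])[:top_n]
        rows.append({
            "student_id": student_ids.iloc[i],
            "p_attend_next_week": round(float(p_attend[i]), 2),
            "indicators": [X_week.columns[j] for j in worst],
        })
    cols = ["student_id", "p_attend_next_week", "indicators"]
    return pd.DataFrame(rows, columns=cols).sort_values("p_attend_next_week")
```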

The Methodology

The team ran a controlled experiment to determine whether using the model would improve retention. We chose a group of 30 Academic Counselors and split them into two groups. The split was done manually to keep the groups balanced, with similar tenure in each so that performance differences wouldn't skew the comparison. A random split would have been ideal, but practical issues forced us to split the teams by hand. Next, we brought all 30 counselors together and told them they were part of a pilot program: some would see new data about students and some would not, but all of them would be expected to do the job they had always done, which is to help students. We included every counselor in the briefing to minimize the Hawthorne effect on the pilot. Both teams were then tracked over a 5-month period.
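The actual split was made by hand, but for anyone who wants to reproduce the idea, here is one simple way to balance a covariate such as tenure across two groups. The DataFrame columns and the use of tenure in months are assumptions for the sake of the example.

```python
import pandas as pd

def balanced_split(counselors: pd.DataFrame, by: str = "tenure_months", seed: int = 0):
    """Alternate tenure-ranked rows into two groups so the covariate is similar in each.
    The Phoenix split was done manually; this just approximates the same balancing goal."""
    shuffled = counselors.sample(frac=1, random_state=seed)   # break ties randomly
    ranked = shuffled.sort_values(by).reset_index(drop=True)
    pilot, control = ranked.iloc[::2], ranked.iloc[1::2]      # every other counselor by rank
    return pilot, control

# Example with 30 hypothetical counselors:
# pilot, control = balanced_split(
#     pd.DataFrame({"counselor_id": range(30), "tenure_months": tenures}))
```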

The Results

The metric tracked was a Phoenix-specific retention measure that had been in use long before this project. After about two months of using the model, the pilot counselors saw their collective retention measure begin to separate positively from that of the control counselors. At the end of the 5-month period, the pilot counselors' retention measure was 60 basis points higher than the control group's.

Phoenix Persistence Results
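For readers less used to basis points: one basis point is one hundredth of a percentage point, so 60 basis points is 0.6 percentage points. A tiny sketch with made-up retention numbers (the actual Phoenix values are not published here):

```python
# Basis-point arithmetic with hypothetical retention rates.
pilot_retention = 0.856    # made-up pilot-group retention rate
control_retention = 0.850  # made-up control-group retention rate

gap_bps = (pilot_retention - control_retention) * 10_000   # 1 bp = 0.01 percentage point
print(f"Gap: {gap_bps:.0f} basis points ({gap_bps / 100:.1f} percentage points)")
# -> Gap: 60 basis points (0.6 percentage points)
```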

I've gotten reactions from folks who say, "Only 0.6 percentage points? That's not much of a boost." Context is important here. The Phoenix counselors were already focused on retaining students; that was their job. The model showed a 60-basis-point improvement over that baseline. A good analogy is a new high-tech golf club that straightens and lengthens your shots. Give the club to a professional and you might see only a slight improvement (say, 60 basis points), since the pro doesn't hit many bad shots. Give it to an amateur, though, and you would see a much larger relative improvement, since there are more errant shots to straighten out.

The takeaway from this example is that Course Signals is not the only effective analytics project out there. From my work at Phoenix, my collaboration with the PAR Framework, and the work the Blue Canary team has done, I know there are many institutions that strongly believe their data contain valuable insights into their students. It's just a matter of effectively converting that data into useful information and then crafting a positive intervention for the student. I hope the discussion and scrutiny of this kind of work continue so that the collective knowledge in our space can mature.
