Jim Ritchey at Delta Initiative posted a nice item about early alerts in Higher Ed. I think he’s spot on as to the questions and issues he raises. To that end, I wanted to break down his post and respond from Blue Canary’s perspective. Blue Canary has recently launched Lighthouse — our early alert retention solution. Read on for some insights:
“One solution component where it is difficult to evaluate benefits is early alert systems”
- Difficult, but not impossible. The benefits are hard to measure, but if you are adopting a solution, your vendor ought to be willing to work with you to show evidence of success.
“Organizations are learning that how the score is calculated and what data the calculation uses is critical to effectively matching the potential students with the appropriate proactive support”
- Yes, the score does relate to the support, but the mapping isn’t prescriptive. I’m wary of alerts that claim to be causal (an alert gets raised because of a symptom, not because the underlying issue has been identified). Please feel free to prove me wrong.
“Are the scores based on data entered by a user or is the solution providing an analysis of data to calculate the score?”
- GREAT question to ask. Blue Canary believes that the institution should mine existing data for alerts. If you have to ask the student, you’ll have to ask them multiple times and at that point, you’re essentially offloading all of the heavy lifting to the student.
“If the system is analyzing data, is it the transaction data for the student, predictive, or a combination of both approaches”
- Ideally, the system should analyze the transactions (current signals) and then make a prediction (based on past training data). We believe that’s the best way to harness the value that exists in the student data.
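To make the "transactions plus prediction" idea concrete, here is a minimal sketch of that pattern: learn weights from historical student signals, then score a current student. This is a toy logistic regression in plain Python, not Blue Canary's actual model; the feature names and data are hypothetical.

```python
import math

def sigmoid(z):
    """Squash a linear score into a 0-1 risk probability."""
    z = max(-60.0, min(60.0, z))  # clamp to avoid math.exp overflow
    return 1.0 / (1.0 + math.exp(-z))

def train(rows, labels, lr=0.1, epochs=2000):
    """Fit logistic-regression weights on historical (past) student signals."""
    w = [0.0] * len(rows[0])
    b = 0.0
    for _ in range(epochs):
        for x, y in zip(rows, labels):
            p = sigmoid(sum(wi * xi for wi, xi in zip(w, x)) + b)
            err = p - y  # gradient of log-loss w.r.t. the linear score
            w = [wi - lr * err * xi for wi, xi in zip(w, x)]
            b -= lr * err
    return w, b

def risk_score(w, b, x):
    """Score a current student's transactional signals against the model."""
    return sigmoid(sum(wi * xi for wi, xi in zip(w, x)) + b)

# Hypothetical historical features: [missed_logins_per_week, late_assignments]
# label 1 = student left, 0 = student was retained.
history = [[0, 0], [1, 0], [4, 3], [5, 4], [0, 1], [6, 5]]
left    = [0, 0, 1, 1, 0, 1]

w, b = train(history, left)
# Score a current student showing 5 missed logins and 3 late assignments.
print(risk_score(w, b, [5, 3]) > 0.5)  # prints True: flag for an early alert
```

The point is the split: the model's weights come from past outcomes, while the inputs at scoring time are the student's current transactional signals — no survey answers required.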
“If the data is entered by a user and matched to student transaction data, how is the user identifying the value of the data? Is it based on a predictive model or is simply the identification of organizational beliefs about the students needing assistance?”
- Another good comment. There has to be some tie between the data and the desired outcome (the dependent variable in the model). Usually, model training will shed light on that relationship. Anecdotes and simple rules are a good place to start, but hopefully that’s not what the school is paying a vendor for. I dug into this a little more in my blog post about ‘Secret Sauce’.
“Does the system provide analysis of the success of the proactive support programs? Once a potential student is identified and the organization takes a proactive action, how is the success of the action evaluated”
- Measuring the efficacy of the model is one thing. Measuring the efficacy of specific interventions is a totally different exercise. For this, the institution needs a whole lifecycle of data and a good CRM system:
1. Start with a history of student transactional data
2. Use it to build a predictive model that spawns early alerts
3. Act on the early alerts and track the interventions (and results) in a CRM system
4. Build up a history of intervention data and analyze that for effective patterns
- Numbers 1 through 3 are challenging enough. I believe number 4 is a bit of a Holy Grail at this stage of the game (in Higher Ed), but someone will get there soon enough.
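Step 4 above — mining the intervention history for effective patterns — can start as something as simple as grouping alert outcomes by intervention type. Here is a minimal sketch over a hypothetical CRM export; the record fields and intervention names are made up, and note that a naive rate comparison like this ignores selection bias (advisors may reach out to the riskiest students first), so it is a starting point, not a verdict.

```python
from collections import defaultdict

# Hypothetical CRM export: one record per early alert, noting whether an
# intervention was logged and whether the student was ultimately retained.
alerts = [
    {"student": "s1", "intervention": "advisor_call", "retained": True},
    {"student": "s2", "intervention": "advisor_call", "retained": True},
    {"student": "s3", "intervention": None,           "retained": False},
    {"student": "s4", "intervention": "email_nudge",  "retained": False},
    {"student": "s5", "intervention": "advisor_call", "retained": False},
    {"student": "s6", "intervention": None,           "retained": False},
    {"student": "s7", "intervention": "email_nudge",  "retained": True},
]

def retention_by_intervention(records):
    """Group alert outcomes by intervention type and compute retention rates."""
    tally = defaultdict(lambda: [0, 0])  # intervention -> [retained, total]
    for r in records:
        key = r["intervention"] or "no_intervention"
        tally[key][1] += 1
        if r["retained"]:
            tally[key][0] += 1
    return {k: retained / total for k, (retained, total) in tally.items()}

rates = retention_by_intervention(alerts)
print(rates["advisor_call"])  # 2 of 3 alerted-and-called students retained
```

Once an institution has enough of this intervention history, the same kind of modeling used for the alerts themselves can be turned on the interventions — which is exactly the Holy Grail described above.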
Kudos to Jim Ritchey for a great post. Thanks for helping to continue the conversation around analytics and early alerts in Higher Ed.