Breaking down a post about Early Alerts


Jim Ritchey at Delta Initiative posted a nice item about early alerts in Higher Ed. I think he's spot on as to the questions and issues he raises. To that end, I wanted to break down his post and respond from Blue Canary's perspective. Blue Canary has recently launched Lighthouse, our early alert retention solution. Read on for some insights:


“One solution component where it is difficult to evaluate benefits is early alert systems”

  • Difficult, but not impossible. It's hard to measure the benefits, but if you are adopting a solution, your vendor ought to be willing to work with you to show evidence of success.
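
One simple, concrete form that evidence can take is a comparison of retention rates between alerted students who received outreach and a comparison group. Below is a minimal sketch in Python; the cohort counts are placeholders, and a credible evaluation would use matched or randomized comparison groups:

```python
# A minimal sketch: two-proportion z-test on retention rates.
# The counts below are placeholders, not real results.
from math import sqrt, erf

def two_proportion_ztest(success_a, n_a, success_b, n_b):
    """Two-sided z-test for a difference between two proportions."""
    p_a, p_b = success_a / n_a, success_b / n_b
    p_pool = (success_a + success_b) / (n_a + n_b)
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_a - p_b) / se
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))  # two-sided normal tail
    return p_a - p_b, z, p_value

# Hypothetical cohorts: alerted-and-contacted vs. matched comparison.
lift, z, p = two_proportion_ztest(success_a=412, n_a=500,   # retained / contacted
                                  success_b=376, n_b=500)   # retained / comparison
print(f"retention lift: {lift:.1%}  (z = {z:.2f}, p = {p:.3f})")
```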


“Organizations are learning that how the score is calculated and what data the calculation uses is critical to effectively matching the potential students with the appropriate proactive support”

  • Yes, the score does relate to the support, but it's not a prescriptive task. I'm wary of alerts that claim to be causal: the alert gets raised because of a symptom, not because the underlying issue has been identified. Please feel free to prove me wrong.


“Are the scores based on data entered by a user or is the solution providing an analysis of data to calculate the score?”

  • GREAT question to ask. Blue Canary believes that the institution should mine existing data for alerts. If you have to ask the student, you'll have to ask them multiple times, and at that point you're essentially offloading all of the heavy lifting onto the student.
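
As a rough illustration of what mining existing data can look like, here is a sketch that derives alert signals from a hypothetical LMS event log. The file and column names are assumptions, not a real schema:

```python
# Sketch: derive alert signals from data the institution already has,
# rather than asking the student. The schema below is hypothetical.
import pandas as pd

events = pd.read_csv("lms_events.csv", parse_dates=["timestamp"])

as_of = events["timestamp"].max()
signals = events.groupby("student_id").agg(
    days_since_login=("timestamp", lambda ts: (as_of - ts.max()).days),
    submissions=("event_type", lambda e: (e == "assignment_submit").sum()),
)

# Simple threshold rules as a starting point; a trained model would replace these.
signals["alert"] = (signals["days_since_login"] > 7) | (signals["submissions"] == 0)
print(signals[signals["alert"]])
```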


“If the system is analyzing data, is it the transaction data for the student, predictive, or a combination of both approaches”

  • Ideally, the system should analyze the transactions (current signals) and then make a prediction (based on past training data). We believe that's the best way to harness the value that exists in the student data.
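
Here is a minimal sketch of that transactions-plus-prediction pattern. The feature names and files are illustrative assumptions, not Blue Canary's actual model:

```python
# Sketch: fit on a prior term's known outcomes, then score the current term.
import pandas as pd
from sklearn.linear_model import LogisticRegression

FEATURES = ["days_since_login", "submissions", "gpa"]  # hypothetical signals

history = pd.read_csv("prior_terms.csv")   # past signals plus known outcomes
current = pd.read_csv("current_term.csv")  # this term's signals so far

model = LogisticRegression(max_iter=1000)
model.fit(history[FEATURES], history["retained"])  # retained coded 0/1

# With classes [0, 1], column 0 is P(retained = 0): the early-alert risk score.
current["risk"] = model.predict_proba(current[FEATURES])[:, 0]
alerts = current.sort_values("risk", ascending=False).head(25)
print(alerts[["student_id", "risk"]])
```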


“If the data is entered by a user and matched to student transaction data, how is the user identifying the value of the data? Is it based on a predictive model or is simply the identification of organizational beliefs about the students needing assistance?”

  • Another good comment. There has to be some tie between the data and the desired outcome (the dependent variable in the model). Usually, model training will shed light on the relationship. Anecdotes and simple rules are a good place to start, but hopefully that's not what the school is paying a vendor for. I dug into this a little more in my blog post about 'Secret Sauce'.
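
To make that concrete, here is a small self-contained sketch of how training surfaces the relationship between a signal and the dependent variable. The data is synthetic and the signals are stand-ins:

```python
# Sketch: fit a simple model and inspect its coefficients, letting the
# training data (synthetic here) rather than anecdote rank the signals.
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X = pd.DataFrame({
    "days_since_login": rng.integers(0, 30, 1000),
    "forum_posts": rng.integers(0, 20, 1000),
})
# Synthetic outcome: long absences lower the odds of retention.
logit = 1.5 - 0.15 * X["days_since_login"] + 0.05 * X["forum_posts"]
y = rng.random(1000) < 1 / (1 + np.exp(-logit))

model = LogisticRegression(max_iter=1000).fit(X, y)
odds_ratios = pd.Series(np.exp(model.coef_[0]), index=X.columns)

# Odds ratios below 1 flag signals associated with lower retention odds.
print(odds_ratios.sort_values())
```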


“Does the system provide analysis of the success of the proactive support programs? Once a potential student is identified and the organization takes a proactive action, how is the success of the action evaluated”

  • Measuring the efficacy of the model is one thing.  Measuring the efficacy of specific interventions is a totally different exercise.  For this, the institution needs a whole lifecycle of data and a good CRM system:
    1. Start with a history of student transactional data
    2. Use it to build a predictive model that spawns early alerts
    3. Act on the early alerts and track the interventions (and results) in a CRM system
    4. Build up a history of intervention data and analyze that for effective patterns
  • Numbers 1 through 3 are challenging enough. I believe number 4 is a bit of a Holy Grail at this stage in the game (in Higher Ed), but someone will get there soon enough; a first pass at that analysis is sketched below.
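
For what number 4 might eventually look like, here is a first-pass sketch over a hypothetical CRM export. At best it surfaces associations; students are not randomly assigned to interventions, so it is not causal proof that any outreach works:

```python
# Sketch: which intervention types are associated with better outcomes?
# Assumes a CRM export with student_id, intervention_type, retained (0/1).
import pandas as pd

crm = pd.read_csv("interventions.csv")

summary = (
    crm.groupby("intervention_type")["retained"]
       .agg(students="size", retention_rate="mean")
       .sort_values("retention_rate", ascending=False)
)
print(summary)
```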


Kudos to Jim Ritchey for a great post. Thanks for helping to continue the conversation around analytics and early alerts in Higher Ed.