The Bastardization of Bloom’s Two Sigma


I’m really torn over this blog post.  On one hand, it can be an insightful commentary about how the goal of attaining benefits comparable to Bloom’s two sigma findings has been a negative catalyst in the higher ed tech space.  On the other hand, it can be an AWESOME title for a Hollywood screenplay treatment.  “The Bastardization of Bloom’s Two Sigma” is the story of an over-the-hill spy who comes out of retirement when he finds out the son he never knew he had has developed super powers as an adult.  The crafty old spy needs to work his magic to get his progeny on the right track.  Starring Michael Douglas as Boris Von Bloom and Dustin Diamond (Screech from Saved by the Bell) as Charlie…the son who has developed the ability to replicate everyday objects and goes by the name “Two Sigma”.

As I read those last two sentences back out loud, I think I’ll stick with the ed tech angle.

First off, a quick primer/recap on Bloom’s Two Sigma:

  • Research published in 1984 by Benjamin Bloom based on work done by his doctoral students (Joanne Anania and Arthur Burke)
  • The experiment was done multiple times in two subject areas and with students in grades 4, 5, and 8
  • Students were randomly assigned to one of three groups and taught over a 3-week period:
    • Conventional: 30 students in a class are taught the subject matter by a teacher; students are given periodic tests
    • Mastery Learning: Same as Conventional, but with additional formative assessments and feedback to determine the extent to which students have mastered the topic
    • Tutoring: Groups of 1-3 students are taught by a tutor over the 3-week period and given the same assessments as the Mastery Learning group
  • The difference in final achievement measures between the groups was significant:
    • The average Mastery Learning student scored higher than 84% of the Conventional students (a 1 sigma, or one standard deviation, improvement)
    • The average Tutored student scored higher than 98% of the Conventional students (the 2 sigma improvement from the title); see the quick sanity check just below this list
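
As a quick sanity check on those percentiles: the sigma figures are effect sizes in standard deviations, and (assuming roughly normally distributed scores, which is what the effect-size framing implies) they map to percentiles via the standard normal CDF.  A minimal sketch:

```python
# Map Bloom's effect sizes (in standard deviations) to percentiles of
# the Conventional group's score distribution.
# Assumes scores are approximately normally distributed.
from statistics import NormalDist

conventional = NormalDist(mu=0, sigma=1)  # standardized baseline

for label, effect_size in [("Mastery Learning", 1.0), ("Tutoring", 2.0)]:
    percentile = conventional.cdf(effect_size) * 100
    print(f"{label}: +{effect_size:.0f} sigma -> beats "
          f"{percentile:.0f}% of Conventional students")

# Output:
# Mastery Learning: +1 sigma -> beats 84% of Conventional students
# Tutoring: +2 sigma -> beats 98% of Conventional students
```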

Given this experiment, I’ll leave it to Dr. Bloom to summarize the 2 Sigma problem:

“The tutoring process demonstrates that most of the students have the potential to reach this high level of learning.  I believe an important task of research and instruction is to seek ways of accomplishing this under more practical and realistic conditions than the one-to-one tutoring, which is too costly for most societies to bear on a large scale.  This is the ‘2 sigma’ problem.  Can researchers and teachers devise teaching-learning conditions that will enable the majority of students under group instruction to attain levels of achievement that can at present be reached only under good tutoring conditions?”

Take a moment to read that again.  This is the mantra of much of modern-day ed tech…can we get the same outcomes as 1:1 tutoring without incurring the cost of 1:1 tutoring?  Analytics, personalized learning, adaptive learning, AI, machine learning…amirite?

So here’s where the bastardization comes into play.  I think there has been so much focus on the tutoring part that we’ve lost sight of the learning part.  “Will it scale?” is arguably the most important question an ed tech investor will ask.  That’s fine, and it’s an absolutely justifiable question.  But if you read through Bloom’s paper, there are two parts to the question: will it scale, AND will it improve learning levels over the conventional baseline?  As with many “shiny object” technologies, we tend to focus on the scale part and gloss over the improvement part (or worse yet, just assume the learning will happen).

The biggest problem I see here is that nobody is 100% sure that a scalable 2 sigma improvement is even possible.  The researchers involved (Bloom and Herbert Walberg) recognized that smaller incremental gains can be achieved from other interventions.  To borrow an imperfect sports analogy, it’s easier to hit a bunch of singles than to swing for a home run in every at-bat.  So now you have ed tech companies (and their corresponding funding) chasing after an elusive 2 sigma brass ring that may not even exist.  What we do see a fair amount of are “semi-scalable” solutions: learning tools that work, but only in a specific subject or domain (think of Khan Academy in mathematics).  It’s often not simple to take that “platform” and substitute “18th Century European History” for “math”.

Take this thought experiment as a way to put all of that ed tech spending into perspective: suppose we took all of the people, time, and money that have been invested in ed tech startups chasing this scale problem.  Instead of funding those companies, turn those people, hours, and dollars into second teachers and additional resources in K-12 schools (yes…I know there are logistical and distribution challenges…it’s just a thought experiment).  I’d posit that this redistribution of resources would have a MUCH greater impact on learning and student success than the ed tech investments have had.

My final point is that this isn’t an absolute condemnation of the space.  There are some phenomenal researchers and technologists developing processes, tools, and approaches that have a measurable positive impact on student success.  What I’m wary of is the hype and the robot tutors that focus solely on scale at the expense of efficacy.  Learning is hard.  Measuring learning is really hard.  Proving that you’ve built a better mousetrap is really, really hard.  I’ll just sit here and wait for ed tech companies to focus more on learning/success than on funding/multiples.  After all, there’s a better probability of that happening than of Michael Douglas starring in a buddy movie with Screech.

7 Comments

      • Beth Aguiar

        This is a great post, Mike. I heard Richard Garrett (CHLOE) speak a couple of years ago about the two-factor problem relative to online education. As far as adaptive learning is concerned, the University of Central Florida ran some successful pilots using RealizeIT for their introductory psychology and nursing education courses. But other than that, I haven’t heard of similar success stories save for those related to math (as you point out). Good luck with your latest venture!

        • admin

          Thanks for the comment and the well wishes, Beth. The UCF folks are in that small group of pioneers who have been doing innovative things with data for years…they are good people. I’m fortunate to have worked with them and others in the higher ed learning analytics field for the last 10 years. Hope you’re well!

  • Kristýna Šmídová

    It feels like the ed tech companies are focusing on the unimportant part of the 2-sigma problem. I am not sure the main effect lies in the fact that the tutor can modify the teaching in response to the student’s performance. What if the key is the simple therapeutic effect of the tutor being there for the student? I think that connecting people in such 1:1 relationships is the key. Why, for example, couldn’t an 8th grader tutor a 3rd grader? Connecting people in that manner could be realistic and might help a lot! 🙂 What if we focused on systematic support of something like that?

  • We need data on both axes for the efficient frontier of scale/cost and improvement.
    Why worry about mythical brass rings when we can just go find out and plot the data?

    You just find out what each intervention CAN do on its scalability/cost curve, treat it as a programmatic option, and then manage the portfolio of possible returns to maximize expected return.

    This is not a new problem, and not even a head-scratcher.
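
For what it’s worth, a minimal sketch of the portfolio idea in the comment above might look like the following.  The interventions, costs, and effect sizes are entirely made up for illustration; only the Pareto-frontier mechanics are the point.

```python
# Hypothetical sketch: treat each intervention as (cost per student,
# effect size in sigmas) and keep only the Pareto-efficient ones --
# those where no other option is both cheaper and more effective.
# All names and numbers below are invented for illustration.
interventions = {
    "conventional":      (0,    0.0),
    "mastery_learning":  (50,   1.0),
    "peer_tutoring":     (100,  0.4),
    "adaptive_software": (200,  0.3),
    "1:1_tutoring":      (2000, 2.0),
}

def pareto_frontier(options):
    """Return (cost, effect, name) tuples for non-dominated options."""
    frontier = []
    for name, (cost, effect) in options.items():
        dominated = any(
            c <= cost and e >= effect and (c, e) != (cost, effect)
            for c, e in options.values()
        )
        if not dominated:
            frontier.append((cost, effect, name))
    return sorted(frontier)

for cost, effect, name in pareto_frontier(interventions):
    print(f"${cost:>5}/student -> +{effect:.1f} sigma  ({name})")
```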

