I’ve previously written about sports analytics and parallels to learning analytics in higher ed. So I was obviously excited to see this FiveThirtyEight article — a behind-the-scenes writeup of the data behind player ratings in the Madden NFL video game. With the ‘gamification’ thread that’s picked up steam in higher ed over the last few years, it makes sense to keep an eye out for connections like this. While there are definitely lessons to be shared between the gaming space and the learning space, there are also vast gaps that should not be glossed over. Let me point out a few ways that learning is not like the Madden NFL game.
1. It’s REALLY HARD (if not impossible) to simulate a learner
Let’s unpack this statement by first talking about how Madden simulates players. They rate players on 43 different metrics — a wide variety of characteristics ranging from mainstream things like catching and speed to obscure measures like trucking and release. The FiveThirtyEight article talks about the difficulty of gathering these metrics for players. For experienced players, past performance plays a huge role. For rookies, it’s much more subjective.
I found it very interesting that the EA Sports staff (the makers of Madden) are very transparent about their ratings. That’s something that higher ed definitely needs to adopt/maintain. According to the article, the ratings are updated each week based on players’ performance…a sort of manual Bayesian approach. Also, the scores are combined into a single Overall Rating — one number on a scale of 1 to 100. This Overall Rating is the single dimensional metric that game players (and the real NFL players) covet.
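The article doesn’t spell out the update formula, so purely for intuition, here’s a minimal sketch of what a “Bayesian” weekly rating update could look like: a normal-normal update that blends the prior rating with the week’s observed performance, weighted by how much you trust each. All of the numbers and the `obs_var` parameter below are made up for illustration — this is not EA’s actual method.

```python
def update_rating(prior_mean, prior_var, week_score, obs_var):
    """One normal-normal Bayesian update: nudge the prior rating toward
    this week's observed performance, weighted by relative uncertainty."""
    k = prior_var / (prior_var + obs_var)   # gain: how much to trust new data
    post_mean = prior_mean + k * (week_score - prior_mean)
    post_var = (1 - k) * prior_var          # uncertainty shrinks after observing
    return post_mean, post_var

# Hypothetical: a player rated 85 (variance 16) has a strong week scored as 95.
mean, var = update_rating(85.0, 16.0, 95.0, 48.0)
print(mean, var)  # 87.5 12.0 — rating rises a quarter of the way toward 95
```

A noisier observation (larger `obs_var`) moves the rating less, which matches the intuition that one good game shouldn’t rewrite a veteran’s rating.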
When thinking about differences between Madden and learning analytics, there are a few points to make about the ratings. First, the EA team says that the Overall Rating is a weighted average of the 43 metrics, but some of the weights are zero. This feeds into the idea that if you are going to measure something, make sure you know how you plan to use it. Otherwise, the effort expended on measuring is wasted. Second, EA chooses to summarize an entire player into a single dimension (Overall Rating). While this is great for simplicity, usability, and game play, it does a disservice to the player. The chart on the right shows the actual ratings on the 43 metrics for Larry Fitzgerald. I didn’t see “Nicest Guy in the NFL” or “Best Work Ethic” anywhere on the list. My colleague Simon Buckingham Shum said it nicely in his post about learning and analytics when he talked about the “misuse of blunt, blind analytics — proxy indicators that do not do justice to the complexity of real people, and the rich forms that learning take”.
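The zero-weight point is easy to see in miniature. Here’s a toy weighted average with invented metric names, scores, and weights (EA doesn’t publish its real weights): the `trucking` score can be anything at all and the Overall Rating won’t budge, so measuring it contributes nothing to this particular summary.

```python
# Hypothetical metric scores (0-100) and weights -- illustration only.
metrics = {"speed": 92, "catching": 97, "trucking": 70, "release": 95}
weights = {"speed": 0.3, "catching": 0.5, "trucking": 0.0, "release": 0.2}

def overall_rating(metrics, weights):
    """Weighted average of metric scores; zero-weight metrics are ignored."""
    total_weight = sum(weights.values())
    return sum(metrics[m] * weights[m] for m in metrics) / total_weight

print(round(overall_rating(metrics, weights)))  # 95 -- trucking never matters
```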
2. Learning is not a probabilistic roll of the dice
After the ratings are in place, game play comes down to the luck of the draw. I’ll use a simple example from another sports game called Strat-o-matic. I spent countless hours of my childhood playing this game, in which you simulated baseball games with cards representing the batter and the pitcher, and a roll of the dice determined the outcome of each play. For example, if you wanted Tim Raines (one of the best base stealers of his time) to try to steal second base, you’d look on his player card and it would say “Speed: AAA”. The rules of the game say that an AAA-rated base stealer is safe if the 20-sided die comes up 1–19 and out if you roll a 20. Needless to say, Tim Raines was safe 95% of the time.
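That 95% falls straight out of the d20 rule (19 safe faces out of 20), and a quick simulation confirms it. This is a sketch of the steal rule as I described it above, not the full Strat-o-matic ruleset:

```python
import random

def steal_attempt(rng):
    """AAA-rated base stealer: safe on a d20 roll of 1-19, out on 20."""
    return rng.randint(1, 20) <= 19

rng = random.Random(42)          # seeded so the run is reproducible
trials = 100_000
safe = sum(steal_attempt(rng) for _ in range(trials))
print(f"safe rate: {safe / trials:.3f}")  # hovers around 0.950
```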
Unfortunately (or fortunately), learning isn’t like that. My ability to understand Macroeconomics when I was in grad school was not the outcome of the roll of the dice. It was an amalgam of my capability to learn, my desire to learn, the difficulty of the material, the environment in which I was learning, externalities…all of which were changing on a day-to-day basis. There are researchers much smarter than I who continue to focus on how this process works. With all due respect to the companies who have mastery products on the market, I’d argue that they are just scratching the surface of learning, and that the most progress they’ve made is in quantitative fields like math where a program can ask many permutations of the same concept in order to gauge some level of mastery.
3. Competition drives Madden, but it doesn’t drive learning
Education is not a positional good. In Fred Hirsch’s 1977 book “Social Limits to Growth”, he describes positional goods as those where “one person’s increased consumption or use of it reduces its availability for other people’s enjoyment”. So, if I win a game of Madden, that means that someone else cannot win the same game. There’s your foundation for competition, engagement, motivation, and enjoyment.
This is absolutely not the case for learning. My learning of supply chain management in no way detracts from your learning of the same topic. This means that those who blindly use the success of video games as a foundation for gamification in learning might be relying on false logic. It also means that analytics that pit students against each other in class rankings need to be very aware of what behaviors they are inciting (whether they intend to or not).
I think there are many things that ed tech practitioners can learn from video games like Madden. The economic motivation is strong in the video game space, so it attracts a lot of smart people. I just hope that the research and experimentation is there to sort out any spurious claims about ‘cracking the code’ to learning. Humans are complex machines and learning is a complex process, and until the singularity hits in 2045, we still have a lot to learn.