Performance prediction is all the rage in higher education.
Testing agencies, researchers, and software vendors all promise to “predict” student outcomes. Some find this prospect alarming, imagining that predictions might determine a student’s fate, or somehow restrict their potential. It’s all a little less scary if we take a step back and think about what it is we’re really doing – learning from experience, understanding the past. Educators don’t learn about the past to predict the future; we learn about the past to change the future.
There are two ways to predict the future. One is with theory: given a model of how the world works, you can take what you know now and deduce what will happen next. The other is with experience. Absent a defensible theory, experience is our only guide. Assuming that what happened in the past is likely to recur in the future is simple induction, and it's usually your best bet. Wonderful research[1] by people like Philip Tetlock and Daniel Kahneman[2] shows that even 'expert' human predictors are less accurate than simple extrapolation of this kind.
To predict the grade a student, call her Amy, might receive in a class, we examine how students like her have done in the past. What sorts of differences matter? Again, only experience can help us decide. Imagine we do this really well, identifying many students from the past who are just like Amy across all the characteristics we know about. Other things being equal, the records of what happened to students like Amy in the past would be our very best predictor of Amy's performance in the future. But of course we don't have to let other things be equal.
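To make the idea concrete, here is a minimal sketch of prediction-by-analogy: find the past students most similar to Amy and average their grades. All of the names, characteristics, and numbers below are hypothetical illustrations, not data from any real system.

```python
# Prediction by induction: score how similar past students are to Amy,
# then average the grades of her closest matches.
# All records here are made up for illustration.

def similarity(a, b):
    """Count how many characteristics two student records share."""
    return sum(1 for key in a if key != "grade" and a.get(key) == b.get(key))

def predict_grade(student, past_students, k=3):
    """Average the grades of the k past students most similar to `student`."""
    ranked = sorted(past_students, key=lambda p: similarity(student, p), reverse=True)
    nearest = ranked[:k]
    return sum(p["grade"] for p in nearest) / len(nearest)

past = [
    {"major": "chem", "prior_gpa": "high", "concurrent_lab": True,  "grade": 3.7},
    {"major": "chem", "prior_gpa": "high", "concurrent_lab": False, "grade": 3.1},
    {"major": "bio",  "prior_gpa": "high", "concurrent_lab": True,  "grade": 3.5},
    {"major": "bio",  "prior_gpa": "low",  "concurrent_lab": False, "grade": 2.4},
]

amy = {"major": "chem", "prior_gpa": "high", "concurrent_lab": True}
print(round(predict_grade(amy, past), 2))  # averages the three closest matches
```

A real learning-analytics model would use far richer data and a more careful notion of similarity, but the logic is the same: the prediction is nothing more than a summary of what happened to students like Amy before.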
Learning what’s happened to students like Amy in the past might inspire us to change what we do. Discovering that students who take chemistry lab and lecture concurrently have done better in the past[3] might inspire us to encourage all students to do this in the future – changing their outcomes – in effect breaking our ‘predictive model’. Faculty and staff at universities like Michigan have always used experience to inform their teaching and advice. Learning analytics[4] is helping us learn from the experience of all students, instead of just the few we know well.
Perhaps more important, sharing information about the past with Amy might inspire her to change what she does in the class. Learning that successful students regularly spent 12 hours a week on homework, taught their study group members what they’d learned, and began preparing for each exam ten days in advance might inspire Amy to change her own approach to the class. Using tools like ECoach, UM’s Digital Innovation Greenhouse[5] aims to put this kind of information in her hands.
When Amy learns from the past, she can change her future.
So let’s stop talking about predicting the future. What we’re really doing is learning from experience, because that’s the only reliable way we can hope to change the future.
(Originally posted on the UM Academic Innovation blog in 2014)
[1] Tetlock, Philip. Expert Political Judgment: How Good Is It? How Can We Know? Princeton University Press, 2005.
[2] Kahneman, Daniel. Thinking, Fast and Slow. Macmillan, 2011.
[3] Matz, Rebecca L., et al. “Concurrent enrollment in lecture and laboratory enhances student performance and retention.” Journal of Research in Science Teaching 49.5 (2012): 659-682.