Andrew Coulson, Chief Data Science Officer of MIND Research Institute, on leveraging the right data (not just big data)
We recently released an insight brief on what we’re learning from three years of ed tech investing. In it, we identified three consistent patterns across all of our ed tech investments.
This three-part blog series is intended to give readers an in-depth look at how ed tech entrepreneurs are tackling the most challenging opportunities in the field today, and how other entrepreneurs can position themselves on the path to success as well.
NewSchools: Before we get into MIND’s approach to data analytics, what’s your story?
Andrew: I was a STEM professional for 17 years, but went back into education to solve a problem that many people, including myself, had experienced: scoring well in school did not necessarily mean you understood the content well. I originally went to a family foundation looking for great programs and instead found a small non-profit called MIND Research Institute. At the time, the organization had just 12 schools and 1,000 students doing a second-grade math program based on visual neuroscience. That visual approach was exactly what I was looking for. I’ve been with MIND ever since, and it has grown to serve more than 1 million K-8 students.
NewSchools: What makes the story of research and development into this one product – ST Math – valuable to a general audience?
Andrew: The program was created to solve a gap: math was too easily experienced as memorizing procedures. ST Math begins visually, and then gradually introduces traditional symbols and language as students master mathematical concepts. Our visual instructional approach leverages the brain’s innate spatial-temporal reasoning ability to solve mathematical problems. What makes our experience generalizable is that the visual approach adds value to just about any learning environment, and we believe it is worth investing in more research!
NewSchools: Is this kind of approach helpful for specific student groups like English Language Learners (ELL)?
Andrew: Yes, the spatial-temporal approach is based on a neuroscience foundation: all of us are wired to do far more visual reasoning than conventional instruction typically taps into. If we remove language as a barrier, it is a great benefit to students learning English. But the reason we do it is not to address an ELL gap, but a general gap. Why put the complication of abstract symbols, math vocabulary, or even heavy language dependency into introducing math concepts if you don’t have to? We see this same approach, which minimizes demand on working memory, help special education students as well. At the same time, the visuals can be made into very challenging puzzles that create productive struggle for students who may need more than they’ve been getting from just memorizing math.
NewSchools: Your visual instructional ST Math program has been developed over a long time. What are some of the ways you used data to inform product development?
Andrew: We started small – 18 students in one second-grade classroom in South Los Angeles, with 14 math games involving only fractions and proportions (very visual). One of the things we learned early on was that we needed to have some built-in quizzes to get a read on whether a new game was any good. We found that some of our newest ideas for games actually had negative results on our internal quizzes. This helped us learn what did or didn’t work. Another pretty simple use of data was to find places where we made the puzzles too hard, too quickly. These were places where students got stuck.
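As a purely illustrative sketch of the kind of analysis Andrew describes (this is not MIND’s actual pipeline; the column names and thresholds below are hypothetical), even simple per-puzzle attempt data can surface the places where students get stuck:

```python
# Hypothetical sketch: flag puzzles where students get stuck, using
# per-puzzle attempt data. The schema and thresholds are illustrative,
# not MIND's actual data model.
import pandas as pd

# Each row: one student's attempts at one puzzle.
attempts = pd.DataFrame({
    "puzzle_id":  ["frac_01", "frac_01", "frac_02", "frac_02", "frac_02"],
    "student_id": ["s1", "s2", "s1", "s2", "s3"],
    "tries":      [2, 3, 9, 11, 8],
    "passed":     [True, True, False, True, False],
})

summary = attempts.groupby("puzzle_id").agg(
    median_tries=("tries", "median"),
    pass_rate=("passed", "mean"),
)

# Puzzles with many tries and a low pass rate are candidates for
# "too hard, too quickly" -- places to re-sequence or redesign.
stuck_points = summary[(summary["median_tries"] > 5) & (summary["pass_rate"] < 0.6)]
print(stuck_points)
```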
NewSchools: Any other surprising results?
Andrew: Yes! We found that the game-entertainment features we used to include in our math puzzles were distracting students from making progress. When we got rid of them, students stayed engaged with the puzzles, and fewer got slowed down or stuck.
NewSchools: Since you’re now serving more than 1 million students and more than 50,000 teachers, what do you say about Big Data?
Andrew: We’ve gotten to where we are with what I would call pretty small data: a simple logic model and metrics like logins, minutes, and amount of content completed. These annual summary metrics got us results at research Tiers 1, 2, and 3 under the Every Student Succeeds Act, and allowed us to do district-requested efficacy studies. The truly big data (billions of records) is at the student/session/item layer, with time frames measured in seconds. You can still look at simple metrics there, but we are simply too small to tackle that much data ourselves. So we have ongoing partnerships with ACT, NCSU/NSF, Digital Promise, and Harvard GSE. For example, we’re currently involved in a project with Digital Promise around aspects of social emotional learning. We’re working with them to examine aspects of perseverance: how to measure it, whether it changes over time, and how to personalize puzzle difficulty or teacher alerts to match individual students’ needs.
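To give a rough sense of what “pretty small data” can look like in practice (again, a hypothetical sketch with made-up field names, not MIND’s actual schema), session-level records can be rolled up into the simple summary metrics Andrew mentions:

```python
# Hypothetical sketch: roll session-level records up into simple annual
# usage metrics (logins, minutes, content completed). Field names and
# thresholds are illustrative only.
import pandas as pd

sessions = pd.DataFrame({
    "student_id":        ["s1", "s1", "s2", "s2", "s2"],
    "session_minutes":   [22, 18, 30, 25, 15],
    "puzzles_completed": [6, 4, 9, 7, 3],
})

usage = sessions.groupby("student_id").agg(
    logins=("session_minutes", "size"),
    total_minutes=("session_minutes", "sum"),
    content_completed=("puzzles_completed", "sum"),
)

# An efficacy study might compare outcomes for students who met a
# minimum-usage threshold against those who did not (threshold invented here).
met_threshold = usage[(usage["total_minutes"] >= 60) & (usage["logins"] >= 3)]
print(met_threshold)
```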
NewSchools: What’s your biggest area of improvement right now?
Andrew: To get a result, districts need to commit to serious use of a program. This is not easy to do, and lots of districts have a hard time meeting the usage requirements, such as minutes per week. Districts should ask their content providers for a minimum usage recommendation and then determine how they can exceed it, or, if they can’t, get help customizing their implementation.
Visit the MIND Research Institute’s website to learn more about ST Math, their visual approach, and how they’re using the right data, not just big data, to drive positive student outcomes.