At InstructureCon 2014 I shared some of our thinking behind a couple of new Canvas tools that aim to make face-to-face teaching easier and more effective: Polls (to address conceptual understanding) and MagicMarker (to address skills, attitudes, and more). In the coming months I’ll be talking more about the central hypothesis of “lossiness” that drove the development of these tools for the company, but here I intend to diverge a bit and explore this idea from alternative angles.
The assumptions we began with are…
- Education is based on feedback loops, e.g. between student and teacher
- Feedback loops are “lossy” — lost information can compromise quality
- “Lossiness” is especially prevalent in face-to-face learning environments
- Decreasing lossiness can improve instruction and learning outcomes
The term “lossy” comes from data storage (and, originally, electrical engineering); you may be most familiar with lossiness in digital media such as JPGs and MP3s, both of which sacrifice information for expedience: less information means smaller files, faster downloads, and more albums on your device.
Most people can’t detect quality differences in MP3s above 128 kbps, and many JPGs with significant lossiness are “good enough”.
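To make the analogy concrete, here’s a toy sketch in Python (my own illustration, nothing to do with Canvas) of the core move in lossy compression: quantization. Coarsening each value shrinks the data, but the discarded detail is gone for good.

```python
def quantize(samples, bits):
    """Reduce each 16-bit sample's precision to `bits` bits.

    Coarser buckets mean fewer distinct values to store (smaller
    "files"), at the cost of detail that cannot be recovered.
    """
    step = 2 ** (16 - bits)
    return [(s // step) * step for s in samples]

samples = [12034, 12100, 11987, 12250, 13500]
lossy = quantize(samples, 4)  # keep only 4 bits of precision

print(samples)           # original detail
print(lossy)             # [8192, 8192, 8192, 8192, 12288]
print(lossy == samples)  # False: the original can't be reconstructed
```

Whether the loss matters depends entirely on what you need the data for — exactly the question this post asks about education.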
Which raises the question: can education be lossy and still be effective?
Certainly. It has been for a long time, depending on how you want to measure it. But that doesn’t mean we should automatically accept lossiness, even when it seems innate to our current teaching and learning practices. We should not, at least not without first identifying the value of different kinds of information and understanding the impact their loss or presence may have on learning outcomes. A couple of obvious examples:
- Online learning often sacrifices spoken language, physical presence, and human expressions. How does that impact learning outcomes? For some outcomes the impact will be negligible; in others it will be significant.
- Face-to-face learning often sacrifices full class participation — let alone meaningful tracking of participation — for the sake of time. How does that impact learning outcomes?
Even when we look beyond content and communication, we find that some assessment methods are lossier than others: some instruments fail to capture or accurately measure the most important information about learning. The goal of educative assessment and formative feedback is essentially to stop wasting the valuable information gathered through assessment by providing it to learners in a way that informs and encourages improvement of understanding and ability.
As educational data mining is showing us, sometimes just analyzing learner behavior reveals insights into the process of learning that can improve it. But, so far, most learning analytics have been limited to online LMS environments.
The latter gets us into some tricky territory: If lossless learning is our goal, do we try to capture everything that happens in a classroom? We might be acclimating to this in online environments, but will we ever want video cameras recording and analyzing student expressions, postures, gestures, and vocal tones?
Technology is giving us the power to do so, but it’s not the only way forward. I’m personally more interested in using technology to increase voluntary student participation that expresses real learning. These expressions can be logged and measured against learning outcomes, rather than behavioral predictors. This is especially hard to do face-to-face, which is why I’m excited to see how the new Canvas tools might help.
I also want to explore other interpretations of lossiness: for example, how student ownership may be lost inside closed systems of education, and how we might build systems that accommodate both institutions’ needs for measurement and learners’ needs for independence and autonomy. Or, following Antoine de Saint-Exupéry’s suggestion, how much of our instructional design can we eliminate and still help all learners succeed?