
Going Multimodal: Notes from the March 17 Webinar

[Image: Concept map of North American trees - Coniferous]

Many thanks to the good folks who came out for yesterday’s webinar, “Multimodal Strategies for Communication & Expression.” Ann Marie and I appreciated the contributions made, which I’ve incorporated into our notes below.

The content of the webinar was based on a 2008 white paper commissioned by Cisco and written by the Metiri Group, titled Multimodal Learning through Media: What the Research Says. I liked this report when it was published and decided to resurrect it as the subject of a webinar because, at just 24 pages (including appendices), it’s a bite-size synthesis of the research behind multimodal learning and how it can inform the use of multimedia for instruction. The framework of the paper centers on three key aspects of multimodal learning:

  • The physical functioning of the brain (neuroscience)
  • The implications for learning (cognitive science)
  • What the above means for the use of multimedia

So, we set out to define multimodal learning, to summarize the research behind it and, most enjoyably, to demonstrate and provide examples of how it can be accomplished through multimedia applications on the MLTI MacBooks.

We described multimodal learning as learning through multiple senses (auditory, visual, tactile, olfactory, and gustatory), each of which activates different responses in the brain. The idea, which is supported by research, is that the more modes and contexts through which we experience a fact, a concept, or an application, the more likely we are to retain it. So this research is good news if it has always seemed like common sense to you: conveying information in more than one way increases the likelihood that your students will understand it. And, of course, multimedia, with its combination of text, sound, and visuals, can help us with this. If you read the white paper, you’ll find that the two sensory channels of our working memory that are associated with multimedia (i.e., verbal/text and visual/spatial) work together to augment understanding.

But simply attending to the dual sensory channels isn’t enough. To truly augment deep learning, the researchers remind us that we need to combine the use of multimedia with what we know about other effective teaching practices, such as those presented in the seminal National Research Council book, How People Learn. That work rests on the principle that we need to:

  • Build on students’ background knowledge so that they can make sense of new learning by connecting it to what they already know;
  • Help students develop deep content knowledge by helping them to organize facts, theories, and applications of the discipline into a framework;
  • Teach kids how to think about their own thinking…to independently check in with themselves to question their understanding and to use their own learning strategies to approach and solve problems. If I explained this well enough, you’ll recognize it as metacognition.

So, leveraging a combination of the sensory channels with best teaching practices is going to most effectively augment students’ capacity to learn.

Multimodal/Multimedia Principles

With that summary of the research presented in the white paper, we moved on to how we can apply it with the MLTI MacBooks. The paper lists a set of eight research-based principles that guide how to best apply modality and multimedia for learning. We selected just a few of those to demonstrate how you might effectively use your MLTI MacBook.

The first is the Multimedia Principle, which simply states:

Retention is improved through words and pictures rather than through words alone.

But don’t go overboard because the Coherence Principle cautions us that too many words, pictures, and sounds are counterproductive to learning.

Specific MLTI applications for which examples of the Multimedia Principle were given included Comic Life, Photo Booth, GarageBand, and iCal.

The strategy of using “sequential art” with Comic Life as a tool resonated with folks for whom comic books have served to support student literacy (and their own when they were emerging readers themselves!).

In addition to capturing photos, Photo Booth was recognized as an assessment tool by teachers who have their students record themselves giving a performance, such as reading aloud or speaking a second language, and then use the video for conferencing.

GarageBand was described as a versatile multimedia producer because of the ease with which voice and music can be added to the combination of text and visuals. And iCal was lauded for its integrated audio features that can be customized for student reminders and alerts.

We then moved on to the Modality Principle, which simply states:

Students learn better from animation and narration than from animation and onscreen text.

I pushed back on this principle in consideration of students who need on-screen text in order to access the content of a video. Students who are deaf or hard-of-hearing rely on closed captioning, which is text of what is being spoken by actors or narrators, as well as any other relevant sounds. English learners can also benefit from closed captions because they convey verbal speech in an additional mode, which can support their acquisition of English.

The good news is that today we have resources that give us choices about how we experience video. These choices include videos that offer closed captioning (which allows the user to turn captions on and off, as opposed to open captions, which are always visible) and audio description, the addition of a narrator who describes what is happening on screen when there is no dialog or other sound to indicate the action. Just as closed captioning was originally developed for people who are deaf, audio description is designed for people who are blind. Arguably, however, both have implications for multimodal learning.
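To make this concrete, here is what a couple of closed-caption cues might look like inside a video’s caption file. This is an invented illustration, laid out in the common SubRip (.srt) style rather than taken from any particular title: each numbered cue pairs a start and end time with the text to display, including non-speech sounds in brackets.

  1
  00:00:04,000 --> 00:00:07,500
  [birds chirping]
  NARRATOR: Conifers keep their needles year-round.

  2
  00:00:08,000 --> 00:00:11,000
  Deciduous trees, by contrast, drop their leaves each fall.

Because cues like these are carried as their own timed track rather than burned into the picture, a player can switch them on or off on demand, which is exactly what distinguishes closed captions from open captions.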

The Described and Captioned Media Program has a library of videos that are closed captioned, audio described, or both. During the webinar, I provided an example of a closed captioned and audio described video from their library and highlighted how these features provide additional inputs that improve the accessibility of the content for all learners. A couple of webinar participants aptly identified this as universal design for learning (UDL).

Finally, we examined the Contiguity Principles:

  • Spatially: Corresponding words and pictures should be presented near each other rather than far apart
  • Temporally: They should be presented simultaneously rather than successively

Two MLTI applications were featured in this section. iPhoto allows you to create pages of images with corresponding text displayed directly beneath each picture, so words and pictures appear both close together and at the same time. An example of an advance organizer for a field trip to Boston’s Freedom Trail is provided in the webinar recording.

The second application featured for contiguity was OmniGraffle, a concept mapping program. Concept mapping is another research-based strategy; it targets students’ ability to organize information (facts, concepts, and applications of a content area) into a framework that helps them retain new learning and recall it over time. One webinar participant explained that she uses concept mapping for vocabulary instruction. OmniGraffle allows users to add images to symbols, which extends its usefulness as a concept mapping tool. The example featured in the webinar distinguishes coniferous from deciduous trees, with images of representative trees appearing next to their names.

In summary, the good news for us technology integrationists is that research shows multimedia can be an effective teaching tool. We must, however, remember to ground our use of it in research-based principles. The other good news is that your MLTI MacBook is your partner in executing multimodal learning experiences for your students.
