Walk, Watch, Learn: On-the-go video learning

20 April 2022

As COVID crept across the world, confining people to their homes and chaining them to their desks for work, school, and play, Zhao Shengdong was no exception. Sitting through one online class after another, the associate professor at NUS Computing and his PhD student Ashwin Ram soon began to wonder: What can we do to enhance the online learning experience? Instead of a static setting, could people learn dynamically, on the go?

“Imagine I want to watch a tutorial on cooking while in the kitchen or look at a yoga video while practising yoga,” says Zhao, who heads the Human Computer Interaction (HCI) Lab at NUS.

“If we think about it in a broader context, we’re now in this age of ubiquitous computing, where we can receive information anytime, anywhere,” he continues. “So we shouldn’t be restricted to just static information.”

“But now the question is: can people effectively receive information on the go? There is definitely a need out there, but is it possible to do that?” asks Zhao.

If you have ever sent a text while walking down the street, you will know how tricky this can be. Not only do you have to keep one eye on the screen, you also have to deftly avoid fellow pedestrians and, heaven forbid, incoming vehicles. Now add watching a video, and actually absorbing and retaining information from it, and the task becomes close to impossible without a collision.

But what if we could improve the viewing mode (using a pair of smart glasses instead of a smartphone) and change the way the videos are presented? Would that allow people to learn while on the go? Zhao and Ram decided to find out.

Small steps to start

The pair first conducted a small pilot study with a handful of volunteers, fitting them with smart glasses and asking them to walk along a straight corridor while watching various educational videos.

Each video used a different presentation style, and Zhao and Ram were interested to see which elements enhanced the viewers’ learning experience, and which diminished it. For instance, did animations help them learn better, or were slides and digital blackboards more useful? Was having a talking head engaging or distracting? And what about the way the text was displayed on screen?

“The first lesson we learnt from the pilot study was that this idea can actually work — that users can actually learn effectively while walking,” says Zhao, who admits that they had “some doubts” in the beginning.

The second learning point was just as clear, and provided further direction for their work ahead: “We learnt that existing videos need to be adapted and completely redesigned to cater for this particular style of watching videos on the go,” he says.

The main issues with existing videos, Zhao and Ram discovered, were that they were often too densely packed with information to absorb while walking, and that their opaque backgrounds prevented smart glasses wearers from seeing their surroundings.

Videos revamped

Armed with this newfound insight, the pair then delved into the literature, searching for the best ways to present information for on-the-go video learning. “We wanted to know what good practices can help guide people’s attention and make the content easier to view,” explains Zhao.

The researchers were also keen to find out more about how people viewed on-the-go learning, and the difficulties they faced while watching videos in this manner. So they conducted face-to-face interviews with 16 volunteers and issued a separate online survey to 80 participants.

From their research, Zhao and Ram identified three presentation techniques that seemed most promising for improving video learning: varying the colour, orientation, or brightness of the on-screen information (highlighting); displaying the information progressively (sequentiality); and keeping it on screen until the end of the video (data persistence).

To see how these techniques impacted learning, Zhao and Ram modified a number of medical videos to incorporate these specific elements. They then recruited 16 participants to view the videos on smart glasses while walking up and down a 54-metre-long corridor.

The highlighting technique, they found, wasn’t very useful. But presenting the information sequentially improved the participants’ immediate recall by 36%. When that was combined with data persistence, the figure rose to 56%. “What’s more, all these performance gains were achieved with little or no disruption to the walking speed,” says Zhao, whose findings were published in March.

Encouraged, the researchers then worked to develop and refine a video presentation style catered specifically to on-the-go learning using smart glasses. Videos created in the Layered Serial Visual Presentation (LSVP) style, as they christened it, would always comprise five key elements: information would appear sequentially on screen and remain until the end of the video; content would be bite-sized rather than densely packed; it would always be presented against a transparent background; and its colour would be adjusted to ensure clarity against the environmental background.
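As a rough illustration of how these rules might be encoded, here is a minimal sketch in Python of an LSVP presentation spec. The class name, fields, and word-count threshold are all assumptions made for illustration; the paper describes the style itself, not an implementation.

```python
from dataclasses import dataclass

# Hypothetical encoding of the five LSVP rules described above.
# Names and the word-count threshold are illustrative assumptions.

MAX_WORDS_PER_SEGMENT = 12  # assumed "bite-sized" limit


@dataclass
class LsvpSegment:
    text: str
    appear_at: float             # seconds into the video: sequential reveal
    persist: bool = True         # stays on screen until the video ends
    transparent_bg: bool = True  # never occludes the wearer's surroundings

    def is_bite_sized(self) -> bool:
        # Content should be sparse enough to absorb while walking.
        return len(self.text.split()) <= MAX_WORDS_PER_SEGMENT


def text_colour(background_luminance: float) -> str:
    """Adjust content colour for contrast against the real-world
    background seen through the glasses (a simple luminance heuristic)."""
    return "black" if background_luminance > 0.5 else "white"
```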

Seamless switching

As a final test, Zhao and Ram wanted to see how their LSVP, smart glasses-based approach to video learning would compare against the more traditional smartphone-based method. They recruited another 16 participants and tasked them with walking along three different paths: one straight, one a curved figure-8 littered with obstacles, and one where they had to navigate twists and turns according to the signage along the way.

While doing so, the participants watched LSVP videos via smart glasses, and later repeated the experiment while watching normal videos on their phones. They viewed a total of 12 different videos and were quizzed on their content immediately after the experiment, as well as seven days later.

“Overall, users were able to learn better with the smart glasses than on their smartphones,” says Zhao. “Not only did they learn better, they actually walked better too — up to 5.6% faster — which was surprising.” This was especially true when it came to navigating the more complex paths.

Participants had better recall when they learnt via the LSVP videos on the smart glasses compared with the smartphone videos. This was true when they were quizzed immediately after the test, and also seven days later, when recall was still 17% higher.

Zhao believes the better learning outcomes observed with the LSVP videos could be linked to how they allow users to transition more smoothly between the content being displayed and their surroundings. “It supports the interleaving learning theory, which states that people memorise better when they switch between different things or ideas, rather than staring at something continuously.”

“Switching your attention is more effortless with the smart glasses,” he says. “Because with mobile phones, the user has to constantly look up and down, back and forth between the phone and the environment.”

As a next step, Zhao and Ram are now building a tool to make the process of converting videos into the LSVP style much easier. When a video is uploaded, the tool will analyse and segment it into different elements. It will then automatically convert some of these elements into the LSVP style (for example, making the background transparent) while providing the user with easy options to make other modifications.
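While the tool itself has not been released, the pipeline described above might look something like the following Python sketch, in which every function is a hypothetical stand-in for a step Zhao mentions:

```python
from typing import Dict, List

# Illustrative sketch of the planned conversion pipeline. The real tool
# is unpublished; all names here are hypothetical placeholders.


def segment_video(video_path: str) -> List[Dict]:
    """Split an uploaded video into elements (text overlays, diagrams,
    narration blocks). Stubbed out for illustration."""
    return [{"id": 0, "kind": "text", "content": "Step 1: ..."}]


def to_lsvp(element: Dict) -> Dict:
    """Apply the automatic LSVP conversions to a single element."""
    element["transparent_bg"] = True  # never block the wearer's view
    element["persist"] = True         # keep on screen until the video ends
    return element


def convert(video_path: str) -> List[Dict]:
    elements = segment_video(video_path)
    # Elements are revealed one after another (sequentiality); density
    # and colour tweaks are left as easy manual options for the user.
    return [to_lsvp(e) for e in elements]


print(convert("lecture.mp4"))
```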

“Currently, it takes a few hours to convert a video that’s only a few minutes long into LSVP,” says Zhao. “But with this tool, we hope to reduce that time to one-fifth or even one-tenth. This will help create more videos that can be viewed in on-the-go scenarios using smart glasses.”

Paper: LSVP: Towards Effective On-the-go Video Learning Using Optical Head-Mounted Displays
