01 June 2020 · Department of Computer Science

If you have been to parts of Orchard Road or Bugis Junction, two busy shopping streets in Singapore, you might have noticed something unusual. There, familiar “traffic light men” flash red and green to help guide pedestrians safely across the road. But these are also accompanied by matching LED strips on the ground.

In recent years, Singapore and a handful of other cities, including Sydney, Tel Aviv, and Augsburg, have embedded lights into pavements at busy intersections as an additional safety measure for pedestrians. Ilsan in South Korea has gone even further, employing flickering lights and laser beams at road crossings to warn walkers of the dangers ahead.

Whatever the means, their target is the same: the heads-down tribe of smartphone zombies — people who walk around perpetually glued to their mobile device.

“Unfortunately, it’s a phenomenon we see everywhere now,” says Shengdong Zhao, an associate professor at NUS Computing and head of the NUS-Human Computer Interaction (HCI) Laboratory. Oblivious to the world around them, smartphone zombies are a danger to themselves and others. They are also plagued by neck strain and other health problems as a result of holding an unnatural, hunched-over position for prolonged periods of time.

Photograph of a mask-wearing woman in a subway station, looking down at her phone while typing.

“Yet it’s difficult to stop people texting and using their phones for information on the go,” says Zhao. “But still we want to get rid of smartphone zombies.”

“In an ideal world, technology would adapt to people’s lifestyles and not the other way round,” he says. “So I wanted to create a technology that liberates humans to do what they want to do, but still benefit from tech.”

Making smart glasses smarter
At present, when we need to text on the go — whether it’s firing off a quick message to a spouse to say we’re on our way home, or jotting down a reminder to pick up sugar at the supermarket — we have to “force ourselves into a very tiny smartphone screen,” says Zhao.

To overcome this, he reasoned, we need to switch to a heads-up technology so that “people don’t bump into others or slow down traffic” as they text. Instead of reinventing the wheel, Zhao decided to take smart glasses, the most advanced heads-up technology available today, and improve their functionality.

His creation: EYEditor, a technology that lets users enter and edit text on smart glasses using their voice and auxiliary inputs from a ring (worn on their finger).

When smart glasses were launched more than six years ago, proponents highlighted one major benefit: users could receive information while still enjoying a seamless integration with their surroundings, even as they’re tethered to a digital device.

But one of the drawbacks of using Google’s Glass, Microsoft’s HoloLens 2 and other such smart glasses is that it is difficult to edit text that appears on screen. To do so, many devices require you to raise your hand to touch the side of the smart glasses, which can be “tiresome, inconvenient, and inaccurate,” says Zhao.

Selecting the words you wish to edit can also be tricky without a touchpad, he says, referring to what researchers call a “spatial referencing problem.”

EYEditor, which Zhao and his students created last year, is the first technology to use voice inputs to edit text on a smart glass.

Photograph of a hand holding a pair of black half-rimmed glasses.

The power of voice
To create EYEditor, Zhao’s team from the NUS-HCI lab — comprising Debjyoti Ghosh, Pin Sym Foong, Can Liu, Nuwan Janaka, and Vinitha Erusu — paired a smart glass with a ring mouse and a pair of headphones that had integrated microphones. They then built an algorithm that could alter text on the smart glass screen using a technique called re-speaking. “It’s very natural and similar to what we humans normally do when we speak,” says Zhao.

To make a correction, the user simply says the new desired phrasing. The algorithm is smart enough to identify the parts of the sentence that need correcting, without the user explicitly pointing to them. For example, to alter the sentence “The doggy is a domestic person,” the user just has to say “The dog is a domestic animal,” or more simply “dog” followed by “domestic animal.”

A speech recogniser then transcribes this into text and sends it to a processing unit, which alters the text on the smart glass accordingly.
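The correction step described above can be pictured as a word-level diff between the original sentence and the respoken one: only the spans that differ get replaced on screen. The sketch below is a minimal illustration of that idea, not the team’s actual algorithm, and it handles only the full-sentence form of re-speaking; merging short fragments such as “dog” alone would need fuzzier alignment, which is not shown.

```python
# Illustrative re-speaking merge (not EYEditor's implementation):
# a word-level diff keeps unchanged spans from the original text
# and takes changed spans from the respoken sentence.
from difflib import SequenceMatcher

def respeak_correct(original: str, respoken: str) -> str:
    """Merge a respoken full sentence into the original, word by word."""
    orig_words = original.split()
    new_words = respoken.split()
    result = []
    for op, i1, i2, j1, j2 in SequenceMatcher(
            None, orig_words, new_words).get_opcodes():
        if op == "equal":
            result.extend(orig_words[i1:i2])   # span unchanged on screen
        else:
            result.extend(new_words[j1:j2])    # span replaced by respoken words
    return " ".join(result)

print(respeak_correct("The doggy is a domestic person",
                      "The dog is a domestic animal"))
# → "The dog is a domestic animal"
```

In a real system, the diff would also report *which* words changed, so the display can highlight just the corrected spans.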

Additionally, users can make use of the ring mouse to make more complicated text changes. The mouse’s trackpad and buttons allow users to insert or delete text, undo or redo changes, and alter more than a single word at a time. The ring mouse can also be combined with voice inputs, enabling commands such as “delete this.” (You may view this video, which explains how EYEditor works.)
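One way to picture how ring input and voice commands might combine is a small editing buffer where the trackpad moves a word selection and spoken commands act on it. Everything here, the class, method names, and event model, is a hypothetical sketch for illustration, not EYEditor’s actual API.

```python
# Hypothetical sketch: ring trackpad selects a word, voice commands
# ("delete this", "undo") act on the selection. Names are illustrative.

class TextBuffer:
    def __init__(self, words):
        self.words = list(words)
        self.cursor = 0          # index of the currently selected word
        self.history = []        # snapshots for undo

    def move(self, delta):
        # Ring trackpad: shift the selection left/right, clamped to bounds.
        self.cursor = max(0, min(len(self.words) - 1, self.cursor + delta))

    def _snapshot(self):
        self.history.append((list(self.words), self.cursor))

    def delete_this(self):
        # Voice command "delete this": remove the selected word.
        self._snapshot()
        del self.words[self.cursor]
        self.cursor = max(0, min(self.cursor, len(self.words) - 1))

    def undo(self):
        # Voice command "undo": restore the previous state.
        if self.history:
            self.words, self.cursor = self.history.pop()

buf = TextBuffer("pick up some sugar".split())
buf.move(+2)          # ring: select "some"
buf.delete_this()     # voice: "delete this"
print(" ".join(buf.words))   # → "pick up sugar"
```

The point of the combination is disambiguation: the ring supplies the spatial reference (“this”) that is awkward to express by voice alone.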

Type and walk on the go
Because no one had previously developed a method for text editing on a smart glass, the research team, led by Zhao and Ghosh, first had to determine the optimal way of presenting onscreen text to users. Would an audio or visual format work better, or a mix of the two? And how much text should be presented in one go? The team’s initial studies led them to conclude that users were more comfortable when they could see what they had to correct, and that they found it easier to do so when they viewed single sentences rather than a block of text. Additionally, overlapping audio and visual output proved too confusing for users.

Zhao and his team then sought to compare how EYEditor fared against a smartphone for ease of text editing. They recruited 12 volunteers for the study, and measured each participant’s natural walking speed along three 40-metre-long paths. The paths varied in difficulty: one was simply straight, another was littered with obstacles, and a third involved climbing stairs.

The team then measured the new walking speed of the participants as they used their smartphone and EYEditor to correct a series of grammatical and semantic text errors while navigating their way around the paths.

From their study, reported in this recently published paper, the team discovered that participants could correct text significantly faster, while maintaining a higher average walking speed, when using EYEditor compared with their smartphones.

“We found that EYEditor gives you significant advantages when either the user’s travel path becomes more difficult or the correction task gets harder,” says Zhao. Participants reported that alternating their attention between the onscreen text and their surroundings felt much easier and more seamless on the new device than when they used their phones.

However, the participants preferred their phones in certain instances, such as when climbing down stairs, because “they are already looking down at their feet anyway,” says Zhao.

“This is a new way of doing text editing on the go – our study has shown that it’s possible and that we’re on the right track,” he says. “But of course, EYEditor requires a lot more testing in the long run.”

His team is now working to improve the re-speaking algorithm, to build more functionality into the ring mouse, and to test the system outdoors. In the future, they hope to integrate the system with other smart glass-based apps, such as social media and messaging ones.

Zhao foresees text-editable smart glasses as having many applications, from allowing surgeons to access medical charts and vital sign readings hands-free in the operating theatre, to providing real-time instructions while cooking or assembling furniture.

“It has a lot of potential uses, which I think is a complete game changer,” he says.

  

Paper:
EYEditor: Towards On-the-Go Heads-Up Text Editing Using Voice and Manual Input