For those who’ve taken the plunge into the world of wearable devices — 61 million of us by the year’s end, as estimates predict — the leap can be liberating.
Want to go for a run and clock your distance, time, and calories burnt without having to lug your phone with you? Just slap on a fitness tracker and head out the door. Cooking and want to control the music playing on your phone that’s atop a kitchen counter safely away from the food? Just give the Music app on your smartwatch a tap. Want to know who’s texting you while in a dark cinema without getting death stares because of the bright betraying glow of your screen? Take a subtle glance at your wrist instead.
Wearables have changed the way we go about our lives, affording us greater freedom...but it’s freedom on a long leash. “If you’ve ever owned a smartwatch, what you will realise is that you still need to carry your smartphone around because smartwatches are not really capable of doing a lot,” says Tulika Mitra, a Professor of Computer Science at the NUS School of Computing.
Compared with smartphones, wearable devices are more limited in terms of their battery size, storage capacity, and computational power. Using them effectively means leveraging the strengths of a smartphone, the Internet, or the cloud. Which explains why you have to pair your fitness tracker with a phone or laptop after a run to track your long-term progress, why you can’t play your favourite tunes directly from your smartwatch, and why you must have your phone nearby to answer texts or calls from your wrist.
Mitra and her colleagues at NUS Computing, however, believe they can change that and create what she calls next-generation wearables. “We are trying to make people free of bulky computing machinery and seamlessly embed the computation into everyday devices,” she says.
“We’re looking for solutions where the wearable device can do everything for you without you relying on anything else,” says Mitra. The technical term for this is “edge computing,” she says, which refers to devices that can function locally and independently, with only limited connectivity to the Internet or to a network of servers such as the cloud.
It’s hard to be small
What you’re looking for in a wearable device, says Mitra, is two things: high performance and low power. In other words, you want your Apple Watch to help you navigate to your destination even if you’re in a Wi-Fi dead spot. Or you want your Google Glass to carry out facial or image recognition in real time.
The challenge is “how you can actually do this with very little power on a very small device,” says Mitra. Small and unobtrusive is good from the point of view of the person wearing the device, but terrible news for a programmer working with the tiny batteries this entails.
To meet this challenge, Mitra collaborated with Professor Peh Li-Shiuan and their team (comprising PhD students Cheng Tan and Manupa Karunaratne) to develop a special processor chip called Stitch. While most processor chips found in wearable devices today rely largely on software solutions to carry out their various functions, Mitra’s chip depends on a good mix of hardware and software solutions combined.
Software is important as it gives a chip the flexibility to be used across a variety of different applications — whether it’s to play music or to get a weather update. Hardware, on the other hand, tends to be more application-specific — such as only supporting Skype calls — but is needed to really drive the performance of a wearable device.
Most devices today, however, tend to be “mostly software-based with a few hardware accelerators thrown in” because of the way the market has shaped up, says Mitra. Designing and fabricating silicon chips for use in wearables and other electronics is both costly and time-intensive. Buying ready-made chips can ease this “time to market pressure...and alleviate production costs,” she says, but the downside is that such chips are largely software-based and cannot meet either the high performance or the low power needs.
A patchwork quilt
Enter Stitch, a “hardware-software co-designed solution that can bring you performance far closer to a completely hardware-based solution than a completely software-based solution,” says Mitra. Stitch is a processor chip comprising 16 cores put together on a mesh network. Instead of having a huge, complex hardware accelerator within each core, which would bulk up the chip, each core has a tiny accelerator patch but the patches are “stitched” together virtually to create large accelerators.
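The article doesn’t describe Stitch’s internals, but the stitching idea can be sketched in a toy Python example: each core contributes a small fixed-function “patch,” and fusing the patches yields one larger virtual accelerator. The function names and operations here are illustrative assumptions, not Stitch’s actual patch set.

```python
# Toy illustration (not Stitch's actual design): each core has a small
# fixed-function "patch"; stitching chains them into one virtual pipeline
# that behaves like a single larger accelerator.

def multiply_accumulate(acc_and_x):
    acc, x = acc_and_x
    return acc + x * x          # patch 1: one multiply-accumulate step

def saturate(value, limit=100):
    return min(value, limit)    # patch 2: clamp the result to a ceiling

def stitch(*patches):
    """Virtually fuse several small patches into one larger accelerator."""
    def fused(data):
        for patch in patches:
            data = patch(data)
        return data
    return fused

accelerator = stitch(multiply_accumulate, saturate)
print(accelerator((90, 4)))  # 90 + 4*4 = 106, saturated to 100
```

The point of the sketch is the composition step: no single patch is large, but chained together they act like one big accelerator, which is why the physical chip can stay small.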
Part of the ingenuity in Stitch’s design lies in the heterogeneity of its multiple cores. The chip contains patches with different functionality, which allows each patch to function individually and follow its own set of instructions or computational patterns. Imagine the cores are like a team of worker bees or people with different skill sets, says Mitra. Someone might excel at product design, while someone else might be a whiz at marketing.
“Different applications have different needs. So if I have the information about the different patches, I decide which worker bee gets what task and what is the best match,” she says.
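The “worker bee” matching Mitra describes can be pictured as a simple lookup from task type to the best-suited patch. The patch names, skill sets, and greedy policy below are hypothetical, just to make the idea concrete.

```python
# Hypothetical sketch of the "worker bee" matching: each patch type is
# good at certain computational patterns, and the scheduler routes each
# task to the first patch whose skills cover it.

PATCH_SKILLS = {
    "mac_patch":    {"matrix_multiply", "convolution"},
    "branch_patch": {"decision_tree", "control_flow"},
    "simd_patch":   {"vector_add", "image_filter"},
}

def assign(tasks):
    """Greedy matching: pick the first patch whose skills cover the task."""
    schedule = {}
    for task in tasks:
        for patch, skills in PATCH_SKILLS.items():
            if task in skills:
                schedule[task] = patch
                break
        else:
            # No specialised patch fits, so fall back to plain software.
            schedule[task] = "software_fallback"
    return schedule

print(assign(["convolution", "decision_tree", "fft"]))
# {'convolution': 'mac_patch', 'decision_tree': 'branch_patch',
#  'fft': 'software_fallback'}
```

A real compiler or runtime would weigh power and performance rather than take the first match, but the core idea is the same: knowing what each patch is good at lets the system hand each task to the right worker.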
The benefit of such a hardware-software co-design is that you get “low-power, high-performance across multiple application domains,” says Mitra. The team also made it a priority to “stay true” to the traditional software model, Message Passing Interface or MPI, that most software developers are familiar with, so that they can easily adopt Stitch. “You can design really wonderful processor architecture but if people find it difficult to use and if the programmers don’t know how to use it, then it fails,” reasons Mitra.
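For readers unfamiliar with MPI, the model boils down to cores exchanging explicit send and receive messages. Real MPI needs a multi-process runtime (e.g. mpi4py on a cluster), so this is only a single-process Python simulation of the send/recv idiom, with queues standing in for the cores’ mailboxes.

```python
from queue import Queue

# Minimal single-process simulation of the MPI send/recv style: each
# "core" (rank) has a mailbox; send() and recv() move messages between
# them, much as MPI_Send/MPI_Recv do across real cores.

mailboxes = {0: Queue(), 1: Queue()}

def send(dest, message):
    mailboxes[dest].put(message)

def recv(rank):
    return mailboxes[rank].get()

# Rank 0 sends two partial results to rank 1, which combines them.
send(1, 10)
send(1, 32)
total = recv(1) + recv(1)
print(total)  # 42
```

Because Stitch keeps this familiar message-passing model, a programmer who already writes MPI-style code doesn’t have to learn a new programming paradigm to target the chip.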
A more dynamic chip
When her team put Stitch to the test in various applications — such as finger gesture recognition, image classification, and pinpointing the user’s location — the chip performed so well that “it blew me over,” she says.
Compared with LOCUS, a homogeneous-core chip they developed in 2014 when the project first began, Stitch’s performance was two times better despite being 7.5 times smaller. “It was a very, very pleasant surprise...We would have been happy to see exactly the same performance as LOCUS because you’re using much less area,” recalls Mitra. “I wouldn’t have expected to have the cake and eat it too.”
Apart from Stitch, the team is developing a different type of chip. The HyCUBE accelerator, as the chip is called, can be used alongside Stitch to help offload some of the latter’s computation, thereby making the whole process even quicker and more power-efficient.
Mitra is also extremely proud of the fact that women outnumbered men in the HyCUBE team (comprising research fellow Wang Bo and research assistant Aditi Kulkarni, in addition to the two PhD students) — a rarity in her years in computer science. “This is the first time in my life that we have group meetings where there are four women and two men. So that has been fun.”
In the coming months, the team want to take Stitch one step further by making the chip more dynamic. “In some sense, Stitch is still a very static architecture,” she says. If you want to run multiple applications on your smartphone — for example, telling Siri to ring your dad while swiping through your music playlist — Stitch isn’t capable of running both simultaneously. Instead, you have to tell the chip beforehand which application to run first, where to run it, and which patches to fuse together.
“Going forward, we want to make it far more dynamic — run multiple applications at the same time on the chip and let it figure out which cores and patches to use and how many without making everything predetermined,” Mitra says. “That is the dream we have.”
NUS News, 1 April 2019