Bringing video games to life

4 September 2020

Your heartbeat quickens as you watch your video game avatar run through the twisting corridors of the castle. There is still treasure to be found and a hostage to be rescued, and time is running out. Suddenly, a large shadow looms on the dim candlelit stone walls, followed by a low roar that sounds awfully close. You take a deep breath, clutch your mace a bit tighter, and ready yourself to attack. You swing around the corner, weapon raised, and…

…nothing happens. The game lags. The unseen monster remains a mystery. You slam back into reality and stare at the screen in frustration.

In the world of video games, real-time interactivity is key. When a game doesn’t respond quickly enough, the experience is frustrating. This is especially true for games with highly realistic visuals, where the graphics heighten the sense of immersion but place enormous demands on a device’s processing power.

One way to generate such lifelike graphics is to use a technique called ‘ray tracing’. Most commonly applied to movies (think Avatar or any Pixar film), ray tracing works by following a beam of light from a set point and examining the effect it has on the objects in its path. Special effects designers use this technique to work out, for example, how light entering through a window illuminates dust motes suspended in the air, or how a metal sword glints in the light. The premise is simple: capture the lighting correctly and you will elevate the level of realism in the scene.

“Ray or path tracing simulates the actual light propagation, shooting billions of rays into the scene, bouncing them around, physically simulating the light’s interaction with a surface,” Tony Tamasi of NVIDIA, a California-based company that designs graphics processing units, once explained. “These techniques can produce great results, but are incredibly computationally expensive.”
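
To make the idea concrete, here is a minimal ray-tracing sketch in Python. It is not the renderer described in this article: one ray is traced from a hypothetical camera, tested against a single sphere, and shaded with a simple diffuse term. The scene values and function names are illustrative assumptions.

    # A minimal, illustrative ray-tracing sketch (not the approach described in
    # the article): one ray per pixel, a single sphere, simple diffuse shading.
    # All names and scene values here are hypothetical.
    import math

    def intersect_sphere(origin, direction, center, radius):
        """Return the distance along the ray to the nearest sphere hit, or None."""
        oc = [o - c for o, c in zip(origin, center)]
        b = 2.0 * sum(d * o for d, o in zip(direction, oc))
        c = sum(o * o for o in oc) - radius * radius
        disc = b * b - 4.0 * c          # direction is assumed to be unit length, so a = 1
        if disc < 0:
            return None
        t = (-b - math.sqrt(disc)) / 2.0
        return t if t > 0 else None

    def shade(origin, direction):
        """Trace one ray and return a grey value based on a single light source."""
        center, radius, light = (0.0, 0.0, -3.0), 1.0, (5.0, 5.0, 0.0)
        t = intersect_sphere(origin, direction, center, radius)
        if t is None:
            return 0.0                                       # ray missed: background
        hit = [o + t * d for o, d in zip(origin, direction)]
        normal = [(h - c) / radius for h, c in zip(hit, center)]
        to_light = [l - h for l, h in zip(light, hit)]
        norm = math.sqrt(sum(x * x for x in to_light))
        to_light = [x / norm for x in to_light]
        return max(0.0, sum(n * l for n, l in zip(normal, to_light)))  # diffuse term

    print(shade((0.0, 0.0, 0.0), (0.0, 0.0, -1.0)))          # one ray down the -z axis

A production path tracer repeats this billions of times per frame, bouncing each ray several times, which is exactly why the technique is so expensive.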

Apart from requiring powerful processors, ray tracing comes with another drawback: it is extremely time-consuming. Rendering a single frame can take hours, a luxury video game makers do not have.

“Games have more of an interactive environment compared with movies,” says Anand Bhojan, a senior lecturer at NUS Computing. Players want a seamless experience, not games that lag. “So video games cannot fully use this ray tracing approach. Even though it is more accurate, it’s just not suitable for real-time environments.”

Instead, video game designers employ a different rendering technique called ‘rasterisation,’ which involves taking objects (represented as meshes of small shapes, usually triangles) and converting them into pixels or dots that can be represented on screen. The process is quick, making it suitable for real-time video game playing, but it cannot capture certain nuances the way ray tracing can, says Bhojan. “Things like shadow, illumination, and reflection.”
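
For comparison, a rasteriser works roughly like the hypothetical sketch below, which converts a single 2D triangle into the set of pixels it covers using an edge-function test. Real rasterisers also project 3D geometry and resolve depth, and do this in hardware for millions of triangles per frame; the names and values here are illustrative assumptions only.

    # A hypothetical rasterisation sketch: convert one 2D triangle into pixels by
    # testing every pixel centre against the triangle's three edges.
    def edge(a, b, p):
        """Signed area test: positive if point p lies to the left of edge a->b."""
        return (b[0] - a[0]) * (p[1] - a[1]) - (b[1] - a[1]) * (p[0] - a[0])

    def rasterise_triangle(v0, v1, v2, width, height):
        """Return the set of (x, y) pixels whose centres fall inside the triangle."""
        covered = set()
        for y in range(height):
            for x in range(width):
                p = (x + 0.5, y + 0.5)                       # sample at the pixel centre
                w0, w1, w2 = edge(v1, v2, p), edge(v2, v0, p), edge(v0, v1, p)
                # Accept either winding order: all edge tests agree in sign.
                if (w0 >= 0 and w1 >= 0 and w2 >= 0) or (w0 <= 0 and w1 <= 0 and w2 <= 0):
                    covered.add((x, y))
        return covered

    pixels = rasterise_triangle((1, 1), (8, 2), (4, 7), width=10, height=10)
    print(len(pixels), "pixels covered")

Because each pixel only needs a handful of cheap tests, rasterisation is fast, but nothing in this process knows where light has bounced, which is why shadows and reflections have to be approximated separately.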

The best of both worlds

What if we could combine rasterisation and ray tracing to give users the best of both worlds, wondered Bhojan roughly two years ago. The idea seemed especially feasible given recent improvements in hardware, which make it possible to carry out ray tracing on less sophisticated, and less expensive, graphics processing units (GPUs).

“I thought we could do hybrid rendering and use ray tracing wherever it is needed to improve rasterisation,” says Bhojan. When applied to a video game, the new rendering system he and his team created would analyse each scene to identify where the illumination, shadowing, reflection, and other lighting effects could be improved. Ray tracing would then be applied to “fix” these areas.
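
In structure, the approach Bhojan describes could look something like the following sketch: rasterise the frame first, then re-render only the hard-to-fake effects with ray tracing and patch them in. This is a loose illustration rather than the team’s actual pipeline; the effect list, function names, and placeholder return values are assumptions.

    # A hypothetical sketch of hybrid rendering: a fast rasterised base pass,
    # with ray tracing applied only to the effects rasterisation handles poorly.
    # The effect list and function names are illustrative assumptions.
    RAY_TRACED_EFFECTS = {"shadows", "reflections", "illumination"}

    def rasterise_scene(scene):
        return {"base_colour": f"raster({scene})"}           # stands in for a real raster pass

    def ray_trace_effect(scene, effect, ray_budget):
        return f"ray_traced_{effect}({scene}, rays={ray_budget})"  # stands in for a real ray pass

    def render_frame(scene, ray_budget):
        frame = rasterise_scene(scene)                       # cheap pass for everything
        for effect in sorted(RAY_TRACED_EFFECTS):            # "fix" only the weak spots
            frame[effect] = ray_trace_effect(scene, effect, ray_budget)
        return frame

    print(render_frame("castle_corridor", ray_budget=2))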

“Based on the user’s computer and its performance, we dynamically adjust how many rays to use and in which phases to use them, so that the user doesn’t feel any lag and can get the best possible quality for the hardware he uses,” explains Bhojan.

“Because ray tracing is expensive and requires high-quality software and GPUs, we have to be mindful of how many rays we use,” he says. “Hybrid rendering is about trying to create a balance between quality and machine capabilities.”
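
One simple way to picture that balancing act is a feedback loop like the hypothetical sketch below: if the previous frame took too long, the number of rays per pixel is reduced; if there is headroom, it is increased. The 60-frames-per-second target and the step sizes are assumptions for illustration, not the team’s actual tuning.

    # A hypothetical ray-budget controller: trade image quality for speed when
    # frames run long, and add quality back when there is headroom.
    TARGET_FRAME_MS = 16.7          # roughly 60 frames per second (assumed target)

    def adjust_ray_budget(rays_per_pixel, last_frame_ms, minimum=0, maximum=8):
        if last_frame_ms > TARGET_FRAME_MS * 1.05:           # too slow: drop a ray
            rays_per_pixel = max(minimum, rays_per_pixel - 1)
        elif last_frame_ms < TARGET_FRAME_MS * 0.80:         # headroom: add a ray
            rays_per_pixel = min(maximum, rays_per_pixel + 1)
        return rays_per_pixel

    budget = 4
    for frame_ms in [22.0, 19.0, 15.0, 12.0, 12.5]:          # simulated frame times in ms
        budget = adjust_ray_budget(budget, frame_ms)
        print(f"frame took {frame_ms} ms -> next frame uses {budget} ray(s) per pixel")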

In particular, the team realised that rasterisation was especially bad at generating realistic backgrounds when creating depth of field or when an object was in motion. But when the new hybrid rendering method was tested in both scenarios, it produced images that were better than those generated via rasterisation alone and similar in quality to what ray tracing could achieve.

“Our aim was to get better visual quality while maintaining interactive frame rates,” says Bhojan, whose students virtually presented the findings for depth of field and motion blur at the annual computer graphics conference SIGGRAPH (Special Interest Group on Computer GRAPHics and Interactive Techniques) in August.

The end goal is to commercialise their new hybrid rendering system within two years. “Essentially, we want our system to be able to run on most hardware and be capable of meeting the needs of a range of users while still being able to produce the best possible quality image,” says Bhojan. “This is so that everyone, whether they have a cheap PC or a more expensive one, can play the game.”

Leveraging machine learning

Another aspect of image creation that Bhojan and his team are exploring is how to use machine learning to enhance the content generation and rendering process. “Can we use it to automatically generate more realistic graphics and integrate it into the rendering pipeline to fix some of the artefacts of rasterisation?” asks Bhojan.

This is especially salient when creating sprawling virtual worlds complete with rivers, roads, and other features. You can begin with open-source maps that are available online and work from there to get more realistic game worlds, he says. Or use existing images to create 3D models of characters and animals for your game. Preliminary work was presented at SIGGRAPH as well.

“With machine learning, we can say: ‘If I have several thousand images as samples, can I automatically produce a 3D model of the content represented in the images to use in the game?’” says Bhojan. “If so, then we can speed up the object modelling process.”

The research is still in its preliminary stages but, if successful, will make the rendering process “much faster and more accurate,” he says.

Ultimately, Bhojan hopes his hybrid rendering system and other research in the area will let video game players enjoy hyper-realistic games at faster speeds and with less computational power. “Maybe then we can finally bring Hollywood realism to interactive games.”

Papers:

Hybrid DoF: Ray-Traced and Post-Processed Hybrid Depth of Field Effect for Real-Time Rendering

Hybrid MBlur: Using Ray Tracing to Solve the Partial Occlusion Artifacts in Real-Time Rendering of Motion Blur Effect
