Boosting creativity in the crowd with deep learning

4 June 2021

How can you get your next great idea? One way is to ask other people, and lots of them, even a crowd. Crowdsourcing — harnessing the wisdom of the crowd to attain a common goal — is used for an impressive array of tasks, from learning how to eat sustainably to redesigning cities through open government, creating apps at hackathons, and annotating data for machine learning. When you need help with such tasks, you are almost guaranteed to find a ready army of volunteers online.

“It’s about using the power of human creativity and brainstorming at scale,” says Brian Lim, an assistant professor at NUS Computing whose work partly focuses on machine learning. “You can get lots of people from diverse backgrounds, and sometimes up to a few hundred responses in a matter of hours.”

Online platforms, such as Amazon Mechanical Turk, make it even easier to gather ideas. But while crowdsourcing for ideas can yield the benefits of scale, diversity, and speed, it suffers from a serious problem: people may offer up the same ideas. “When you’re in a face-to-face meeting, you can hear what each person is suggesting,” says Lim. “But if you’re just submitting a response online and you don’t see what other people are writing, then you might write the same thing.”

“Because of that, you have this redundancy,” he says. “It’s wasteful and it’s inefficient.”

There are some ways to circumvent this, but none of them scales. For instance, an organiser can collate all submitted ideas for contributors to examine before sending in their own. “But if there are 1,000 things to check, contributors will quit and say ‘it’s not worth my time reading 1,000 things to write one message,’” says Lim.

Another option, which spares contributors that burden, is to have the organiser monitor all existing ideas, then send updated prompts to new contributors so they can write something different. “But that’s incredibly manual and will be limited by the organiser’s creativity in generating the prompts, which ironically is the very thing crowdsourcing was meant to provide,” he says.

Automatic prompting with deep learning

Keen to find a technological solution, Lim realised that deep learning models of natural language could help automate prompt selection. Lim and his team — comprising fellow NUS Computing professor Christian von der Weth, research fellow Yunlong Wang, and PhD students Samuel Cox and Ashraf Abdul — came up with an algorithm called Directed Diversity.

The aim of the algorithm is to select diverse prompts to direct contributors “to collectively generate more creative ideas”. This happens in three steps. The first involves finding relevant sources, such as online documents and discussion forums, and scraping them to extract useful phrases.

For example, if an organiser wanted to find ideas to motivate physical activity and fitness, he would select fitness blogs and online forums from which to extract phrases. Directed Diversity would extract phrases like “yoga really taking off”, “capris or sweats”, “handstand push-ups”, “on the road to diabetes”, etc.
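To make this first step concrete, here is a minimal sketch of phrase extraction. It assumes spaCy’s noun-chunk parser as the extractor; the article does not specify which extraction method Directed Diversity actually uses, so treat this as one plausible approach rather than the team’s implementation.

```python
# A minimal sketch of the phrase-extraction step, assuming spaCy's
# noun-chunk parser as the extractor (the actual extraction method
# used by Directed Diversity is not described in this article).
import spacy

nlp = spacy.load("en_core_web_sm")

def extract_phrases(documents):
    """Pull candidate phrases out of scraped text (e.g. fitness blog posts)."""
    phrases = set()
    for doc_text in documents:
        doc = nlp(doc_text)
        # Keep short noun chunks (up to 5 tokens) as candidate prompt phrases.
        for chunk in doc.noun_chunks:
            if len(chunk) <= 5:
                phrases.add(chunk.text.lower())
    return sorted(phrases)

docs = ["Yoga is really taking off, whether you wear capris or sweats."]
print(extract_phrases(docs))
```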

Once the phrases have been obtained, the second step is to embed them as numerical vectors in a vector space using the Universal Sentence Encoder, a deep learning language model. “Each phrase takes up a specific position in this vector space,” explains Lim. “If two phrases or ideas are similar, then they will be close by. If they are different, then they will be far apart.”

For instance, in the fitness example above, “yoga really taking off” and “capris or sweats” will be close together, but far from “handstand push-ups” and “on the road to diabetes”.
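In code, the embedding step might look like the sketch below, which loads the publicly released Universal Sentence Encoder from TensorFlow Hub (the specific model version used by the team is an assumption here) and measures how close the example phrases sit in the vector space:

```python
# Embedding phrases with the Universal Sentence Encoder (step two).
# The model URL points to the public USE v4 release on TensorFlow Hub;
# which version the team actually used is an assumption.
import numpy as np
import tensorflow_hub as hub

encoder = hub.load("https://tfhub.dev/google/universal-sentence-encoder/4")

phrases = [
    "yoga really taking off",
    "capris or sweats",
    "handstand push-ups",
    "on the road to diabetes",
]
embeddings = encoder(phrases).numpy()  # one 512-dimensional vector per phrase

def cosine_distance(a, b):
    """Distance in embedding space: small for similar phrases, large for different ones."""
    return 1.0 - np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))

# Related phrases sit close together...
print(cosine_distance(embeddings[0], embeddings[1]))
# ...while unrelated ones sit far apart.
print(cosine_distance(embeddings[0], embeddings[3]))
```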

In the third step, Directed Diversity calculates distances between phrases and applies a diversity maximisation algorithm to select phrases farthest apart from one another. “We thus select the most diverse phrases,” says Lim. “In turn, we hypothesise that these diverse prompts will stimulate more diverse ideas from contributors.”
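The selection step can be illustrated with a greedy farthest-point heuristic, a common way to approximate diversity maximisation. The paper’s exact optimisation criterion may differ, so the sketch below (which reuses the `embeddings` array from the previous example) shows the idea of spreading prompts apart, not the team’s precise algorithm:

```python
# A sketch of the diversity-maximisation step: greedy farthest-point
# selection over embedding distances. This is an illustrative heuristic,
# not necessarily the optimisation used in the Directed Diversity paper.
import numpy as np

def select_diverse(embeddings: np.ndarray, k: int) -> list[int]:
    """Greedily pick k phrase indices that are far apart in embedding space."""
    # Start from the phrase farthest from the centroid of all phrases.
    centroid = embeddings.mean(axis=0)
    selected = [int(np.argmax(np.linalg.norm(embeddings - centroid, axis=1)))]
    while len(selected) < k:
        # For every phrase, find the distance to its nearest already-selected phrase.
        dists = np.min(
            [np.linalg.norm(embeddings - embeddings[i], axis=1) for i in selected],
            axis=0,
        )
        # Pick the phrase that is farthest from everything chosen so far.
        selected.append(int(np.argmax(dists)))
    return selected

# e.g. prompt_indices = select_diverse(embeddings, k=2)
```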

Diverse sources and products

To test how well Directed Diversity worked, Lim’s team proposed the Diversity Prompting Evaluation Framework and conducted several simulation and user studies with 540 participants from the crowdsourcing marketplace Amazon Mechanical Turk.

From the experiments, his team showed that Directed Diversity improved prompt diversity, leading contributors to generate more diverse, less redundant ideas that third-party raters also perceived as more creative. “Plus, we found that the quality of messages wasn’t sacrificed in the process,” he says.

There was a trade-off, however: while message diversity increased, so did user effort. “There’s a bit of sacrifice involved,” admits Lim, “but it makes sense as the diverse prompts challenged people to think of less common concepts, to explore the paths less travelled.”

The team presented their findings in May at CHI 2021, the flagship conference on Human-Computer Interaction.

Moving forward, the team is working on improving the effectiveness of Directed Diversity. In particular, they are employing Explainable AI to provide more interpretable and persuasive hints.

“Ultimately, we hope to develop more useful and usable applications of AI to strengthen human-AI teams to scale human innovation and creativity,” he says.

Paper: Directed Diversity: Leveraging Language Embedding Distances for Collective Creativity in Crowd Ideation
