There’s power in hierarchy — but not what you expect

27 August 2019

These days, it seems that whenever you’re thirsty and in need of a quick caffeine pick-me-up, there’s always a Starbucks close by, whether you’re running errands in the Singapore heartland of Bedok or climbing the Badaling section of the Great Wall of China. Starbucks’ ubiquity isn’t just a figment of your imagination; it’s a fact backed by the firm’s latest sales figures.

At the end of July, Starbucks CEO Kevin Johnson announced that sales at its U.S. and Chinese stores had performed so well that the company would be raising its full-year earnings and revenue forecasts. The international coffee chain now expects earnings to hit 78 cents a share and revenue to exceed $6.82 billion by the year’s end, up from 72 cents and $6.67 billion respectively.

Forecasting earnings is an important part of doing business. It helps companies make decisions about where to take the firm in the future, informs creditors about the risk involved in loans, and indicates where investors should park their money.

“It’s an important question to consider and can often be a billion-dollar problem,” says Ke-Wei Huang, an associate professor at the NUS School of Computing who studies data mining and the economics of information systems. Analysts adopt various approaches to forecasting earnings, with some predicting the value directly. But with earnings comprising so many component variables, projection can be a tricky task. Instead, Huang argues for a bottom-up approach to forecasting.

“Look for hierarchical relationships,” he says. See if the variable in question, which he calls the “focal variable,” can be broken down into component variables. Company earnings, for example, can be calculated by subtracting total cost from total revenue. In turn, those two component variables can be further decomposed into their respective sub-components: gross profit and cost of goods sold for total revenue; operating and non-operating costs for total cost.

This decomposition — the process of breaking down the focal variable into its component variables — can help make earnings predictions more accurate. The challenge, however, is deciding which combination of component variables to use.
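
To make the idea concrete, the hierarchy can be written out as simple arithmetic. Here is a minimal sketch in Python; the function names and numbers mirror the income-statement relationships described above and are purely illustrative, not the researchers’ actual code:

```python
# Illustrative income-statement hierarchy; names are for exposition only.
def total_revenue(gross_profit, cost_of_goods_sold):
    # Total revenue decomposes into gross profit plus cost of goods sold.
    return gross_profit + cost_of_goods_sold

def total_cost(operating_cost, non_operating_cost):
    # Total cost decomposes into operating and non-operating costs.
    return operating_cost + non_operating_cost

def earnings(revenue, cost):
    # Top of the hierarchy: earnings = total revenue - total cost.
    return revenue - cost

# Forecast each leaf separately, then roll the forecasts up (made-up numbers).
revenue = total_revenue(gross_profit=120.0, cost_of_goods_sold=80.0)
cost = total_cost(operating_cost=60.0, non_operating_cost=10.0)
print(earnings(revenue, cost))  # 130.0
```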

“The number of potential ways to decompose is very large,” says Huang. Six component variables (a number he calls “very, very small” in the real world) will give you more than 500 possible solutions. Nearly double that figure to ten variables and you’ll have close to 120,000 solutions. Increase it further to 12 variables and you’ll have roughly 4.2 million possible combinations to consider.

“The number of possible combination schemes can be astronomical even for relatively modest data sets,” write Huang and his PhD student Mengke Qiao in a paper published last December. Because exhaustively evaluating that many combinations is beyond what computers can process efficiently, it is crucial to identify which decompositions, or combinations of component variables, will actually enhance prediction accuracy.
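
Where do such counts come from? One natural reading, an assumption here rather than a detail spelled out in the paper, is that each decomposition scheme corresponds to a way of partitioning the set of component variables into groups. Those counts are the Bell numbers, which reproduce the ten- and twelve-variable figures above almost exactly:

```python
# Bell numbers count the ways to partition a set of n items into groups.
# Mapping "decomposition schemes" to set partitions is an assumption made
# for illustration, not a detail taken from the paper.
def bell_numbers(n_max):
    """Return [B(0), B(1), ..., B(n_max)] via the Bell triangle."""
    row, bells = [1], [1]
    for _ in range(n_max):
        new_row = [row[-1]]          # each row starts with the previous row's end
        for value in row:
            new_row.append(new_row[-1] + value)
        row = new_row
        bells.append(row[0])
    return bells

bells = bell_numbers(12)
print(bells[10])  # 115975  -> "close to 120,000 solutions"
print(bells[12])  # 4213597 -> "roughly 4.2 million possible combinations"
```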

With a little help from Darwin

Huang and Qiao decided to tackle this challenge two to three years ago. To search for optimal decompositions of the focal variable, they turned to a technique called the “genetic algorithm.” Pioneered in the early 1970s by John Holland, a professor of electrical engineering and computer science at the University of Michigan, Ann Arbor, the method gained popularity nearly two decades later.

The genetic algorithm, as its name suggests, is inspired by Charles Darwin’s theory of natural selection. Borrowing the concept that drives biological evolution and adapting it to computer science, Holland proposed a method that takes a group of candidate solutions and “evolves” it towards an optimal solution through successive rounds of eliminating weaker candidates.

Applying it to their forecasting problem, Huang and Qiao began by randomly generating a handful of possible component combinations (what they term “solutions”) and scoring each one with a prediction model. “Once we create, for example, 10 solutions, we would then evaluate their performance and keep only the top two to three solutions,” explains Huang. “We then create the next generation of potential solutions based on those good ones, evaluate their performance again, and so on. So it’s evolutionary.”

New solutions are commonly generated using bio-inspired processes. For example, the original solution might be tweaked slightly, or two good solutions might be merged to create a new one — called mutation and crossover events, respectively.
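
Putting those pieces together, the loop Huang describes looks roughly like the sketch below. It borrows his example numbers, a population of 10 with the top three kept each round, but the bit-string encoding, placeholder fitness function, and mutation rate are illustrative assumptions, not the authors’ actual settings:

```python
import random

def random_solution():
    # Hypothetical encoding: one bit per candidate component variable.
    return [random.randint(0, 1) for _ in range(12)]

def evaluate(solution):
    # Placeholder fitness. In the real study, a solution's fitness would be
    # the forecast accuracy of a model trained on that decomposition.
    return sum(solution) + random.random()

def crossover(a, b):
    # Merge two good solutions: each gene comes from either parent.
    return [random.choice(pair) for pair in zip(a, b)]

def mutate(solution, rate=0.1):
    # Tweak a solution slightly: flip each bit with small probability.
    return [1 - gene if random.random() < rate else gene for gene in solution]

population = [random_solution() for _ in range(10)]
for generation in range(50):
    # Evaluate performance and keep only the top three solutions.
    population.sort(key=evaluate, reverse=True)
    elites = population[:3]
    # Create the next generation from the good ones via crossover + mutation.
    population = elites + [mutate(crossover(*random.sample(elites, 2)))
                           for _ in range(7)]

print(max(population, key=evaluate))
```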

“The genetic algorithm is very different from other methods, and very interesting,” says Huang. “It’s very good at trying to find complicated solutions.”

The team then tested their approach on real-world data. They took the top-performing decompositions and combined them using a stacking method built on Long Short-Term Memory (LSTM) networks. They then applied the result to predict the earnings of close to 600 U.S. companies whose data is publicly available in the Compustat North America database.
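
A hedged sketch of that stacking step, assuming the Keras library: the idea is to feed the forecasts produced by the top decompositions into an LSTM that outputs a single earnings prediction. The shapes, layer sizes, and random stand-in data below are illustrative assumptions, not the configuration reported in the paper:

```python
import numpy as np
from tensorflow import keras

# Toy stand-in data: for each firm, a sequence of quarterly vectors holding
# the component-based forecasts from the top-k decompositions.
n_firms, n_quarters, k_decompositions = 600, 20, 3
X = np.random.rand(n_firms, n_quarters, k_decompositions)
y = np.random.rand(n_firms)  # next-period earnings to predict

model = keras.Sequential([
    keras.Input(shape=(n_quarters, k_decompositions)),
    keras.layers.LSTM(32),   # summarises the forecast history over time
    keras.layers.Dense(1),   # final stacked earnings forecast
])
model.compile(optimizer="adam", loss="mse")
model.fit(X, y, epochs=5, batch_size=32, verbose=0)
```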

The tests showed that Huang’s technique improved prediction performance over traditional forecasting models, called autoregressive models, which predict the focal variable directly. It also outperformed two state-of-the-art data mining algorithms, XGBoost and Random Forest.
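
For contrast, that traditional baseline predicts the focal variable directly from its own past values. A minimal AR(1) sketch, illustrative rather than the exact baseline used in the study:

```python
import numpy as np

# Fit a one-lag autoregressive model to an earnings history (made-up numbers)
# by least squares, with no intercept for simplicity.
earnings_history = np.array([1.2, 1.3, 1.1, 1.4, 1.5, 1.6])
x, y = earnings_history[:-1], earnings_history[1:]
phi = (x @ y) / (x @ x)            # least-squares AR(1) coefficient
print(phi * earnings_history[-1])  # next-period earnings forecast
```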

Still, Huang strives to improve the technique. “The main weakness is that the solutions we find are good, but they may not be the best,” he says. And because the technique involves studying component variables, it takes longer than analysing the focal variable on its own.

“Because our method is much slower than predicting directly, the gain should be larger,” says Huang. “Otherwise the benefit doesn’t justify the cost.”

Moving forward, the team plans to conduct tests using a dataset with more component variables (so far, they’ve tested a maximum of six). They’re also looking at other datasets that can be decomposed, and are particularly interested in predicting Gross Domestic Product (GDP), which breaks down into numerous component variables such as consumption, investment, government spending, exports and imports.
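
The expenditure identity behind that decomposition is standard, and restating it as code makes the top level of the hierarchy explicit (the numbers below are made up):

```python
def gdp(consumption, investment, government_spending, exports, imports):
    # Standard expenditure identity: GDP = C + I + G + (X - M).
    return consumption + investment + government_spending + (exports - imports)

# Toy example (units: billions).
print(gdp(consumption=300.0, investment=100.0, government_spending=80.0,
          exports=250.0, imports=230.0))  # 500.0
```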

“Our method has improved the prediction performance, so that’s why it’s an important problem to study,” says Huang. “But we still want to improve it further.”

Paper:
Hierarchical Accounting Variables Forecasting by Deep Learning Methods
