6 Interesting Applications of Graph Neural Networks

October 5, 2021 - Emily Newton

Revolutionized is reader-supported. When you buy through links on our site, we may earn an affiliate commission. Learn more here.

Machine learning is growing at an impressive pace. One of the newer advancements in the field concerns graph neural networks (GNNs). They make inferences about information plotted on graphs. Although graph neural networks are still in the early stages, there are already some fascinating ways to apply them. How do machine learning researchers use graph neural network models? Check out these real-life examples of what’s possible.

1. Improving Travel Time Predictions

Many people use tools like Google Maps to get time estimates for upcoming journeys, whether heading to catch flights, trying to reach restaurants on time to meet friends for dinner, or calculating their morning commutes.

However, factors like traffic levels and road work can throw off those predictions. Teams at Google used graph neural networks to decrease the likelihood of those inaccurate forecasts. They started by creating “supersegments” made up of adjacent road segments that share significant traffic flow.

GNNs came into the picture by treating local road networks as graphs. Each route segment is a node, and edges connect consecutive segments or places where roads meet at intersections. The graph neural networks made it possible to predict traffic on roads ahead of or behind a driver, as well as the number of cars on adjacent and intersecting roads.
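As a rough illustration only, and assuming nothing about Google's actual architecture, the core idea of message passing over such a road graph can be sketched in a few lines of plain Python. Every name and number below is invented for the example: each segment holds a travel-time estimate, and one update step blends it with the average of its neighbors' estimates.

```python
# Toy sketch (NOT Google's model): a supersegment as a graph of road
# segments, with one message-passing step where each segment blends its
# own travel-time estimate with the mean of its neighbors' estimates.

def message_passing_step(times, edges, alpha=0.5):
    """One round of neighbor aggregation.

    times: dict mapping segment id -> travel-time estimate (seconds)
    edges: dict mapping segment id -> list of connected segment ids
    alpha: weight kept on the node's own estimate
    """
    updated = {}
    for node, own_time in times.items():
        neighbors = edges.get(node, [])
        if neighbors:
            neighbor_mean = sum(times[n] for n in neighbors) / len(neighbors)
            updated[node] = alpha * own_time + (1 - alpha) * neighbor_mean
        else:
            updated[node] = own_time  # isolated segment keeps its estimate
    return updated

# Three consecutive segments forming a tiny supersegment: A - B - C
times = {"A": 60.0, "B": 90.0, "C": 120.0}
edges = {"A": ["B"], "B": ["A", "C"], "C": ["B"]}
print(message_passing_step(times, edges))
# → {'A': 75.0, 'B': 90.0, 'C': 105.0}
```

Because the update is defined per node over whatever neighbors exist, the same function works unchanged on a two-node graph or a hundred-node one, which mirrors the flexibility described above.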

The researchers could use the same graph neural network model whether a supersegment had hundreds of nodes or only two. Their work allowed accuracy improvements of up to 50% when estimating arrival times for travel in cities ranging from New York to Berlin. 

2. Enhancing Shopper Recommendations at E-Commerce Stores

Many online platforms and stores recommend content likely to interest a person who’s browsing. The idea is to keep them engaged as long as possible, increasing the chances they’ll make purchases or take other desirable actions. 

Developers tasked with making a recommendation engine for a sporting goods store called Decathlon Canada used graph neural networks to help with the job. They decided GNNs might be the best way to represent the available information, such as the details of a user and what kinds of sports equipment they’d purchased in the past. 

The team used 50 weeks of data to train the GNN model over 14 days. The results indicated shoppers were more likely to interact with items shown to them by the recommendation engine.

The store’s previous approach was to recommend the top-selling products of the past two weeks to all customers. However, the graph neural networks showed more relevant items based on information associated with the individual user looking at a webpage and their past activities at the site. Thus, customers were more likely to click on the suggested products. 
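A hedged sketch of the general idea, making no claim about Decathlon's actual system: treat users and products as two sides of a graph, with an edge wherever a user bought a product. A two-hop walk then surfaces candidates: users who bought the same items as you are similar, and their other purchases become your recommendations. All names and data below are made up.

```python
# Hypothetical sketch (NOT Decathlon's engine): recommendations from a
# user-item graph via two-hop neighborhoods. Overlap in past purchases
# acts as the edge weight between users.

def recommend(purchases, user, top_n=2):
    """purchases: dict user -> set of item ids. Returns ranked items."""
    own = purchases.get(user, set())
    scores = {}
    for other, items in purchases.items():
        if other == user:
            continue
        overlap = len(own & items)  # shared purchases = similarity weight
        if overlap == 0:
            continue
        for item in items - own:  # only suggest items not yet bought
            scores[item] = scores.get(item, 0) + overlap
    # Rank by score (descending), breaking ties alphabetically
    return sorted(scores, key=lambda i: (-scores[i], i))[:top_n]

purchases = {
    "alice": {"bike", "helmet"},
    "bob": {"bike", "helmet", "lock"},
    "carol": {"tent", "stove"},
}
print(recommend(purchases, "alice"))  # → ['lock']
```

Unlike the old "top sellers for everyone" approach, the output here depends on the individual user's position in the graph: alice gets "lock" because bob, who shares her purchase history, bought it, while carol's camping gear is never suggested.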

The recommendations also extended to email, provided a customer was on the site’s mailing list. The group that built the GNN model planned to eventually expand the overall amount of data used. They believed increasing the overall amount of information could give the graph a richer structure, making it even more useful. 

3. Helping Autonomous Cars Make Better Decisions

Autonomous cars are not widely seen in all areas yet, but people are still excited about how they could change society for the better. For example, driverless vehicles brought medical supplies to doctors working in temporary facilities during the COVID-19 pandemic.

The companies working to develop autonomous vehicles believe they could make the roads safer for everyone. After all, driverless cars don’t get fatigued or distracted, and they eliminate the dangers of driving under the influence. Waymo relied on graph neural networks when improving its autonomous driving platform, known as Waymo Driver. 

The GNN gathered information about vector relationships, such as a car approaching an intersection or a pedestrian nearing a crosswalk. The graph neural network could use that information to predict how other cars on the road or people on foot would react. It was then easier for the vehicle to make appropriate decisions when navigating through traffic.

More specifically, the autonomous car used the data to provide valuable context clues about the nearby environment. Humans naturally do that by relying on their experience. They can guess the likelihood of something happening, even if it does not play out in real life. An autonomous car does not have that background to draw upon when operating, though. 

In this instance, a car or a person in the Waymo Driver’s environment was an “agent,” while the surroundings were the “scene.” Researchers compared the GNN results with those achieved by another type of neural network. They found that the graph neural network performed 18% better when there were 50 agents per scene. 

4. Reducing Biased Social Media Suggestions

When someone uses their social media profile, they’ll often see suggestions about other pages or people to follow. Graph neural networks operate in the background, making that happen. However, one known shortcoming is that they often use sensitive characteristics, such as a person’s gender or race, to make those determinations. 

Penn State researchers built a new graph neural network that minimizes those potential sources of bias. They created it to operate differently by working primarily on non-sensitive details about an individual. 

The researchers trained their model on two collections of real-life data: social media users in Slovakia and basketball players affiliated with the National Basketball Association (NBA). Lab experiments showed the model could still classify people accurately even without access to as much of the data that could generate bias.
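The simplest version of the underlying idea, working primarily on non-sensitive details, can be shown in a few lines. This is a hedged sketch of the general principle, not the Penn State model: sensitive attributes are masked out of each node's features before anything downstream sees them. The attribute names are invented for the example.

```python
# Hedged sketch of the general principle (NOT the Penn State model):
# strip sensitive attributes from a node's feature dict before it is
# passed to any classifier, so predictions rely on non-sensitive details.

SENSITIVE = {"gender", "race"}  # example attribute names, chosen for illustration

def mask_sensitive(features):
    """Return a copy of node features with sensitive keys removed."""
    return {k: v for k, v in features.items() if k not in SENSITIVE}

node = {"age": 29, "gender": "f", "follows_sports": True}
print(mask_sensitive(node))  # → {'age': 29, 'follows_sports': True}
```

In practice this naive masking is not enough on its own, since non-sensitive features can correlate with sensitive ones through the graph structure, which is part of what makes the research described above nontrivial.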

The Penn State team said their work could prove useful in other GNN applications where the goal is to limit bias. Those could include screening job applicants, reviewing credit applications or detecting crime. 

5. Furthering Precision Medicine

When people take prescribed medicine to treat an illness, they and their providers anticipate positive results. However, factors specific to one patient may cause severe or even life-threatening side effects, although most people who take the same drug get good outcomes. 

Such variables make people hopeful about possibilities in precision medicine. Advancements could lead to developing a treatment for one patient rather than millions. Precision medicine can also apply to lifestyle changes. One study predicted which patients with depression would see the best outcomes from exercise regimens, for example. 

Work is also underway on using GNNs to identify which molecules are strong candidates for future drugs. A group developed and tested two methods of using GNNs to pair molecules with appropriate chemical equations.

One of those methods gleaned the structural patterns of more than 7,000 molecules that scientists tapped as viral protease inhibitors. The other virtually constructed molecules while simultaneously optimizing the desirable properties for future drugs. This approach revealed some previously unknown molecules. 

The researchers said each method lends itself well to new approaches for drug discovery. These uses of GNNs could enable repurposing existing drugs to treat new ailments or aid the arduous search for new antiviral medications.

6. Increasing Robots’ Tactile and Object Manipulation Capabilities 

Robots are getting tremendously more advanced, but room for improvement remains in how the machines use tactile sensors to interact with objects. A team of MIT researchers created graph neural networks to help. 

They said this achievement could make industrial robots more precise when performing touch-based tasks. It could even lead to enjoyable uses for personal robots, such as having them entertain kids by molding clay into recognizable shapes. 

The graph neural network model learned what happens when tiny bits of materials get poked or otherwise manipulated. The robot can then use the model to predict how solids or liquids will react to touch. The GNN also facilitates learning from existing data when physics introduces unknown factors. 
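A toy sketch of that idea, assuming nothing about the MIT model itself: represent the material as particles connected to near neighbors in a graph, and when one particle is poked, pass a decayed fraction of its displacement along the edges to its neighbors. The particle layout, decay factor, and single-step propagation below are all invented simplifications.

```python
# Toy sketch (NOT the MIT model): a deformable object as particles
# (graph nodes) linked to near neighbors. Poking one particle moves it
# fully; its neighbors move a decayed fraction of the same displacement.

def propagate_poke(positions, edges, poked, displacement, decay=0.5):
    """One propagation step over the particle graph.

    positions: dict particle id -> (x, y)
    edges: dict particle id -> list of neighboring particle ids
    poked: id of the particle being pushed by `displacement` = (dx, dy)
    """
    dx, dy = displacement
    new_positions = dict(positions)
    px, py = positions[poked]
    new_positions[poked] = (px + dx, py + dy)
    for n in edges.get(poked, []):
        nx_, ny_ = positions[n]
        new_positions[n] = (nx_ + decay * dx, ny_ + decay * dy)
    return new_positions

# Three particles in a row; poke the middle one upward
positions = {0: (0.0, 0.0), 1: (1.0, 0.0), 2: (2.0, 0.0)}
edges = {0: [1], 1: [0, 2], 2: [1]}
print(propagate_poke(positions, edges, poked=1, displacement=(0.0, 1.0)))
# → {0: (0.0, 0.5), 1: (1.0, 1.0), 2: (2.0, 0.5)}
```

Running such steps repeatedly lets a model roll the deformation forward in time, which is the kind of prediction that helps a robot anticipate how a squeezed material will respond before it touches it.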

Once the researchers trained the graph neural network, it allowed a two-fingered robotic hand to form a piece of foam into a desired shape. Besides helping a robot anticipate the effects of touching an object or substance, the model dynamically improves how the machine handles it. As a result, the GNN allows a robot to form 3D objects somewhat similar to how humans do when prodding or squeezing something malleable. 

An Exciting Future for Graph Neural Networks

As researchers learn to build more types of highly functional neural networks, machine learning possibilities will grow, too. These GNN applications offer a collective glimpse of what’s on the horizon, and they seem particularly promising given how much of today’s information is naturally represented as graphs.



Emily Newton

Emily Newton is a technology and industrial journalist and the Editor in Chief of Revolutionized. She enjoys reading and writing about how technology is changing the world around us.
