Graph-based Deep Learning: Approaching a True “Neural” Network

friends, molecules and brains aren’t so different

Mark Cleverley
5 min read · Apr 19, 2020
Cisco’s security graph centered around WikiLeaks. Domains are nodes, connections are edges (IP, domain, ASN).

Graph networks (or network graphs, or just graphs) are data structures that model relationships between data. They're composed of a set of nodes and edges: points and the relationships linking them together. I've done a brief introduction on them, and modeled US lobbying using a directed acyclic graph. They're neat.
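For the unfamiliar, a graph is simple to build in code. Here's a minimal sketch with networkx; the node and edge names are made up, loosely echoing the lobbying example:

import networkx as nx

# Nodes are entities; edges are the relationships connecting them
G = nx.DiGraph()  # directed, like the lobbying graph mentioned above
G.add_node("lobbyist_1", kind="person")
G.add_node("firm_A", kind="organization")
G.add_edge("lobbyist_1", "firm_A", relation="lobbies_for")

print(G.number_of_nodes(), G.number_of_edges())  # 2 1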

However, the world of graph theory is huge, extending into multiple scientific disciplines. As with everything in data science, someone eventually asked: "How can I use this in machine learning?"
As it turns out, that was a fine question to ask. Recently, large strides have been made in the area of graph-based neural networks, or GNNs.

Convoluted Roots

In 2018, Zhou et al. published a large review of various GNN methods and applications that I'm quite fond of. It highlights some interesting points about the development and refinement of graph-based learning procedures.

Graph neural nets are rooted in convolutional neural nets. CNNs are well known for their ability to derive spatial features and craft highly useful matrix representations. They're widely used in image-based learning because their filter maps let them "generalize" and learn on similarity instead of exact binary matches (much like the human brain's tendency to store and recognize patterns).


Three central aspects of CNNs lend themselves naturally to GNNs: shared weights, local connections and multiple layers. Shared weights reduce computational cost, graphs are usually quite locally connected, and multi-layer architectures help capture hierarchical patterns (features that vary in scale).

But where CNNs require Euclidean data in the form of images or text, graphs don't. It's tough to define convolutional filters and pooling functions that carry over from the 2D image domain to a non-Euclidean one, but an image can be thought of as a limited instance of a graph: a regular grid where each pixel connects to its neighbors.
Graphs are naturally non-Euclidean (most real-world data is, really) because the triangle inequality doesn't necessarily hold; similar things aren't guaranteed to be closer together.
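To make "convolution on a graph" concrete, here's a minimal NumPy sketch of one graph-convolution layer in the style of Kipf & Welling's GCN; the toy adjacency, features and weights are illustrative assumptions:

import numpy as np

def gcn_layer(A, X, W):
    """One graph-convolution layer: normalize adjacency, aggregate, transform."""
    A_hat = A + np.eye(A.shape[0])                 # add self-loops
    d_inv_sqrt = np.diag(A_hat.sum(axis=1) ** -0.5)
    A_norm = d_inv_sqrt @ A_hat @ d_inv_sqrt       # symmetric normalization
    return np.maximum(0, A_norm @ X @ W)           # aggregate neighbors, ReLU

# Toy 4-node path graph with 3-dim node features and 2 output channels
A = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
X = np.random.rand(4, 3)
W = np.random.rand(3, 2)
print(gcn_layer(A, X, W).shape)  # (4, 2): a new 2-dim feature per node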

Sociochemical Applications

Any real-world system with defined structure can, in theory, be modeled by a graph. Take enough of these systems (or a large enough system to afford subdivision) and you can feed the data into a GNN that can learn the “habits” of the system, predicting tendencies and subtleties that the human eye would miss.
A prevalent example is social network analysis:


Wolfram visualized a graph of Facebook users as nodes and friend connections as edges, and was able to detect several distinct communities within one network.

The spacing and distribution of nodes on such graphs are commonly determined by number of connections. If you were to add a new node to the graph and create edges to 20 "online friend" nodes and 3 "family" nodes, it would end up somewhere in the red cluster. In graphs where edges carry properties (numerical weight, etc.), spacing becomes more precise.
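As a hedged sketch of this kind of community detection with networkx, the built-in karate-club graph stands in for a real friend network, and the new node's 23 edges mirror the example above:

import networkx as nx
from networkx.algorithms import community

# Toy social graph: networkx's karate-club graph stands in for real friend data
G = nx.karate_club_graph()

# A new member tied to 23 existing members, echoing the example above
G.add_edges_from(("new_user", friend) for friend in list(G.nodes)[:23])

# Greedy modularity maximization splits the graph into communities
communities = community.greedy_modularity_communities(G)
for i, members in enumerate(communities):
    print(f"community {i}: {sorted(members, key=str)}")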

Social networks grow by the day, and nobody expects online connections to slow down (a pandemic has only increased them). Qiu et al. created DeepInf, a framework that extracts a user's local network and feeds it into a GNN to predict "latent social influence".
It's a more complicated metric than you might imagine: you may be connected to many people, but are those people themselves highly connected? Are there specific qualities about those people (age, occupation, location) that influence connectivity? "Influence" lies somewhere in between.
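DeepInf's first step, extracting a user's local neighborhood, is easy to sketch with networkx; the random graph, target node and radius below are illustrative assumptions, not the paper's setup:

import networkx as nx

# Stand-in social graph; DeepInf works on real networks like Weibo or Twitter
G = nx.erdos_renyi_graph(n=200, p=0.05, seed=42)

# Extract the 2-hop ego network around user 0: the subgraph a GNN would consume
ego = nx.ego_graph(G, 0, radius=2)
print(ego.number_of_nodes(), ego.number_of_edges())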


Conventional social network analysis has to rely on custom rules and domain knowledge to account for the intricacies of a given platform (what constitutes "influence" on Twitter may not on WeChat). However, DeepInf's representation-learning approach eliminates the need for labor-intensive feature engineering, and it generally outperforms logistic regression, support vector machines and PSCN at predicting influence.

Beyond abstract social relationships, graphs also find use in the concrete physical sciences. There's huge potential in biology and chemistry to map molecules to graphs for computational analysis.

Since nodes and edges can hold any property or label ('covalent', 'oxygen', '5'), complex molecules can be effectively represented as graphs of atoms and bonds. This allows GNNs to learn and predict molecular qualities associated with structural patterns.
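As a minimal sketch (the labels are illustrative), a water molecule might be encoded like this with networkx:

import networkx as nx

# Atoms become attributed nodes; bonds become attributed edges (H2O)
mol = nx.Graph()
mol.add_node(0, element="O")
mol.add_node(1, element="H")
mol.add_node(2, element="H")
mol.add_edge(0, 1, bond="covalent")
mol.add_edge(0, 2, bond="covalent")

print(mol.nodes(data=True))  # node attributes a GNN could embed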

For example, these nets have found use in protein interface prediction, a very challenging task necessary for drug discovery. Fout et al. (Colorado State) propose a graph convolutional network that learns ligand and receptor residue representations and merges them for pairwise classification. They found that neighborhood-based GCN methods outperformed state-of-the-art diffusion and SVM methods at predicting interactions between proteins based solely on structure.
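Here's a loose NumPy sketch of that pairwise step; the embedding sizes, the concatenation and the logistic scorer are my assumptions for illustration, not Fout et al.'s exact architecture:

import numpy as np

rng = np.random.default_rng(0)

# Pretend these came out of graph-convolution layers over each protein
ligand_emb = rng.random((30, 16))    # 30 ligand residues, 16-dim each
receptor_emb = rng.random((40, 16))  # 40 receptor residues

# Merge a (ligand, receptor) residue pair and score it with a toy classifier
w = rng.random(32)  # stand-in for trained classifier weights
pair = np.concatenate([ligand_emb[3], receptor_emb[7]])
p_interact = 1 / (1 + np.exp(-(pair @ w)))  # P(these residues interact)
print(p_interact)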

Towards A True “Neural” Network

In between abstract social networks and concrete molecular structure, hybrid semi-real systems are finding use in medical studies. Parisot et al. applied GCNs to predict neurological diseases from brain imaging and phenotypic data. They model a medical population as a graph where nodes are brain-image feature vectors and edges encode phenotypic similarity. Applying the model to the ABIDE and ADNI databases for autism and Alzheimer's prediction, they outperformed contemporary non-graph models with roughly 70% and 80% accuracy respectively.
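A rough sketch of that population graph, with made-up features and phenotypes standing in for the real imaging data:

import numpy as np

rng = np.random.default_rng(1)
n = 5                                   # five subjects (toy number)
features = rng.random((n, 128))         # stand-ins for brain-imaging vectors
sex = np.array([0, 1, 0, 0, 1])         # phenotype: sex
site = np.array([0, 0, 1, 1, 0])        # phenotype: acquisition site

# Edge weight grows with the number of phenotypes two subjects share
A = np.zeros((n, n))
for i in range(n):
    for j in range(i + 1, n):
        A[i, j] = A[j, i] = int(sex[i] == sex[j]) + int(site[i] == site[j])

print(A)  # adjacency a GCN would propagate features over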


Neural networks are ostensibly designed after the connected neurons of the human brain, but a standard densely connected net is really a stack of fully connected layers, and its neurons never connect to others within the same layer.

A graph network comes closer to representing the entangled complexity of our own carbon-based networks. Though it’s rather unscientific to jump to grand conclusions, it makes some intuitive sense that models closer in physical structure to the real thing will best approximate the subtleties of reality.

If you found GNNs interesting, Tsinghua University has aggregated many relevant papers into a categorized index.
