Leveraging Graph Neural Networks on the CORA Dataset: Tackling Non-IID and Non-Euclidean Data with Graph Theory
Table of contents
- 1. Introduction to Non-IID and Non-Euclidean Data
- 2. Understanding the Shortcomings of Traditional Neural Networks
- 3. Introduction to Graph Neural Networks (GNNs)
- 4. Data Representation in GNNs vs Traditional Neural Networks
- 5. Why the CORA Citation Network Dataset is Non-IID and Non-Euclidean Data
- 6. Model Implementation: Building a GNN with the CORA Dataset
- 7. Comparative Study: GNN vs Traditional Neural Networks
- 8. Visualizing Graph Data with Python
- 9. Conclusion: The Future of GNNs in Machine Learning
- 10. References and Other Works Around the CORA Dataset
In traditional machine learning models, including neural networks, the data is typically assumed to be IID (independent and identically distributed): each data point is independent of the others and drawn from the same underlying probability distribution. However, many real-world datasets, such as social networks, citation networks, transportation systems, and molecular data, do not satisfy this assumption. These datasets are inherently non-IID and often structured in non-Euclidean space, where relationships between data points carry essential information and cannot be ignored.
In this blog, we explore how Graph Theory can bridge the gap between non-Euclidean data and traditional neural network approaches by introducing Graph Neural Networks (GNNs). We'll also compare GNNs to traditional neural networks, focusing on how GNNs improve the handling of complex relational data.
1. Introduction to Non-IID and Non-Euclidean Data
Many real-world datasets do not follow the IID assumption, which poses a problem for traditional neural networks. Examples of non-IID data include:
Social Networks: Relationships between users are not independent.
Citation Networks: Papers reference each other, meaning data points are linked.
Molecular Structures: Atoms and bonds are connected in non-Euclidean space.
These datasets exhibit relationships between entities (nodes) that are naturally described as a graph. Their non-Euclidean nature means the data cannot be laid out on a regular grid with meaningful Cartesian coordinates: proximity is defined by connectivity (edges) rather than by geometric distance.
Graphs allow us to capture these relationships effectively, but how do we process and learn from them using neural networks?
2. Understanding the Shortcomings of Traditional Neural Networks
Traditional neural networks, such as Feedforward Neural Networks (FNNs) or Convolutional Neural Networks (CNNs), assume that data points are independent and operate on data in Euclidean space (e.g., images, time series). They are not designed to handle:
Interdependent data points: Neural networks don’t naturally consider how one data point is related to another.
Non-Euclidean structures: Neural networks expect input features in vector format. Representing relationships and structures, such as graphs, is challenging.
Because of these limitations, using traditional neural networks for tasks involving non-IID and non-Euclidean data—such as classifying papers in a citation network—is inefficient and yields suboptimal results.
3. Introduction to Graph Neural Networks (GNNs)
Graph Neural Networks (GNNs) provide a powerful solution for handling data that do not conform to the IID assumption. GNNs can model complex relationships between data points, making them suitable for non-Euclidean data.
Why GNNs are Superior for Graph-Structured Data:
Node Representation: GNNs learn embeddings for nodes by aggregating information from neighboring nodes. This is crucial for capturing the graph structure.
Edge Information: GNNs incorporate relationships (edges) between nodes to learn better representations.
Message Passing: Nodes exchange information with their neighbors, allowing the network to learn from the structure of the graph (a minimal sketch of this idea follows below).
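To make message passing concrete, here is a minimal sketch of one round of mean-aggregation message passing on a tiny graph, written in plain PyTorch. The toy graph, features, and weight matrix are invented for illustration; production GNN layers (such as PyTorch Geometric's GCNConv used later in this post) implement refined versions of this same pattern.

```python
import torch

# Toy graph: 4 nodes, edges stored as (source, target) pairs (hypothetical)
edge_index = torch.tensor([[0, 1, 1, 2, 2, 3],
                           [1, 0, 2, 1, 3, 2]])
x = torch.randn(4, 8)   # hypothetical node features (4 nodes, 8 features each)
W = torch.randn(8, 16)  # hypothetical learnable weight matrix

# One round of message passing with mean aggregation:
# each node averages its neighbors' features, then applies a shared linear map.
src, dst = edge_index
agg = torch.zeros(4, 8).index_add_(0, dst, x[src])              # sum incoming messages
deg = torch.zeros(4).index_add_(0, dst, torch.ones(src.numel()))
agg = agg / deg.clamp(min=1).unsqueeze(-1)                      # mean over neighbors
h = torch.relu(agg @ W)                                         # updated node embeddings
print(h.shape)  # torch.Size([4, 16])
```

Stacking several such rounds lets information propagate beyond immediate neighbors, which is how a GNN learns from the graph's structure.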
4. Data Representation in GNNs vs Traditional Neural Networks
Traditional Neural Networks
Input data: A vector of features for each data point.
Independent data points: No consideration of relationships between data points.
Example: image classification, where each image is treated as an independent sample.
Graph Neural Networks
Input data: Nodes with features and edges representing relationships between nodes.
Dependent data points: Relationships between data points are taken into account.
Example: A citation network where papers are connected through references.
The key advantage of GNNs is their ability to aggregate information from neighboring nodes, enabling the model to learn from both node features and the overall graph structure.
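The difference in input representation is easy to see side by side. The sketch below builds a hypothetical three-paper citation graph in the format PyTorch Geometric expects: a traditional network would see only the feature matrix x, while a GNN also receives edge_index encoding who cites whom (all names and values here are invented for illustration).

```python
import torch
from torch_geometric.data import Data

# Hypothetical features for three papers (e.g., tiny bag-of-words vectors)
x = torch.tensor([[1., 0., 1.],
                  [0., 1., 1.],
                  [1., 1., 0.]])

# Citations: paper 0 cites paper 1, paper 1 cites paper 2
# (two rows: source indices on top, target indices below)
edge_index = torch.tensor([[0, 1],
                           [1, 2]])

graph = Data(x=x, edge_index=edge_index)
print(graph)  # Data(x=[3, 3], edge_index=[2, 2])

# A traditional NN would consume only graph.x, discarding edge_index entirely.
```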
5. Why the CORA Citation Network Dataset is Non-IID and Non-Euclidean Data
The CORA dataset, a widely used citation network for graph-based machine learning, contains papers (nodes) and their citations (edges). This makes it a graph-structured dataset in which each node carries a feature vector (representing the paper's content) and a label (indicating the paper's category). Here's why CORA is considered non-IID and non-Euclidean:
The Cora dataset consists of 2,708 scientific publications categorized into seven classes, with a citation network featuring 5,429 links. Each publication is represented by a binary word vector that indicates the presence or absence of terms from a dictionary of 1,433 unique words. This dataset is often used in research to test various machine learning and natural language processing techniques, making it a relevant benchmark in the field.
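These statistics are easy to verify once the dataset is loaded with PyTorch Geometric (the loading code appears in Section 6). One caveat worth noting: PyTorch Geometric stores each undirected citation as two directed edges, so data.num_edges reports roughly twice the 5,429 figure quoted above.

```python
# Assumes `data` and `dataset` are the Cora objects loaded via Planetoid (see Section 6)
print(data.num_nodes)       # 2708 publications
print(data.num_features)    # 1433 binary word indicators
print(dataset.num_classes)  # 7 categories
print(data.num_edges)       # ~10,556 directed edges (each citation counted twice)
```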
1. Non-IID (Independent and Identically Distributed):
In typical IID data, each data point is assumed to be independent of others, and they are drawn from the same probability distribution. However, in the CORA citation network, nodes (papers) are highly interdependent. The label or classification of one paper is influenced by the citations it receives from or gives to other papers, meaning the data points (nodes) are not independent.
For example, papers in the same field are more likely to cite one another, creating dependencies in the data that violate the IID assumption. This interconnectivity requires models that can handle dependencies between nodes, rather than treating each node as independent.
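This dependency is measurable. The sketch below computes edge homophily, the fraction of citation edges that connect papers of the same class, using the data object loaded in Section 6. Under the IID assumption with seven roughly balanced classes this fraction would sit near 1/7 ≈ 0.14, whereas on Cora it is known to be around 0.8.

```python
# Assumes `data` is the Cora graph from Section 6
src, dst = data.edge_index

# Fraction of edges whose endpoints share a label (edge homophily)
same_label = (data.y[src] == data.y[dst]).float().mean()
print(f'Edge homophily: {same_label:.2f}')  # roughly 0.8 on Cora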
2. Non-Euclidean:
In Euclidean data, such as images or time series, the data points are structured in regular grids (pixels in an image or time intervals in a sequence). However, in the CORA dataset, the data is represented as a graph, where nodes and edges don't follow a regular grid structure.
The relationships between papers (through citations) form an irregular, non-Euclidean structure. Traditional machine learning models like Convolutional Neural Networks (CNNs) are designed to work on Euclidean data and struggle with the irregularity and complexity of graphs.
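One concrete way to see this irregularity is the degree distribution: in an image grid every interior pixel has the same number of neighbors, whereas on Cora node degrees vary wildly. A quick sketch, again assuming the data object from Section 6:

```python
from torch_geometric.utils import degree

# Assumes `data` is the Cora graph from Section 6
deg = degree(data.edge_index[0], num_nodes=data.num_nodes)
print(f'Min degree: {int(deg.min())}, Max degree: {int(deg.max())}')
print(f'Mean degree: {deg.mean():.2f}')
# Unlike the fixed neighborhoods of an image grid, degrees here range
# from barely-cited papers to major hub papers.
```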
3. Why Graph Neural Networks (GNNs) are Suitable for CORA
Graph Neural Networks (GNNs) are designed to work directly on graph-structured data, making them an ideal model for CORA's non-IID and non-Euclidean nature. Here’s why:
Capturing Node Dependencies:
- GNNs operate by aggregating information from a node's neighbors, allowing them to capture the dependencies between interconnected nodes. This is crucial for CORA, where a paper’s category is often related to the categories of papers it cites or is cited by.
Handling Non-Euclidean Structures:
- GNNs use graph convolutions that can operate directly on irregular structures, like the citation network in CORA. They don't rely on data being structured in a grid (as CNNs do) but instead learn from the graph's topology, effectively capturing relationships between nodes.
Node Classification:
- In CORA, the goal is to classify nodes (papers) into predefined categories. GNNs are well-suited for this task as they aggregate information from a node’s neighbors and its own features, improving classification performance by leveraging both local and global graph structures.
By leveraging GNNs, we can model the CORA citation network more effectively than traditional neural networks, taking full advantage of its rich graph structure and node dependencies.
6. Model Implementation: Building a GNN with the CORA Dataset
The CORA dataset is a citation network where nodes represent research papers, and edges represent citations between them. The task is to classify the papers into one of several predefined categories based on both their features and their connections in the graph.
We'll use PyTorch and PyTorch Geometric to build and train a Graph Neural Network (GNN) on the CORA dataset.
Google Colab Link - Graph_Neural_Networks_and_the_CORA_Dataset.ipynb
Downloading and Loading the Data
```python
import torch
from torch_geometric.datasets import Planetoid

# Load the CORA dataset
dataset = Planetoid(root='/tmp/Cora', name='Cora')

# Access the first (and only) graph object
data = dataset[0]

print(f'Dataset: {dataset}')
print(f'Number of Classes: {dataset.num_classes}')
print(f'Number of Node Features: {dataset.num_node_features}')
```
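Planetoid also ships with the standard public split as boolean masks on the data object: a small labeled training set, a validation set, and a held-out test set. A quick check of their sizes (these are the standard Planetoid split sizes for Cora):

```python
# The standard Planetoid split for Cora
print(f'Training nodes:   {int(data.train_mask.sum())}')  # 140 (20 per class)
print(f'Validation nodes: {int(data.val_mask.sum())}')    # 500
print(f'Test nodes:       {int(data.test_mask.sum())}')   # 1000
```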
GNN Model Implementation
```python
import torch.nn.functional as F
from torch_geometric.nn import GCNConv

class GCN(torch.nn.Module):
    def __init__(self):
        super(GCN, self).__init__()
        # Two-layer GCN
        self.conv1 = GCNConv(dataset.num_node_features, 16)  # First graph convolution layer
        self.conv2 = GCNConv(16, dataset.num_classes)        # Second graph convolution layer

    def forward(self, data):
        x, edge_index = data.x, data.edge_index
        # First graph convolution + ReLU activation
        x = self.conv1(x, edge_index)
        x = F.relu(x)
        # Dropout for regularization (active only in training mode)
        x = F.dropout(x, training=self.training)
        # Second graph convolution (produces logits)
        x = self.conv2(x, edge_index)
        return F.log_softmax(x, dim=1)

# Initialize the model and optimizer, moving everything to the GPU if available
# (the NLL loss is applied functionally in the training loop)
device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
model = GCN().to(device)
data = data.to(device)
optimizer = torch.optim.Adam(model.parameters(), lr=0.01, weight_decay=5e-4)
```
Training the GNN
```python
# Training loop
def train():
    model.train()
    optimizer.zero_grad()
    out = model(data)
    # Negative log-likelihood loss, computed on the training nodes only
    loss = F.nll_loss(out[data.train_mask], data.y[data.train_mask])
    loss.backward()
    optimizer.step()
    return loss

for epoch in range(200):
    loss = train()
    print(f'Epoch {epoch+1}, Loss: {loss.item():.4f}')
```
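The loop above tracks only the training loss. In practice it is common to also monitor validation accuracy each epoch and keep the best-performing weights; here is a minimal sketch of that variant, using the val_mask provided by Planetoid:

```python
import copy

best_val_acc, best_state = 0.0, None
for epoch in range(200):
    loss = train()
    # Evaluate on the validation nodes
    model.eval()
    with torch.no_grad():
        pred = model(data).argmax(dim=1)
    val_acc = (pred[data.val_mask] == data.y[data.val_mask]).float().mean().item()
    if val_acc > best_val_acc:
        best_val_acc = val_acc
        best_state = copy.deepcopy(model.state_dict())

model.load_state_dict(best_state)  # restore the best checkpoint before testing
```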
Testing the GNN
```python
# Testing the GNN model
def test():
    model.eval()
    with torch.no_grad():  # no gradients needed during evaluation
        # Pick the class with the highest log-probability for each node
        _, pred = model(data).max(dim=1)
    correct = int(pred[data.test_mask].eq(data.y[data.test_mask]).sum().item())
    acc = correct / int(data.test_mask.sum())
    return acc

accuracy = test()
print(f'Accuracy: {accuracy:.4f}')
```
The model above learns the graph structure of the CORA citation network and classifies the nodes (papers) into categories with an accuracy of around 80%.
7. Comparative Study: GNN vs Traditional Neural Networks
To understand the improvement GNNs bring to the table, we’ll implement a traditional neural network (without graph convolutions) for the same classification task and compare its performance.
Traditional Neural Network Implementation
```python
class TraditionalNN(torch.nn.Module):
    def __init__(self):
        super(TraditionalNN, self).__init__()
        # Two fully connected layers operating on node features alone
        self.fc1 = torch.nn.Linear(dataset.num_node_features, 16)
        self.fc2 = torch.nn.Linear(16, dataset.num_classes)

    def forward(self, data):
        # Note: only the feature matrix is used; edge_index is ignored entirely
        x = data.x
        x = F.relu(self.fc1(x))
        x = F.dropout(x, training=self.training)
        x = self.fc2(x)
        return F.log_softmax(x, dim=1)

# Initialize the traditional neural network
traditional_model = TraditionalNN().to(device)
optimizer = torch.optim.Adam(traditional_model.parameters(), lr=0.01, weight_decay=5e-4)

# Training loop for the traditional model
def train_traditional():
    traditional_model.train()
    optimizer.zero_grad()
    out = traditional_model(data)
    loss = F.nll_loss(out[data.train_mask], data.y[data.train_mask])
    loss.backward()
    optimizer.step()
    return loss

for epoch in range(200):
    loss = train_traditional()
    print(f'Epoch {epoch+1}, Loss: {loss.item():.4f}')

# Test the traditional neural network
def test_traditional():
    traditional_model.eval()
    with torch.no_grad():
        _, pred = traditional_model(data).max(dim=1)
    correct = int(pred[data.test_mask].eq(data.y[data.test_mask]).sum().item())
    acc = correct / int(data.test_mask.sum())
    return acc

accuracy_traditional = test_traditional()
print(f'Traditional NN Accuracy: {accuracy_traditional:.4f}')
```
Results
GNN Accuracy: ~80%
Traditional NN Accuracy: ~45%
The traditional neural network does not leverage the graph structure, so its accuracy is significantly lower than that of the GNN. This highlights the importance of considering the relationships between data points (nodes) in graph-structured data.
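A useful sanity check (not part of the original experiment, just a sketch) is to strip the citation structure from the GCN itself: replace edge_index with self-loops only, so no information can flow between papers. Retrained this way, the GCN should lose most of its advantage, confirming that the gap comes from the graph structure rather than from model capacity.

```python
# Hypothetical ablation: keep only self-loops so the GCN cannot
# aggregate information from neighboring papers.
nodes = torch.arange(data.num_nodes, device=device)
data_no_graph = data.clone()
data_no_graph.edge_index = torch.stack([nodes, nodes], dim=0)

# Re-running the training loop from Section 6 on `data_no_graph`
# should drive the GCN's accuracy down toward the traditional NN's.
```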
8. Visualizing Graph Data with Python
Visualization is a crucial aspect of working with graph-structured data. Let's plot the CORA graph using NetworkX and Matplotlib.
```python
import networkx as nx
import matplotlib.pyplot as plt
from torch_geometric.utils import to_networkx

# Convert the PyTorch Geometric graph to a NetworkX graph
G = to_networkx(data, to_undirected=True)

# Plot the graph, coloring nodes by their class label
plt.figure(figsize=(8, 8))
nx.draw(G, node_size=20, node_color=data.y.cpu(), cmap=plt.get_cmap('Set3'))
plt.show()
```
The graph visualization from the above code shows the structure of the CORA dataset, where nodes represent papers, and edges represent citations between them.
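With 2,708 nodes, the full plot is quite dense, so it can help to visualize a small neighborhood instead. Here is a sketch using PyTorch Geometric's k_hop_subgraph utility (the choice of node 0 and 2 hops is arbitrary):

```python
from torch_geometric.utils import k_hop_subgraph
from torch_geometric.data import Data

# Extract the 2-hop neighborhood around an arbitrary paper (node 0)
subset, sub_edge_index, _, _ = k_hop_subgraph(0, 2, data.edge_index,
                                              relabel_nodes=True)
sub_data = Data(x=data.x[subset], edge_index=sub_edge_index, y=data.y[subset])

G_sub = to_networkx(sub_data, to_undirected=True)
plt.figure(figsize=(6, 6))
nx.draw(G_sub, node_size=50, node_color=sub_data.y.cpu(), cmap=plt.get_cmap('Set3'))
plt.show()
```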
2D Visualization of CORA Citation Network Dataset
3D Visualization of CORA Citation Network Dataset
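A 3D view can be produced in a similar spirit: NetworkX's spring layout accepts dim=3, and Matplotlib can scatter the resulting coordinates. A sketch using the subgraph from the previous snippet (the exact styling of the figures above may differ):

```python
import numpy as np

# 3D force-directed layout of the subgraph
pos = nx.spring_layout(G_sub, dim=3, seed=42)
xyz = np.array([pos[n] for n in G_sub.nodes()])

fig = plt.figure(figsize=(8, 8))
ax = fig.add_subplot(projection='3d')
ax.scatter(xyz[:, 0], xyz[:, 1], xyz[:, 2],
           c=sub_data.y.cpu(), cmap=plt.get_cmap('Set3'), s=30)
# Draw the edges as thin line segments
for u, v in G_sub.edges():
    ax.plot(*zip(pos[u], pos[v]), color='lightgray', linewidth=0.5)
plt.show()
```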
Google Colab Link - Graph_Neural_Networks_and_the_CORA_Dataset.ipynb
9. Conclusion: The Future of GNNs in Machine Learning
Graph Neural Networks represent a significant leap in the field of machine learning, especially for tasks involving non-IID and non-Euclidean data. By leveraging graph theory, GNNs can model relationships between data points, providing a more accurate and insightful approach than traditional neural networks.
Key Takeaways:
Traditional neural networks struggle with non-IID and non-Euclidean data.
Graph Neural Networks (GNNs) leverage relationships in graph-structured data, yielding better performance on tasks such as node classification.
GNNs are more effective in capturing complex dependencies, making them essential for real-world datasets like social networks, citation networks, and biological structures.
Graph Neural Networks are likely to continue growing in popularity as more industries realize their potential to solve complex, graph-structured problems.
10. References and Other Works Around the CORA Dataset
CORA Dataset Overview:
- McCallum, A. K., & Yang, Y. (2003). "Efficiently Inducing Features of Conditional Random Fields."
- Nigam, K., McCallum, A. K., Thrun, S., & Mitchell, T. (2000). "Text Classification from Labeled and Unlabeled Documents using EM." Machine Learning, 39, 103–134.
Foundational GNN Papers:
- Kipf, T. N., & Welling, M. (2017). "Semi-Supervised Classification with Graph Convolutional Networks." ICLR.
- Veličković, P., Cucurull, G., Casanova, A., Romero, A., Liò, P., & Bengio, Y. (2018). "Graph Attention Networks." ICLR.
- Hamilton, W. L., Ying, R., & Leskovec, J. (2017). "Inductive Representation Learning on Large Graphs." NeurIPS.
- Zhou, J., Cui, G., Hu, S., et al. (2020). "Graph Neural Networks: A Review of Methods and Applications." AI Open.
- Yang, Z., Cohen, W. W., & Salakhutdinov, R. (2016). "Revisiting Semi-Supervised Learning with Graph Embeddings." ICML.
These references provide a solid foundation for understanding the CORA dataset and its role in graph neural network research.