Small Language Models as Graph Classifiers: Towards Lightweight Model Adaptation for Graph-Structured Data

Graph classification is a fundamental task in graph representation learning, traditionally addressed with graph neural networks that encode carefully designed inductive biases. This work investigates whether small language models are suited to graph classification. By representing graphs in a textual form that captures both structural relationships and node features, we enable language models to operate directly on graph data without graph-specific architectures. Across standard graph classification benchmarks, we show that small language models learn meaningful graph representations and achieve performance competitive with established graph-based methods. Our findings highlight small language models as a flexible alternative for graph classification, particularly in rapidly evolving settings where architectural simplicity and adaptability are critical.
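
The abstract does not specify the serialization template, so the following is only a minimal sketch of one plausible graph-to-text encoding (node-feature lines plus an edge list wrapped in a classification prompt); the function name, the phrasing of the template, and the example graph are all illustrative assumptions rather than the authors' actual format.

```python
# Sketch of a possible graph-to-text serialization for a small language
# model classifier. The exact prompt template used in the paper is not
# given; this layout (node lines + edge list + question) is an assumption.

def graph_to_text(node_features: dict[int, str], edges: list[tuple[int, int]]) -> str:
    """Serialize a graph into a textual prompt.

    node_features maps node id -> textual feature description;
    edges is an undirected edge list over those ids.
    """
    lines = ["Graph:"]
    for node, feats in sorted(node_features.items()):
        lines.append(f"Node {node}: {feats}")
    lines.append("Edges: " + ", ".join(f"({u}, {v})" for u, v in edges))
    lines.append("Question: which class does this graph belong to?")
    return "\n".join(lines)


if __name__ == "__main__":
    # Tiny molecule-like example: four nodes with atom-type features.
    feats = {0: "atom C", 1: "atom O", 2: "atom C", 3: "atom N"}
    edges = [(0, 1), (1, 2), (2, 3)]
    print(graph_to_text(feats, edges))
```

The resulting string could then be fed to any small language model fine-tuned for sequence classification; the serialization itself requires no graph-specific architecture, which is the point the abstract emphasizes.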
