Evolutionary Reasoning Does Not Arise in Standard Usage of Protein Language Models
Abstract
Protein language models (PLMs) are often assumed to capture evolutionary information by training on large protein sequence datasets. Yet it remains unclear whether PLMs can reason about evolution, that is, infer evolutionary relationships between sequences. We test this capability by evaluating whether standard PLM usage (frozen or fine-tuned embeddings compared via distance metrics) supports evolutionary reasoning. Existing PLMs consistently fail to recover phylogenetic structure, despite strong performance on sequence-level tasks such as masked-token and contact prediction. We present Phyla, a hybrid state-space and transformer model that jointly processes multiple sequences and is trained with a tree-based objective across 3,000 phylogenies spanning diverse protein families. Phyla outperforms the next-best PLM by 9% on tree reconstruction and 23% on taxonomic clustering while remaining alignment- and guide-tree-free. Although classical alignment pipelines achieve higher absolute accuracy, Phyla narrows the gap and achieves markedly lower end-to-end runtime. Applied to real data, Phyla reconstructs biologically accurate clades in the tree of life and resolves genome-scale relationships among Mycobacterium tuberculosis isolates. These findings suggest that, under standard usage, evolutionary reasoning does not reliably emerge from large-scale sequence modeling. Instead, Phyla shows that models trained with phylogenetic supervision can reason about evolution more effectively, offering a biologically grounded path toward evolutionary foundation models.
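To make the "standard usage" baseline concrete, the following is a minimal sketch of the protocol the abstract describes: frozen PLM embeddings compared by pairwise distance and assembled into a tree. The specific choices here (ESM-2 via the fair-esm package, mean pooling, cosine distance, and neighbor joining via scikit-bio) are illustrative assumptions, not the authors' exact evaluation pipeline.

```python
# Sketch of the "standard usage" baseline: frozen PLM embeddings,
# pairwise distances, then a distance-based tree. All model and
# library choices below are assumptions for illustration.
import numpy as np
import torch
import esm
from skbio import DistanceMatrix
from skbio.tree import nj

sequences = [
    ("seq1", "MKTAYIAKQRQISFVKSHFSRQLEERLGLIEVQ"),
    ("seq2", "MKTAYIAKQRQISFVKSHFSRQLEERLGLIEVA"),
    ("seq3", "MNTAYLAKQRQISFVKSHFSRQLEERLGLIEVQ"),
]

# Load a frozen pretrained PLM (no fine-tuning).
model, alphabet = esm.pretrained.esm2_t33_650M_UR50D()
model.eval()
batch_converter = alphabet.get_batch_converter()
_, _, tokens = batch_converter(sequences)

# Mean-pool per-residue representations into one vector per sequence.
with torch.no_grad():
    out = model(tokens, repr_layers=[33])
reps = out["representations"][33]
embeddings = torch.stack([
    reps[i, 1 : len(seq) + 1].mean(dim=0)  # skip the BOS token
    for i, (_, seq) in enumerate(sequences)
])

# Pairwise cosine distances between sequence embeddings.
sims = torch.nn.functional.cosine_similarity(
    embeddings.unsqueeze(1), embeddings.unsqueeze(0), dim=-1
)
dist = (1.0 - sims).clamp(min=0.0).numpy()
np.fill_diagonal(dist, 0.0)  # keep the matrix hollow for scikit-bio

# Distance-based tree reconstruction via neighbor joining.
dm = DistanceMatrix(dist, ids=[name for name, _ in sequences])
tree = nj(dm)
print(tree.ascii_art())
```

Under this protocol, the paper reports that existing PLMs fail to recover phylogenetic structure; Phyla replaces the frozen-embedding-plus-distance step with a model trained directly against tree topologies.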