
From Topic to Transition Structure: Unsupervised Concept Discovery at Corpus Scale via Predictive Associative Memory


Jason Dury

arXiv:2603.18420v1 Abstract: Embedding models group text by semantic content, what text is about. We show that temporal co-occurrence within texts discovers a different kind of structure: recurrent transition-structure concepts or what text does. We train a 29.4M-parameter contrastive model on 373 million co-occurrence pairs from 9,766 Project Gutenberg texts (24.96 million passages), mapping pre-trained embeddings into an association space where passages with similar transition structure cluster together. Under capacity constraint (42.75% accuracy), the model must compress across recurring patterns rather than memorise individual co-occurrences. Clustering at six granularities (k=50 to k=2,000) produces a multi-resolution concept map; from broad modes like "direct confrontation" and "lyrical meditation" to precise registers and scene templates like "sailor dialect" and "courtroom cross-examination." At k=100, clusters average 4,508 books each (of 9,766), confirming corpus-wide patterns. Direct comparison with embedding-similarity clustering shows that raw embeddings group by topic while association-space clusters group by function, register, and literary tradition. Unseen novels are assigned to existing clusters without retraining; the association model concentrates each novel into a selective subset of coherent clusters, while raw embedding assignment saturates nearly all clusters. Validation controls address positional, length, and book-concentration confounds. The method extends Predictive Associative Memory (PAM, arXiv:2602.11322) from episodic recall to concept formation: where PAM recalls specific associations, multi-epoch contrastive training under compression extracts structural patterns that transfer to unseen texts, the same framework producing qualitatively different behaviour in a different regime.

Executive Summary

This article summarises a new approach to unsupervised concept discovery at corpus scale via Predictive Associative Memory. The authors train a 29.4M-parameter contrastive model that maps pre-trained embeddings into an association space in which passages with similar transition structure cluster together. Where raw embeddings group text by semantic content (what text is about), the association space surfaces recurrent transition-structure concepts (what text does). The authors support the approach with validation controls for positional, length, and book-concentration confounds, and with a direct comparison against embedding-similarity clustering. The method extends Predictive Associative Memory (PAM) from episodic recall to concept formation, with potential applications in text analysis and information retrieval.
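The mapping the summary describes can be pictured as a projection head trained with an in-batch contrastive (InfoNCE-style) objective on temporally co-occurring passage pairs. The sketch below is an illustration under that assumption, not the authors' code: the linear head, the loss, the dimensions, and the toy "co-occurring" data are all made up, and the head is only evaluated, not trained.

```python
import numpy as np

def project(W, X):
    """Map pre-trained embeddings X (n, d) into association space via a
    linear head W (d, k), then L2-normalise (a stand-in for the paper's
    learned mapping)."""
    Z = X @ W
    return Z / np.linalg.norm(Z, axis=1, keepdims=True)

def info_nce(Za, Zb, temperature=0.1):
    """In-batch InfoNCE: row i of Za should match row i of Zb (a passage
    and a passage co-occurring with it in the same text), with the other
    rows of Zb serving as negatives."""
    logits = (Za @ Zb.T) / temperature           # (n, n) similarity matrix
    logits -= logits.max(axis=1, keepdims=True)  # numerical stability
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_probs))          # cross-entropy on the diagonal

rng = np.random.default_rng(0)
d, k, n = 32, 8, 64
W = rng.normal(size=(d, k))

# Toy "co-occurrence pairs": Xb is a noisy copy of Xa, standing in for two
# passages from the same book; Xu holds unrelated passages.
Xa = rng.normal(size=(n, d))
Xb = Xa + 0.1 * rng.normal(size=(n, d))
Xu = rng.normal(size=(n, d))

paired_loss = info_nce(project(W, Xa), project(W, Xb))
random_loss = info_nce(project(W, Xa), project(W, Xu))
print(float(paired_loss), float(random_loss))
```

Training would lower `paired_loss` by gradient descent on `W`; here the point is only that the objective separates co-occurring pairs from unrelated ones.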

Key Points

  • The authors propose a novel approach to unsupervised concept discovery at corpus scale using predictive associative memory.
  • The model maps pre-trained embeddings into an association space, where passages with similar transition structure cluster together.
  • The approach discovers recurrent transition-structure concepts, distinct from traditional embedding models.
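The multi-resolution concept map mentioned above comes from clustering one association space at several granularities (k=50 to k=2,000 in the paper). A minimal sketch with a hand-rolled Lloyd's k-means over toy association-space vectors, purely illustrative (the clusterer, seeds, and data are stand-ins, not the authors' setup):

```python
import numpy as np

def kmeans(Z, k, iters=50, seed=0):
    """Minimal Lloyd's k-means over association-space vectors Z (n, d);
    returns per-point cluster labels."""
    rng = np.random.default_rng(seed)
    centers = Z[rng.choice(len(Z), size=k, replace=False)]  # init from data
    for _ in range(iters):
        dists = ((Z[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
        labels = dists.argmin(axis=1)
        for j in range(k):                       # recompute centroids
            members = Z[labels == j]
            if len(members):
                centers[j] = members.mean(axis=0)
    return labels

rng = np.random.default_rng(1)
# Toy association-space vectors: three well-separated "transition modes".
Z = np.concatenate([rng.normal(loc=c, scale=0.2, size=(40, 4))
                    for c in (-3.0, 0.0, 3.0)])

# One space, several granularities: a coarse pass and a finer pass,
# analogous to the paper's k=50..2,000 sweep.
maps = {k: kmeans(Z, k) for k in (3, 6)}
print(len(set(maps[3].tolist())), len(set(maps[6].tolist())))
```

The key design point is that all granularities cluster the same learned space, so coarse modes and fine registers are views of one structure rather than separate models.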

Merits

Strength in Concept Discovery

The approach discovers coherent concepts at multiple granularities, from broad modes such as "direct confrontation" and "lyrical meditation" to precise registers and scene templates such as "sailor dialect" and "courtroom cross-examination". These clusters are corpus-wide (at k=100 they average 4,508 of 9,766 books) and transfer to unseen texts without retraining.
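In the simplest reading, the transfer to unseen texts amounts to mapping new passages through the frozen model and assigning each to its nearest existing cluster centroid, with no retraining. A hedged sketch with hypothetical names and toy data (the actual assignment rule is not specified in the summary):

```python
import numpy as np

def assign_to_clusters(Z_new, centers):
    """Assign unseen passages (already mapped into association space) to
    the nearest existing cluster centroid. Illustrative, not the authors'
    API."""
    dists = ((Z_new[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
    return dists.argmin(axis=1)                  # (n_new,) cluster ids

rng = np.random.default_rng(2)
centers = rng.normal(size=(5, 4))                # 5 existing concept clusters
# Three unseen passages lying near clusters 0, 0, and 3.
Z_new = centers[[0, 0, 3]] + 0.05 * rng.normal(size=(3, 4))

labels = assign_to_clusters(Z_new, centers)
print(labels.tolist())
```

Under this rule, a held-out novel's passages land in a subset of existing clusters, which matches the paper's observation that the association model concentrates each novel into a selective subset rather than saturating all clusters.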

Scalability and Efficiency

The model is trained under a deliberate capacity constraint: at 42.75% accuracy on the contrastive task, it cannot memorise individual co-occurrences and must instead compress across recurring patterns. This compression is what lets a 29.4M-parameter model cover 373 million co-occurrence pairs from 24.96 million passages, making the approach practical at corpus scale.

Demerits

Dependence on Pre-trained Embeddings

The approach relies on pre-trained embeddings, which may limit its applicability to domains with limited training data or novel text formats.

High Computational Requirements

Although 29.4M parameters is modest by current standards, contrastive training over 373 million co-occurrence pairs may still demand significant compute and wall-clock time, potentially limiting deployment in resource-constrained environments.

Expert Commentary

The approach is a novel application of predictive associative memory to unsupervised concept discovery at corpus scale. While the results are promising, further work is needed on the limitations above, particularly the dependence on pre-trained embeddings and the training cost. The extension of PAM from episodic recall to concept formation is the most interesting contribution: the same framework produces qualitatively different behaviour in a different regime, which makes it relevant to cognitively inspired machine learning as well as to text analysis and information retrieval.

Recommendations

  • Future research should focus on developing more efficient and scalable methods for training the contrastive model, reducing its dependence on pre-trained embeddings, and exploring its applicability to novel text formats.
  • The approach should be further evaluated on diverse text datasets and applications to validate its robustness and generalizability.
