QNRs: Toward Language for Intelligent Machines

Abstract:

Learned, quasilinguistic neural representations (QNRs) that upgrade words to embeddings and syntax to graphs can provide a semantic medium that is both more expressive and more computationally tractable than natural language. Such a medium can support formal and informal reasoning, human and inter-agent communication, and the development of scalable quasilinguistic corpora with characteristics of both literatures and associative memory. QNR-based systems can draw on existing natural-language and multimodal corpora to support the aggregation, refinement, integration, extension, and application of knowledge at scale. The incremental development of QNR-based models can build on current capabilities and methodologies in neural machine learning, and as these systems mature, they could complement or replace today’s opaque “foundation models” with systems that are more capable, interpretable, and epistemically reliable. Potential applications and implications are broad.
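
To make the core idea concrete, the sketch below shows one plausible shape for the data structure the abstract describes: graph nodes that carry learned embedding vectors in place of words, with role-labeled edges in place of linear syntax. This is a minimal, hypothetical illustration, not the paper's implementation; the names (QNRNode, "agent", "patient") and the cosine-similarity comparison are assumptions introduced here for clarity.

```python
from __future__ import annotations

from dataclasses import dataclass, field

import numpy as np


@dataclass
class QNRNode:
    """A quasilinguistic node: an embedding stands in for a word,
    and role-labeled edges to child nodes stand in for syntax."""
    embedding: np.ndarray                                # dense semantic vector
    edges: dict[str, QNRNode] = field(default_factory=dict)  # role -> child node


def similarity(a: QNRNode, b: QNRNode) -> float:
    """Graded semantic comparison between nodes via cosine similarity,
    a stand-in for the soft matching that embeddings make possible."""
    ea, eb = a.embedding, b.embedding
    return float(ea @ eb / (np.linalg.norm(ea) * np.linalg.norm(eb)))


# Build a tiny expression: a predicate node with two role-labeled arguments.
rng = np.random.default_rng(0)
new_embedding = lambda: rng.normal(size=64)

expr = QNRNode(
    new_embedding(),
    edges={
        "agent": QNRNode(new_embedding()),
        "patient": QNRNode(new_embedding()),
    },
)
print(similarity(expr, expr.edges["agent"]))
```

One consequence worth noting: because nodes are vectors rather than discrete tokens, comparisons between expressions can be graded rather than all-or-nothing, which is one sense in which such a medium could be more computationally tractable than raw text.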

Brief motivation and introduction: