MIT’s new ‘recursive’ framework lets LLMs process 10 million tokens without context rot

via arxiv.org

Short excerpt below. Read at the original source.

Recursive language models (RLMs) are an inference technique, developed by researchers at MIT CSAIL, that treats a long prompt as an environment external to the model. Instead of forcing the entire prompt into the model’s context window, the framework lets the LLM programmatically examine, decompose, and recursively call itself over snippets of the text. Rather […]
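The recursive decompose-and-call pattern described above can be sketched in a few lines. This is not the CSAIL implementation; `call_llm` is a hypothetical stand-in for a real model API, replaced here by a toy keyword counter so the example runs on its own, and `CONTEXT_LIMIT` is an illustrative value, not a real window size.

```python
CONTEXT_LIMIT = 100  # toy "context window" in characters, for illustration only

def call_llm(prompt: str) -> int:
    # Hypothetical model call. This stub just counts the word "token",
    # standing in for whatever sub-task the model would answer on a snippet.
    return prompt.lower().count("token")

def recursive_lm(text: str) -> int:
    """If the text fits in the context window, query the model directly;
    otherwise split it and recurse over the halves, merging the results."""
    if len(text) <= CONTEXT_LIMIT:
        return call_llm(text)
    # Split near the middle at a whitespace boundary so no word is cut.
    mid = text.rfind(" ", 0, len(text) // 2 + 1)
    if mid <= 0:
        mid = len(text) // 2  # no space found; fall back to a hard split
    return recursive_lm(text[:mid]) + recursive_lm(text[mid:])

# A prompt far larger than the toy window still gets processed in full.
long_prompt = "token " * 1000  # ~6,000 characters
print(recursive_lm(long_prompt))  # 1000
```

The key point the sketch illustrates: no single call ever sees more than `CONTEXT_LIMIT` characters, yet the merged answer covers the whole input, which is how the recursive framing sidesteps a fixed window.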