[P] A minimalist implementation of Recursive Language Models

For the past few weeks, I have been working on an RLM-from-scratch tutorial. Yesterday, I open-sourced my repo.

You can just run `pip install fast-rlm` to install.

– Code generation with LLMs

– Code execution in local sandbox

– KV Cache optimized context management

– Subagent architecture

– Structured log generation: great for post-training

– TUI to look at logs interactively

– Early stopping based on budget, completion tokens, etc.
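To illustrate the early-stopping idea from the list above, here is a minimal sketch of a budget-gated loop. All names here are my own for illustration, not the fast-rlm API; the point is just that the loop halts as soon as either a completion-token budget or a dollar budget is exhausted.

```python
# Illustrative sketch of budget-based early stopping (hypothetical
# names; not the fast-rlm API). The loop halts once either the
# completion-token budget or the cost budget is used up.

def run_with_budget(steps, max_completion_tokens=1000, max_cost_usd=0.05):
    used_tokens, used_cost = 0, 0.0
    outputs = []
    for step in steps:
        # Check budgets BEFORE issuing the next call.
        if used_tokens >= max_completion_tokens or used_cost >= max_cost_usd:
            outputs.append("[stopped early: budget exhausted]")
            break
        text, tokens, cost = step()  # each step reports its own usage
        outputs.append(text)
        used_tokens += tokens
        used_cost += cost
    return outputs

# Toy stand-ins for LLM calls: each returns (text, tokens_used, cost_usd).
steps = [lambda i=i: (f"chunk-{i}", 400, 0.01) for i in range(5)]
result = run_with_budget(steps)
```

With 400 tokens per step and a 1000-token budget, the loop runs three steps and then stops, so `result` ends with the early-stop marker.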

Simple interface: pass in a string of arbitrary length, get a string out. Works with any OpenAI-compatible endpoint, including Ollama models.

RLMs can handle text inputs of up to millions of tokens – they do not load the prompt directly into the model's context. Instead, they use a Python REPL to selectively read the context and pass information around through variables.
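The paragraph above is the core trick, so here is a self-contained sketch of it (my own illustrative code, not taken from the repo): the huge input lives in a Python variable, and the model only ever sees the tiny results of REPL snippets it runs against that variable.

```python
# Minimal sketch of the RLM idea: the full input never enters the
# model's context window. It lives in a Python variable, and the model
# issues small REPL snippets that inspect or grep it, passing findings
# around via variables. Illustrative only.

import re

# A multi-million-character "prompt" that would overflow any context.
context = "\n".join(f"line {i}: filler text" for i in range(100_000))
context += "\nline 100000: the secret answer is 42"

# Step 1: the model checks the size instead of reading everything.
total_chars = len(context)

# Step 2: it greps for a pattern and keeps only the matching lines.
hits = [ln for ln in context.splitlines() if re.search(r"secret answer", ln)]

# Step 3: only this tiny variable needs to be surfaced to the model.
finding = hits[0] if hits else "no match"
```

The design point is that `total_chars` and `finding` are a few bytes each, so the model reasons over summaries and slices rather than the raw multi-megabyte input.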

For the AI regulators: this is completely free – a no-paywall share of a useful open-source GitHub repo.

Git repo: https://github.com/avbiswas/fast-rlm

Docs: https://avbiswas.github.io/fast-rlm/

Video explanation about how I implemented it:
https://youtu.be/nxaVvvrezbY

submitted by /u/AvvYaa
