A continuous-time recurrent neural network (CTRNN) is a type of neural network well developed and researched within computational biology and the brain sciences, most notably in work found within the 'International Society for Artificial Life' (ISAL). In short, this simple network is composed of biologically inspired neurons that fire in an attempt to reach a homeostasis defined by the relationships between each neuron and the next. Commonly these networks have been used within larger systems to manage cyclical operations such as walking in robotics, autonomous driving (Liquid AI) and simulated body movement (OpenWorm).
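To make the idea concrete, here is a minimal sketch of a CTRNN, assuming the standard formulation (τᵢ dyᵢ/dt = −yᵢ + Σⱼ wⱼᵢ σ(yⱼ + θⱼ) + Iᵢ) integrated with forward Euler; the class and parameter names are illustrative, not taken from any system described here.

```python
import numpy as np

def sigma(x):
    """Logistic activation, squashing neuron state into (0, 1)."""
    return 1.0 / (1.0 + np.exp(-x))

class CTRNN:
    def __init__(self, n, seed=0):
        rng = np.random.default_rng(seed)
        self.tau = np.ones(n)               # per-neuron time constants
        self.bias = rng.normal(0, 1, n)     # theta: firing thresholds
        self.w = rng.normal(0, 1, (n, n))   # w[j, i]: weight from neuron j to i
        self.y = np.zeros(n)                # internal neuron states

    def step(self, inputs, dt=0.01):
        """One forward-Euler integration step of the CTRNN dynamics."""
        firing = sigma(self.y + self.bias)
        dydt = (-self.y + firing @ self.w + inputs) / self.tau
        self.y = self.y + dt * dydt
        return sigma(self.y + self.bias)    # observable outputs in (0, 1)

net = CTRNN(3)
out = net.step(np.zeros(3))
```

With no external input the states decay toward an equilibrium set by the weights and biases, which is the homeostatic behaviour the prose describes; cyclical outputs (walking gaits, breathing lights) emerge when the weight matrix supports oscillation.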
In this experiment, I recombine this network with the autopoietic philosophy of Maturana and Varela (1995), simple Hebbian learning principles, umwelt design and modern interfaces. The results: a lamp that experiences infancy and later matures into a symbiotic partner in home-environment lighting; a parasite modelled into a visual EEG for LLM behaviour; and finally, a singular dynamic system for adaptive interfaces known as 'Seed X'.
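One way the Hebbian principle can be layered onto recurrent weights is sketched below; this is an illustrative assumption using plain Hebb's rule ("neurons that fire together wire together") with a learning rate and mild decay, and every name here is hypothetical rather than drawn from the systems described above.

```python
import numpy as np

def hebbian_update(w, pre, post, lr=0.01, decay=0.001):
    """Strengthen w[j, i] when pre-synaptic j and post-synaptic i co-fire."""
    dw = lr * np.outer(pre, post)   # correlation of pre/post activity
    return (1.0 - decay) * w + dw   # decay keeps weights from growing unboundedly

w = np.zeros((2, 2))
pre = np.array([1.0, 0.0])   # only neuron 0 is firing
post = np.array([0.0, 1.0])  # only neuron 1 responds
w = hebbian_update(w, pre, post)
# Only the active pre -> active post connection w[0, 1] is strengthened.
```

Applied between CTRNN integration steps, a rule like this lets the network's relationships drift toward whatever correlations its environment repeatedly presents, which is the mechanism behind the "infancy to maturity" behaviour described for the lamp.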
This document is a living draft; this first version aims to showcase the fruits of the aforementioned combinations and to paint an imagined future. While not a feat of scientific novelty, this piece should be understood simply as design innovation: a philosophical and strategic intervention opposing the dismal state of design today.
It is worth mentioning that while the experimentation started with autonomous computing, the resulting system intentionally avoids the belaboured involvement of LLMs, not least for their limitations when operating on the edge, and because their foundational world understanding is rooted in a limited 'human intelligence', an intelligence Yann LeCun so accurately describes as generalised to us (humans) and specialised in nature (the world).
A frustration, a premonition and a determination. In early 2025, a coworker at Google asked me what it would look like to have an 'MRI for LLMs'. Magnetic resonance imaging (MRI) produces detailed, non-invasive 3D images of soft tissues, organs and bones; an MRI for LLMs really suggests a public system of interpretability for the black-box models we trust. Logistics notwithstanding, there hardly seemed to be a single method simple and generally useful enough to express the complexities within an LLM; after all, an MRI is a high-tech tool requiring a highly trained professional. The framing was askew, but the thought was curious: does being able to see the internals of a system encourage an auditing of outputs from LLM chatbots, and thereby support the divergence of independent thought?
Combating the convergence of independent thought became an objective. I arrived at the principle that 'intelligent' systems should facilitate an individual's ambition rather than attempt to predict or direct it. To this end, a good friend of mine (Hon-Ming Gianotti) and I sat down to design the first version of 'Orbit Writer', a novel writing tool utilising LLMs only in support of adaptive UI and reactive elements, with intentionally zero generative-writing features. The idea: can we develop an environment that best expresses the notion of a single omni-tool, one that changes into a pickaxe only while you are mid-swing, right before striking the rock; one that shrinks and lightens when put in your pocket?
Rudimentary design development and a 'vibe-coded' test platform founded some core principles:
The tool should not generate for the user.
The tool should not assume the user's intention without sufficient guiding 'residuals' (behaviours).
The tool should always prioritise the singular, in this case writing.
I went to Parsons for this.
An Arduino lamp with an agentic stack of four models: issues with heat, issues with streaming information, issues with dumb models. The models' knowledge about the world actually became a hindrance; bias is everywhere.