Semantic Reasoning Labs
We are building a new foundation model.
Dramatically more efficient. Inherently inspectable.
Who we are
- We are a frontier AI research team based in Prague.
- We do not work on larger or better language models, but on a fundamentally different AI architecture that complements LLMs.
- Our mission is to make deep reasoning scrutable and orders of magnitude cheaper.
Imagine an AI where
- Reasoning happens on an inspectable, human-readable knowledge base rather than on a set of weights.
- Each reasoning step is, mathematically, a mapping from the knowledge base back to the knowledge base, so steps can be chained indefinitely without leaving the internal representation.
- All reasoning steps are thus explainable and scrutable.
- Training is much more efficient.
- The knowledge base can be updated during inference, and (in its narrow form) the update has O(1) complexity.
- Hence, the entire knowledge base can act as the task context, so the context is limited only by physical memory.
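The actual architecture is not public, so the following is a purely hypothetical sketch of the composability claim: if a knowledge base is modeled naively as an immutable set of facts, any reasoning step is a function from knowledge base to knowledge base, steps chain indefinitely, and every intermediate state stays inspectable. All names and the fact encoding below are illustrative, not the Semantic Reasoning Labs design.

```python
# Purely illustrative sketch -- not the Semantic Reasoning Labs architecture.
# A knowledge base (KB) is an immutable set of facts; a reasoning step is
# any function KB -> KB, so steps compose and every intermediate KB is
# human-readable.

KB = frozenset  # facts are strings or ("implies", premise, conclusion) rules

def forward_step(kb: KB) -> KB:
    """One toy step: derive q for every rule ("implies", p, q) whose p holds."""
    derived = {f[2] for f in kb
               if isinstance(f, tuple) and f[0] == "implies" and f[1] in kb}
    return kb | derived

def chain(kb: KB, steps) -> KB:
    """Chain steps KB -> KB -> ... -> KB without leaving the representation."""
    for step in steps:
        kb = step(kb)  # each intermediate kb can be inspected directly
    return kb

kb = KB({"socrates_is_a_man",
         ("implies", "socrates_is_a_man", "socrates_is_mortal"),
         ("implies", "socrates_is_mortal", "socrates_will_die")})
kb = chain(kb, [forward_step, forward_step])
print("socrates_will_die" in kb)  # → True

# An inference-time update is just adding a fact; with a mutable hash set
# this is amortized O(1) (the copy here merely keeps the KB immutable).
kb = kb | {"plato_is_a_man"}
```

The point of the sketch is the type signature, not the toy logic: because each step returns the same kind of object it consumed, chaining needs no translation layer and no opaque intermediate state.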
FAQ
Why do you believe this is possible?
Consider a student learning calculus. The data she needs is modest: a textbook, lecture slides, and at most a few hundred examples (a few tens of MB in total). As for the compute required, the human brain runs on roughly 20 W of power. So even if the student spent all waking and sleeping hours of an entire semester learning just calculus (unlikely), her brain's total energy budget would power a single Nvidia B200 for approximately 2 days.
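That comparison is easy to check. Assuming a semester of roughly 120 days and a B200 board power of roughly 1,000 W (both are our assumptions, not figures stated above):

```python
# Sanity check of the energy comparison above.
# Assumptions not in the original text: a semester is ~120 days and an
# Nvidia B200 draws roughly 1000 W.
brain_power_w = 20                                  # human brain, approximate
semester_hours = 120 * 24                           # all waking and sleeping hours
brain_energy_wh = brain_power_w * semester_hours    # 57,600 Wh
b200_power_w = 1000
b200_runtime_days = brain_energy_wh / b200_power_w / 24
print(f"{b200_runtime_days:.1f} days")              # → 2.4 days
```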
It is clearly physically possible to learn from far less data, with far less compute, than current GPT-based approaches require. The method to do so simply has not been discovered yet.
How is this different from current machine learning?
Much of modern machine learning resembles dog training: trial → reward → reinforcement, repeated until the desired behavior emerges. It works, and it is an essential part of human learning too. But it is not how humans typically acquire structured, high-level knowledge.
Humans read. Humans think. Sometimes a new concept clicks immediately; sometimes only after revisiting the material, or after finding a different explanation that better matches our existing mental model.
At Semantic Reasoning Labs, this is the sort of learning we are trying to recreate computationally.
Will this replace LLMs?
No. Our technology cannot write a haiku in the style of Shakespeare. It does not supplant LLMs; it complements them brilliantly in areas where they are fundamentally weak, just as humans combine intuition with structured analytical thinking.
Why you?
Current LLMs take a conceptually different path. It is not possible to overcome their fundamental limitations by hiring thousands of engineers to tweak them incessantly. We need a fundamentally different approach. We need to start with a blank sheet of paper.
That is where we come in: a team with backgrounds in research, engineering, philosophy, and AI. A team that has spent years thinking differently about AI. A team with the conviction and depth to build something genuinely new.
Extraordinary claims require extraordinary evidence
We agree. Would a demo of deep reasoning (on a single topic) running on a laptop count? Let us know: curious (a) semanticreasoning.ai