Our purpose
We're an AI research lab that builds products for real problems and publishes what we learn along the way. Not a consultancy. Not an agency. Not a pure research outfit.
We build safer systems
Every product we ship has safety as an architectural property, not an afterthought. Anti-sycophancy, hallucination detection, human escalation — baked in from day one.
Safety is empirical
We treat AI safety as an engineering discipline, not a policy exercise. We test, measure, break things, and publish what we find so others can build on it.
Research-to-product pipeline
The same team that writes the paper builds the product. Our research comes from watching real users, and our products ship with research built in.
One piece of the puzzle
We're a small lab solving specific problems. We collaborate with the broader community, publish openly, and build tools others can use.
What we do
Three things, in a tight loop
Research
Investigate what breaks
We study what goes wrong when AI meets reality — sycophancy, hallucination, governance gaps, agentic failures. We publish our findings as articles, open tools, and technical reports.
Read our research
Products
Ship what we learn
We turn research into things people use. Each product is a thesis about how AI should work, built into software you can touch. Amy, Lake, Proxy, Red — each exists because we saw a problem and nobody was shipping the fix.
See our products
Solutions
Build for your context
Some problems are specific to an organization — their data, their risk profile, their users. For those, we work directly with teams to design and operate AI systems that hold up under real-world pressure.
Explore solutions
Values
What we value and how we act
Problems first
We don't start with a technique and look for a problem. We start with a problem and use whatever works.
Structural safety
Guardrails added after launch are band-aids. Safety that's part of the architecture — that shapes what the system can and can't do at a fundamental level — is the only kind that survives contact with real users.
Publish the playbook
We open our methods, our failures, and our findings. Not because we're altruistic — because a larger commons of safe AI knowledge makes every system better, including ours.
Tight loop
Research shapes what we build. What we build generates the data that shapes the next research question. The same small team carries a problem from paper to production.
Build for people downstream
Every system we ship eventually touches a person — a teenager, a support agent, a doctor. That person is who we're accountable to. Not the benchmark. Not the investor deck.
Independence
Self-funded. Answerable to users.
Obvix Labs is a private limited company incorporated in Bangalore, India. We are self-funded, independent, and answerable to our users before anyone else.
We're small enough to move fast and honest enough to prioritize the fix over the metric. We don't have a base model to protect. We don't have an ad-supported chat product where engagement beats accuracy. We have the freedom to build what's actually needed — and we've structured the company to keep it that way.
Our name comes from the idea that the best solutions should feel obvious in hindsight — simple, clear, inevitable. The "x" is because nothing about building them is.
We're looking for people who care about getting AI right
Researchers, engineers, and collaborators who want to build AI systems the world can actually trust.