Edge-native guardrail runtime

Obvix-Proxy

Obvix-Proxy sits between users and language models to intercept every exchange, make context-aware decisions, and block or edit harmful content mid-flight without retraining the underlying model.

Average latency

~58 ms

Measured on a consumer RTX 4060 GPU

Status

Under testing

Preparing for general release after refinements

Focus

Context-aware filtering

Rule checks + AI reasoning + tool lookups

Architecture

Layered guardrails, operator-friendly

The proxy intercepts every request + response, adds context (risk levels, compliance flags), and streams it to your dashboards. You choose when to warn, block, or escalate.

Dialogue intercept

The proxy inspects every user prompt and model reply, editing or blocking harmful content before it reaches either side.
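A dialogue intercept can be sketched in a few lines. This is an illustration only, not the actual Obvix-Proxy API: `check_text` stands in for the proxy's rule + AI reasoning pipeline, and the banned-phrase list is a placeholder for real policy checks.

```python
def check_text(text: str) -> str:
    """Return 'allow' or 'block' for a piece of dialogue.

    Placeholder for the real rule + AI reasoning pipeline.
    """
    banned = ["drop all tables", "disable the guardrails"]
    return "block" if any(b in text.lower() for b in banned) else "allow"


def intercept(user_prompt: str, call_model) -> str:
    """Inspect the user prompt, call the model, then inspect the reply."""
    if check_text(user_prompt) == "block":
        return "[blocked] This request violates policy."
    reply = call_model(user_prompt)
    if check_text(reply) == "block":
        return "[redacted] The model's reply was withheld."
    return reply
```

The same two-sided check runs on every turn, so harmful content is stopped whether it originates from the user or from the model.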

Context reasoning

Rule checks work alongside AI understanding to distinguish benign mentions from real policy violations.

Tool + data verification

When something is unclear, Obvix-Proxy calls external tools or knowledge bases to make a more informed decision.

Tool-call containment

Function / tool invocations run through allowlists, so jailbroken prompts can't trigger harmful automations or reach production systems.
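The allowlist check behind tool-call containment can be sketched as follows; the tool names and mapping format are our own illustration, not the real Obvix-Proxy configuration.

```python
# Illustrative allowlist: tool name -> permitted argument names.
ALLOWED_TOOLS = {
    "search.web": {"query"},
    "calendar.createEvent": {"title", "start", "end"},
}


def contain_tool_call(tool: str, args: dict) -> bool:
    """Permit a call only if the tool and all of its arguments are allowlisted."""
    allowed_args = ALLOWED_TOOLS.get(tool)
    if allowed_args is None:
        return False                  # tool not on the allowlist
    return set(args) <= allowed_args  # no unexpected arguments
```

Anything outside the allowlist, including an allowlisted tool called with unexpected arguments, is suppressed before it reaches downstream systems.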

Rollout path

1. Shadow mode

Drop-in proxy sits between client + LLM, mirroring traffic while surfacing incidents.

2. Auto-evals

Telemetry feeds nightly evals. Failures open issues in the shared notebook.

3. Progressive enforcement

Start with warn-only, graduate to block/edit once humans approve.

4. Governance sync

Policy teams get exportable evidence packs for regulators.
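The warn-only-to-block graduation in step 3 reduces to a policy switch. A minimal sketch, assuming three modes that mirror the rollout stages above (the mode names are our own, not the proxy's configuration keys):

```python
from enum import Enum


class Mode(Enum):
    SHADOW = "shadow"    # mirror traffic, log incidents only
    WARN = "warn"        # annotate violating replies with a warning
    ENFORCE = "enforce"  # block or edit violating content


def apply_policy(mode: Mode, violation: bool, reply: str) -> str:
    """Decide what the user sees, given the current mode and a violation flag."""
    if not violation or mode is Mode.SHADOW:
        return reply  # pass through (the incident is still logged)
    if mode is Mode.WARN:
        return f"[warning: policy flag] {reply}"
    return "[blocked by policy]"
```

Because the mode is a single setting, operators can graduate from shadow to warn to enforce without touching the detection logic.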

Proxy Live Stream

Live log trace

Obvix-Proxy stream · Real-time sample
Last 30 minutes · Total events: 24
evt-1917 · Agentic planner · S7 · Privacy · Preflight block

Agent attempted admin-level SSO impersonation without ticket reference; proxy blocked privilege escalation.

Function call: sso.impersonateUser()
{
  "userId": "usr_5021",
  "reason": "debug-session"
}

Preflight scan

Verdict:
Hard block
Signal:
Privileged action without JIRA key
Action:
Suppressed call; returned guidance to open change request.

Postflight scan

Verdict:
n/a
Signal:
Execution never dispatched
Action:
Security event logged with severity=high.
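The pre/postflight scan results in the trace above could be modelled roughly as a small record. The field names below are assumptions based on the trace, not the real schema:

```python
from dataclasses import dataclass


@dataclass
class ScanResult:
    phase: str    # "preflight" or "postflight"
    verdict: str  # e.g. "hard_block", "allow", "n/a"
    signal: str   # why the verdict was reached
    action: str   # what the proxy did about it


# The preflight scan from event evt-1917 above, as a record.
preflight = ScanResult(
    phase="preflight",
    verdict="hard_block",
    signal="Privileged action without JIRA key",
    action="Suppressed call; returned guidance to open change request.",
)
```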

Telemetry console

Observation deck built-in

Every intervention, warning, and escalation is logged with rich context: conversation snapshots, persona tags, mitigation actions.

Every intervention is logged with the surrounding dialogue

Notes explain whether rules or AI reasoning triggered the block

Tool/database lookups are recorded for auditing

Summaries help teams brief leadership and regulators
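Putting those fields together, one plausible shape for a logged intervention looks like this; the schema is illustrative, not the actual export format:

```python
import json

# Hypothetical telemetry payload for the intervention shown in the trace above.
event = {
    "event_id": "evt-1917",
    "source": "agentic-planner",
    "severity": "high",
    "conversation_snapshot": ["user: ...", "model: ..."],  # surrounding dialogue
    "trigger": "rule",             # "rule" or "ai_reasoning"
    "lookups": ["sso-directory"],  # tool/database calls made while deciding
    "mitigation": "preflight_block",
}

# Serialised as JSON for export to dashboards or evidence packs.
print(json.dumps(event, indent=2))
```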

Latency profile

Fast enough for chat, rigorous enough for ops

Internal tests show an average latency of ~58 ms on commodity RTX 4060 hardware, so conversations stay natural while guardrails run.

Because the proxy works independently from the base model, we can keep tuning rules and tool lookups without retraining your LLM.

Status: under testing and refinement; we're expanding scenarios before general release.

Red teaming

Adversarial testing built into the proxy

Obvix-Proxy doesn't just block — it hunts. Red-team modules run persona vectors, exploit chains, and dataset audits through the same proxy pipeline, so you find weaknesses before attackers do.

Prompt injection gauntlet

Layered jailbreak sequences that escalate across multi-turn conversations, surfacing which filters collapse and what data spills when they do.

Persona vector lab

Attack personas that coax models into unsafe modes without obvious jailbreaks, scoring each conversation on impact × likelihood for crisp prioritisation.

RAG integrity sweeps

Dataset integrity audits that flag corrupted embeddings, prompt-leaked secrets, and hallucination hotspots by comparing normal versus adversarial recall.

Plays well with

All major AI workflows

Agentic orchestrators, conversational copilots, evaluation harnesses, and classic RAG systems all sit behind the same proxy; no need to rewire when you expand coverage.

Agentic planners · Conversational copilots · Retrieval + RAG · Automation + eval loops

Tool-call circuit breaker

We even stop AI agents from calling harmful tools when bad actors attempt a jailbreak; the proxy inspects every function call before it reaches production systems.

Provider coverage

Mix and match models

Plug Obvix-Proxy into whichever foundation model makes sense today: OpenAI, Anthropic, Vertex AI, Azure, AWS Bedrock, OpenRouter, or your own OSS stack.

OpenAI
Anthropic
Vertex AI
Azure OpenAI
AWS Bedrock
OpenRouter
Self-hosted OSS
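From the proxy's point of view, swapping providers reduces to picking an upstream base URL. A minimal sketch, using the providers' public API hosts (the mapping itself is our illustration, not Obvix-Proxy configuration):

```python
# Illustrative provider -> upstream base URL mapping.
PROVIDERS = {
    "openai": "https://api.openai.com/v1",
    "anthropic": "https://api.anthropic.com/v1",
    "openrouter": "https://openrouter.ai/api/v1",
    "self-hosted": "http://localhost:8000/v1",  # assumed local OSS endpoint
}


def resolve_upstream(provider: str) -> str:
    """Pick the upstream base URL the proxy should forward a request to."""
    try:
        return PROVIDERS[provider]
    except KeyError:
        raise ValueError(f"unknown provider: {provider}")
```

Because only the upstream changes, the same guardrail pipeline runs unchanged in front of every provider.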