The Proxy-Free Safe Pause & Resume Layer for AI Agents.

Stop rogue agents in RAM, save their state, and resume safely. The elegant, in-process alternative to LiteLLM and Portkey. 2 lines of code. No proxies.

Open Source · Patent Pending
AeneasSoft — Safe Pause & Resume in action

From Panic to Control in 2 Lines.

Stop wiring callbacks, decorators, and custom loggers. AeneasSoft intercepts every LLM call at the HTTP level — and blocks the dangerous ones in RAM.

Traditional observability — 50+ lines
# Traditional observability setup - 50+ lines of boilerplate
from langchain.callbacks import BaseCallbackHandler
from langchain.callbacks.manager import CallbackManager
from opentelemetry import trace
from opentelemetry.sdk.trace import TracerProvider
from opentelemetry.sdk.trace.export import BatchSpanProcessor
from opentelemetry.exporter.otlp.proto.grpc.trace_exporter import OTLPSpanExporter
import logging
import json
import time

class CustomAgentTracer(BaseCallbackHandler):
    def __init__(self, service_name: str):
        self.provider = TracerProvider()
        self.exporter = OTLPSpanExporter(endpoint="...")
        self.provider.add_span_processor(
            BatchSpanProcessor(self.exporter)
        )
        trace.set_tracer_provider(self.provider)
        self.tracer = trace.get_tracer(service_name)
        self.logger = logging.getLogger(__name__)
        self._spans = {}
        self._start_times = {}

    def on_llm_start(self, serialized, prompts, **kwargs):
        span = self.tracer.start_span("llm_call")
        span.set_attribute("prompts", json.dumps(prompts))
        span.set_attribute("model", serialized.get("model"))
        self._spans[kwargs["run_id"]] = span
        self._start_times[kwargs["run_id"]] = time.time()

    def on_llm_end(self, response, **kwargs):
        span = self._spans.pop(kwargs["run_id"])
        elapsed = time.time() - self._start_times.pop(kwargs["run_id"])
        span.set_attribute("latency_ms", elapsed * 1000)
        span.set_attribute("tokens", response.usage.total)
        span.end()

    def on_chain_start(self, serialized, inputs, **kwargs):
        span = self.tracer.start_span("chain")
        self._spans[kwargs["run_id"]] = span

    def on_chain_end(self, outputs, **kwargs):
        span = self._spans.pop(kwargs["run_id"], None)
        if span:
            span.end()

    def on_tool_start(self, serialized, input_str, **kwargs):
        # ... more boilerplate
        pass

# Setup callback manager
tracer = CustomAgentTracer("my-agent-service")
callback_manager = CallbackManager([tracer])

# Pass to every single chain and agent...
chain = LLMChain(llm=llm, prompt=prompt, callbacks=callback_manager)

AeneasSoft — 2 lines
import agentwatch
agentwatch.init()
# Every LLM call monitored. Rogue agents blocked in RAM.
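
Under the hood, HTTP-level interception can be as simple as wrapping the SDK's transport. Here is a minimal sketch of the idea, assuming an httpx-based provider SDK; the AgentBlocked class and the should_block policy check are illustrative placeholders, not agentwatch's actual implementation:

import httpx

_original_send = httpx.Client.send
LLM_HOSTS = {"api.openai.com", "api.anthropic.com"}  # extend as needed

class AgentBlocked(RuntimeError):
    """Raised in-process when a call is blocked. No proxy involved."""

def should_block(request: httpx.Request) -> bool:
    # Hypothetical policy check; a real guard would evaluate budgets,
    # rate limits, and loop detection before deciding.
    return False

def _guarded_send(self, request, **kwargs):
    # Every outgoing SDK call passes through here before leaving the process.
    if request.url.host in LLM_HOSTS and should_block(request):
        raise AgentBlocked(f"Blocked LLM call to {request.url.host}")
    return _original_send(self, request, **kwargs)

httpx.Client.send = _guarded_send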

Works with OpenAI, Anthropic, Gemini, Mistral, Groq, Cohere, Together AI, Fireworks, Azure, Ollama.
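
Because interception happens below the provider SDKs, existing client code stays untouched. For example, with the official OpenAI Python SDK (the model name and prompt are placeholders):

import agentwatch
agentwatch.init()

from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment
response = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder model
    messages=[{"role": "user", "content": "Plan the next step."}],
)
# The call above was observed in-process and, if judged dangerous, blocked.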

Active Defense for Multi-Agent Systems.

Not just observability. Active protection for your AI agents in production.

In-Process Safe Pause & Resume.

Blocks runaway agents in RAM, saves their state via the on_block hook, and resumes them safely. No proxy. No network round trip. No single point of failure. Fully open source.
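
A minimal sketch of that flow. The on_block hook is named above; the init() keyword, the event fields, and the resume call are assumptions made for illustration:

import json
import agentwatch

def save_state(event):
    # Hypothetical hook payload: persist whatever agentwatch hands us
    # so the paused agent can be reviewed and resumed later.
    with open(f"paused_{event.agent_id}.json", "w") as f:
        json.dump(event.state, f)

agentwatch.init(on_block=save_state)  # assumed signature for this sketch

# Later, after human review (resume API likewise assumed):
# agentwatch.resume("paused_agent-42.json")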

Understand Agent Behavior Instantly.

Stop digging through JSON logs. See exactly why your agent made a decision with interactive causal execution graphs. No more black boxes.

Stop Agent Budget Drain.

Know precisely which agent spent $47.30 and why. Real-time per-agent, per-model cost breakdowns. Set budget limits that actually block.
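
As a sketch of what a hard cap might look like in code (both keyword arguments here are assumed for illustration, not agentwatch's documented API):

import agentwatch

agentwatch.init(
    budget_usd_per_agent=50.0,  # hypothetical hard cap; blocks in RAM once crossed
    on_block=lambda event: print(f"{event.agent_id} hit its budget"),
)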

EU AI Act Ready. (Enterprise)

Generate RSA-signed Article 12 compliance reports with one click. Be audit-ready by design, not by panic. Available as an Enterprise feature.
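
An RSA signature means a report's integrity can be verified offline against the vendor's public key. A sketch of the verification side using the standard cryptography package (the file names and the PSS padding choice are assumptions):

from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import padding
from cryptography.hazmat.primitives.serialization import load_pem_public_key

public_key = load_pem_public_key(open("aeneassoft_pub.pem", "rb").read())
report = open("article12_report.json", "rb").read()
signature = open("article12_report.sig", "rb").read()

try:
    public_key.verify(
        signature,
        report,
        padding.PSS(mgf=padding.MGF1(hashes.SHA256()),
                    salt_length=padding.PSS.MAX_LENGTH),
        hashes.SHA256(),
    )
    print("Report signature is valid.")
except InvalidSignature:
    print("Report was tampered with, or this is the wrong key.")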

Why AeneasSoft?

Honest comparison. No marketing spin.

Feature                 | AeneasSoft | LiteLLM        | Portkey
In-Process (No Proxy)   | Yes        | No (Proxy)     | No (Proxy)
Circuit Breaker         | Yes        | Yes            | Yes
Safe Pause & Resume     | Yes        | No             | No
EU AI Act Reports       | Yes        | No             | No
Setup Time              | 2 lines    | Config + proxy | Config + proxy
Open Source             | Yes (MIT)  | Yes (MIT)      | No
Patent Pending (USPTO)  | Yes        | No             | No

The Universal Interceptor for AI Agents.

One interceptor. All providers. Zero configuration.

OpenAI · Anthropic · Google Gemini · Mistral · Groq · Cohere · Together AI · Fireworks AI · Azure OpenAI · Ollama

We needed to understand our multi-agent systems, not just monitor them. AeneasSoft's HTTP-level interception gave us that control, instantly.

— Early Adopter, AI Research Lab
REGULATION DEADLINE

EU AI Act Article 12: August 2, 2026.

Is your AI system ready?

Article 12 requires automatic logging of AI system events. AeneasSoft gives you that — plus active defense. Enterprise customers get RSA-signed compliance reports.

Check Article 12 Readiness →


Stop Guessing. Start Defending.

Open source. No vendor lock-in. 2 lines of code. Full active defense in under 2 minutes.

View on GitHub →