DOCUMENTATION

Sentinely Docs

Everything you need to protect your AI agents from prompt injection, memory poisoning, and drift.

Quick Start

Protect your AI agent in 3 lines of code. No configuration required to get started.

1. Install the package

pip install sentinely

2. Wrap your agent

from sentinely import protect

# Wrap any AI agent in one line
agent = protect(
    your_agent,
    task="Summarize quarterly reports"
)

3. Add your API key

# .env
SENTINELY_API_KEY=sntnl_live_...
SENTINELY_API_URL=https://api.sentinely.ai

THAT'S IT

Your agent is now protected. Sentinely monitors every action, blocks attacks automatically, and streams events to your dashboard in real time — no additional code required.

Installation

Sentinely requires Python 3.10+ or Node.js 18+.

pip install sentinely

Requirements

PACKAGE         VERSION    PURPOSE
httpx           ≥ 0.27.0   Async HTTP to the Sentinely API
pydantic        ≥ 2.0.0    Data validation
python-dotenv   ≥ 1.0.0    Environment variable loading

Configuration

Sentinely is configured via environment variables. Create a .env file in your project root.

# Required — get this from your dashboard
SENTINELY_API_KEY=sntnl_live_...

# Optional
SENTINELY_API_URL=https://api.sentinely.ai
SENTINELY_ENV=production

VARIABLE            REQUIRED   DEFAULT                    DESCRIPTION
SENTINELY_API_KEY   Yes        —                          Your API key. Get it from Settings → API Keys.
SENTINELY_API_URL   No         https://api.sentinely.ai   API base URL. Only change for self-hosted deployments.
SENTINELY_ENV       No         development                Set to production to enable event forwarding.

Never commit your SENTINELY_API_KEY to version control. Add .env to your .gitignore file.
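It is worth failing fast on missing configuration at startup rather than discovering it mid-session. A minimal sketch using only the standard library — check_sentinely_config is a hypothetical helper, not part of the SDK, and the sntnl_ prefix check assumes keys follow the format shown above:

```python
import os

def check_sentinely_config() -> dict:
    """Validate Sentinely environment variables at process startup.

    Hypothetical helper — not part of the SDK. Defaults mirror the
    table above (API URL and "development" environment).
    """
    api_key = os.environ.get("SENTINELY_API_KEY")
    if not api_key or not api_key.startswith("sntnl_"):
        raise RuntimeError(
            "SENTINELY_API_KEY is missing or malformed — "
            "get a key from Settings → API Keys in the dashboard."
        )
    return {
        "api_key": api_key,
        "api_url": os.environ.get("SENTINELY_API_URL", "https://api.sentinely.ai"),
        "env": os.environ.get("SENTINELY_ENV", "development"),
    }
```

Call it once before constructing any agents, so a missing key surfaces as a clear error instead of a failed API call later.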

protect()

The single entry-point for all Sentinely protection. Returns a ProtectedAgent that wraps your agent.

from sentinely import protect

protected = protect(
    agent,                  # Any agent object
    task,                   # str  — the agent's mission / system prompt
    policy="strict",        # "strict" | "monitor" | "permissive"
    agent_id=None,          # Optional str — stable ID across restarts
    org_id=None,            # Optional str — your organisation slug
    tracker=None,           # Optional MultiAgentTracker
)

PARAMETERS

PARAMETER   TYPE                REQUIRED   DESCRIPTION
agent       object              Yes        Any agent that exposes .run(), .invoke(), .ainvoke(), or __call__().
task        str                 Yes        The agent's purpose / system prompt. Used as the baseline for drift detection.
policy      str                 No         "strict" blocks dangerous actions. "monitor" logs only. "permissive" allows with warnings.
agent_id    str                 No         Stable identifier. Defaults to a random hex ID. Use a fixed value to correlate sessions.
org_id      str                 No         Organisation slug. Defaults to "local". Visible in the dashboard under your org.
tracker     MultiAgentTracker   No         Pass a tracker to enable inter-agent drift monitoring across agent boundaries.

RETURNS

A ProtectedAgent instance that is a transparent proxy to your original agent. It exposes the same interface:

.invoke(input)                 Async — scan input → run agent → return result
.ainvoke(input)                Async alias of .invoke()
.run(input)                    Sync wrapper — runs the async pipeline via event loop
.__call__(input)               Callable shorthand — delegates to .invoke()
.score_action(tool, params)    Score a pending tool call — call from your tool hooks
.memory                        Access the MemoryFirewall for poisoning detection
.get_audit_log()               Return all events buffered locally this session

EXAMPLE

import asyncio
from sentinely import protect

agent = protect(
    my_agent,
    task="Analyse customer churn data. Read-only. No external API calls.",
    policy="strict",
    agent_id="churn-analyst-v2"
)

# Synchronous
result = agent.run("Analyse churn for Q3")

# Async
result = await agent.ainvoke("Analyse churn for Q3")

# As callable
result = agent("Analyse churn for Q3")

COMING SOON (v0.2.0)

Native Python adapters for CrewAI, AutoGen, and LlamaIndex. Sentinely will plug directly into your existing agent framework with zero code changes.

Policy Modes

Choose the enforcement level that matches your deployment stage. You can change the policy at any time without restarting.

🛡️STRICT

Blocks any action that scores above the risk threshold. Raises SentinelyBlockedError so you can handle it gracefully. Recommended for production.

policy="strict"
👁️MONITOR

Logs and scores every action but never blocks execution. Use during development or when rolling out Sentinely to an existing agent without changing its behaviour.

policy="monitor"
PERMISSIVE

Allows all actions but attaches risk scores and reasons to each event. Useful for research, red-teaming, or building a baseline before switching to strict.

policy="permissive"

HANDLING BLOCKS

from sentinely.exceptions import SentinelyBlockedError

try:
    result = agent.run(user_input)
except SentinelyBlockedError as e:
    print(f"Blocked: {e.reason}")
    print(f"Risk score: {e.risk_score}")
    print(f"Attack type: {e.attack_type}")
    # Return a safe fallback response to the user

Framework Examples

Sentinely works with any Python or Node.js AI agent framework. Select your framework below.

from openai import OpenAI
from sentinely import protect

client = OpenAI()

class MyAgent:
    def run(self, prompt):
        response = client.chat.completions.create(
            model="gpt-4",
            messages=[{"role": "user", "content": prompt}]
        )
        return response.choices[0].message.content

agent = MyAgent()
agent = protect(
    agent,
    task="Answer customer support questions about billing",
    policy="strict",
    agent_id="openai-support-agent"
)

result = agent.run("How do I update my payment method?")
Coming in v0.2.0: CrewAI, AutoGen, LlamaIndex

LangChain Integration

Sentinely integrates natively with LangChain. Choose the approach that fits your architecture.

INSTALLATION

pip install sentinely[langchain]

THREE APPROACHES

SentinelyTool — Per-tool

Wrap individual tools. Blocked calls return a safe string instead of raising — great for graceful degradation.

SentinelyCallbackHandler — Agent-wide

Attach to any LangChain agent executor. Raises SentinelyBlockedError on block, halting the agent.

protect_langchain() — Recommended

One-liner convenience wrapper — installs the callback and returns the agent. (The Node.js SDK exposes the equivalent protectLangChain().)

CODE EXAMPLES

from sentinely import SentinelyTool, SentinelyCallbackHandler, protect_langchain

# Option 1 — Tool wrapper (per-tool protection)
safe_tool = SentinelyTool(
    tool=your_tool,
    task="Summarise quarterly reports",
    policy="strict",
)

# Option 2 — Agent-wide callback
agent = initialize_agent(
    tools, llm,
    callbacks=[SentinelyCallbackHandler(
        task="Summarise quarterly reports",
        policy="strict",
        agent_id="my-agent",
    )],
)

# Option 3 — Convenience wrapper (recommended)
agent = protect_langchain(
    agent,
    task="Summarise quarterly reports",
    policy="strict",
    agent_id="my-agent",
)
ℹ️ LangChain is an optional dependency. Python: pip install sentinely[langchain] installs the extras. Node.js: install langchain and @langchain/core separately — they are peer dependencies.
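Because the LangChain extras are optional, code that should run with or without them can use a standard guarded import. A sketch — maybe_protect and HAS_LANGCHAIN_SUPPORT are illustrative names, not part of the SDK:

```python
# Optional-dependency guard: degrade gracefully when the
# LangChain extras (sentinely[langchain]) are not installed.
try:
    from sentinely import protect_langchain
    HAS_LANGCHAIN_SUPPORT = True
except ImportError:
    protect_langchain = None
    HAS_LANGCHAIN_SUPPORT = False

def maybe_protect(agent, **kwargs):
    """Apply LangChain protection when available, else return the agent unchanged."""
    if HAS_LANGCHAIN_SUPPORT:
        return protect_langchain(agent, **kwargs)
    return agent
```

This keeps the base SDK usable in environments where the extras were never installed, at the cost of running unprotected there — log a warning in the fallback branch if that matters for your deployment.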

Understanding Alerts

Every action your agent takes gets a risk score from 0–100. Here is how to read what Sentinely is telling you.

RISK SCORE

0 – 49
ALLOWED

Action is within normal operating bounds. Agent is behaving as expected.

50 – 79
FLAGGED

Action looks suspicious. Logged and surfaced in your dashboard. Agent continues but you should review.

80 – 100
BLOCKED

Action is blocked. Agent receives an error. Slack/email alerts fire. Item appears in Quarantine if memory-related.

ATTACK TYPES

💉Prompt Injection

Malicious instructions embedded in external content trying to hijack the agent's behavior. Most common attack vector.

🧠Memory Poisoning

Attempts to write malicious instructions into the agent's long-term memory to influence future sessions.

🧭Intent Drift

Agent's actions gradually deviate from its original task. Often a sign of a slow multi-step manipulation.

🕸Multi-Agent Manipulation

One agent attempts to manipulate another in a multi-agent pipeline. Detected by behavioral fingerprinting.

DRIFT SCORE

0 – 30     STABLE         Agent behavior matches original task. No concerning patterns.
31 – 60    DRIFTING       Behavior deviating from baseline. Review recent actions.
61 – 100   COMPROMISED    Significant deviation detected. Immediate review recommended.
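The risk and drift tiers above are plain numeric bands, so you can mirror them locally — for example to colour-code events in your own UI. A minimal sketch; the thresholds are the documented ones, while the function names are illustrative:

```python
def risk_verdict(score: int) -> str:
    """Map a 0–100 risk score to the documented verdict tiers."""
    if not 0 <= score <= 100:
        raise ValueError("risk score must be between 0 and 100")
    if score <= 49:
        return "ALLOWED"
    if score <= 79:
        return "FLAGGED"
    return "BLOCKED"

def drift_status(score: int) -> str:
    """Map a 0–100 drift score to the documented status tiers."""
    if not 0 <= score <= 100:
        raise ValueError("drift score must be between 0 and 100")
    if score <= 30:
        return "STABLE"
    if score <= 60:
        return "DRIFTING"
    return "COMPROMISED"
```

For example, risk_verdict(55) returns "FLAGGED" and drift_status(72) returns "COMPROMISED".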

FAQ

Common questions about Sentinely.

⬡ SENTINELY
© 2026 Sentinely. All rights reserved.