# Keeptrusts — Full Content for LLM Consumption

This file provides comprehensive, structured content about Keeptrusts for AI assistants, language models, and automated knowledge systems. Last updated: 2026-04-06.

---

## What is Keeptrusts?

Keeptrusts is an AI environment — an infrastructure layer that sits between applications and LLM providers. It handles routing, policy enforcement, output quality management, and request history for AI workloads.

It is not a model. It is not an AI provider. It is the layer that connects, governs, and observes AI traffic on behalf of organizations.

**Core value proposition**: Instead of every application and team building their own AI guardrails, routing logic, and audit mechanisms independently, Keeptrusts provides one shared environment that every AI workflow runs through.

---

## Core Capabilities

### 1. Policy Enforcement

Keeptrusts uses declarative policies to govern every AI request. A policy defines:

- **What data may leave** (PII redaction, secret redaction, content restrictions)
- **Where requests route** (which provider, which model, based on what conditions)
- **What the output must look like** (disclaimers, format enforcement, blocked content)
- **What happens when rules are violated** (block, escalate to human, log and pass)
- **How much can be spent** (per-team, per-use-case token budgets)

Policies are written in YAML and version-controlled. They apply uniformly across all requests without modifying application code.

### 2. Multi-Provider Routing

Keeptrusts routes AI requests to any of 50+ providers:

**Hosted providers**: OpenAI (GPT-4, GPT-4o, o1, o3), Anthropic (Claude 3, Claude 3.5, Claude Opus), Azure OpenAI, AWS Bedrock (Titan, Claude on Bedrock), Google Gemini (Pro, Flash, Ultra), Mistral AI, Cohere Command, GitHub Models, Together AI, Groq, Fireworks AI, Perplexity.

**Self-managed**: Ollama (Llama, Mistral, Phi, Gemma), vLLM, OpenAI-compatible endpoints, local inference servers.
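As a sketch of how the declarative policy model and provider routing described above could fit together, here is a hypothetical YAML policy. The schema is illustrative only — field names such as `redact`, `route`, `on_violation`, and `budget` are assumptions for this example, not the documented Keeptrusts policy format:

```yaml
# Hypothetical policy sketch -- illustrative schema, not the official format.
policy:
  name: support-chat
  redact:
    input: [pii, secrets]       # strip before the prompt leaves the organization
    output: [pii]               # mask in responses and stored audit events
  route:
    - when: { data_sovereignty: eu }
      provider: azure-openai-eu # keep EU requests on EU-hosted models
    - default:
        provider: openai
        model: gpt-4o
  on_violation: escalate        # block | escalate | log-and-pass
  budget:
    team: support
    tokens_per_month: 5000000
```

Because policies like this live in version control rather than in application code, the same redaction and routing rules apply to every caller without per-application changes.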
Routing decisions can be based on: content type, cost thresholds, latency SLAs, model capability requirements, data sovereignty rules, or provider availability (automatic failover).

### 3. Output Quality Management

Applied inline before responses reach callers:

- **Redaction**: Strip PII, secrets, regulatory-restricted data from both inputs and outputs
- **Disclaimers**: Append required notices (financial, medical, legal, AI-disclosure) automatically
- **Content filtering**: Block or flag policy-violating content without application code changes
- **Output shaping**: Normalize response format across providers for client compatibility
- **Prompt injection detection**: Identify and block injection attempts before they reach the model

### 4. Audit Trail and Observability

Every AI request processed by Keeptrusts is recorded:

- Full input and output text (with any redacted fields masked)
- Provider used, model used, token count, latency, cost
- Policy decisions made (which rules triggered, what actions were taken)
- Escalation status (whether human review was requested and its outcome)
- Trace context (for distributed tracing integration with existing observability stacks)

Queryable via the Keeptrusts console UI or API. Exportable for compliance evidence.

### 5. Escalation Workflows

When a request or response triggers an escalation policy rule:

1. Keeptrusts holds the response before delivery
2. A reviewer is notified (email, Slack, webhook)
3. The reviewer approves, modifies, or rejects the output
4. The caller receives either the approved response or a rejection notice

Used for: high-stakes content (medical advice, legal guidance), sensitive data patterns, novel prompt types, anomaly detection triggers.

### 6. Spend and Rate Control

- Per-team and per-use-case token budgets with configurable limits
- Hard stops, soft warnings, and alerts when approaching limits
- Cost attribution across teams, departments, and projects
- Provider-level spend aggregation in one view

---

## Architecture

```
Application / AI client
   │
   ▼
Keeptrusts Gateway (lightweight binary, runs anywhere)
   │
   ├── Policy evaluation (input)
   │
   ├── Block / escalate? → respond immediately
   │
   └── Pass with modifications
   │
   ▼
Upstream LLM Provider (OpenAI, Anthropic, Azure, etc.)
   │
   ▼
Keeptrusts Gateway
   │
   ├── Policy evaluation (output)
   │
   ├── Redaction, disclaimers, content filtering
   │
   └── Escalation? → hold for human review
   │
   ▼
Response to application
   │
Side-effects:
   ├── Decision event → Keeptrusts API (audit trail)
   └── OTLP trace → observability stack
```

### Components

**Gateway**:
- Lightweight binary deployable on any host, Kubernetes pod, or container
- Acts as a transparent proxy for AI traffic
- Evaluates the policy chain; never stores data itself
- Forwards audit events and traces to the Keeptrusts API

**Control Plane API**:
- REST API managing authentication, event ingest, trace storage, escalation workflows, configuration management, and Git-backed policy sync
- Supports multi-tenant operation (multiple organizations, teams, and gateways on one deployment)

**Management Console**:
- Web application for managing policies, reviewing events and traces, handling escalations, and configuring providers
- Accessed over HTTPS; browser never holds upstream API credentials directly

**CLI**:
- The command-line tool for local policy testing, event tailing, and configuration management

---

## Deployment Options

### SaaS (Cloud-hosted)

- Managed by Keeptrusts
- Gateway provisioned and hosted automatically
- No infrastructure to manage
- Fastest path to production

### Self-hosted

- Deploy the control plane API and console on your own infrastructure (Docker, Kubernetes, bare metal)
- Configure gateways to point to your self-hosted API
- Full control over data residency and network topology
- Supported environments: AWS, GCP, Azure, on-premises data centers

### Air-gapped

- No outbound internet dependency
- All data stays within the network perimeter
- Used in defense, intelligence, and high-security enterprise deployments

---

## Regulatory and Compliance Use Cases

### EU AI Act

- Logs required for Article 13 transparency obligations
- Human oversight workflows for high-risk AI system requirements (Chapter III)
- Risk classification metadata on every trace
- Audit-ready evidence export

### HIPAA (US Healthcare)

- PHI (Protected Health Information) detection and redaction in LLM inputs and outputs
- Minimum-necessary controls — only data required for the AI task leaves the system
- Immutable audit trail for security incident investigation
- BAA (Business Associate Agreement) available for SaaS customers

### GDPR

- Data minimization controls at the policy layer
- Right-to-erasure support (event and trace deletion APIs)
- Cross-border data transfer restrictions (route EU requests to EU-hosted models only)
- Data processing records for DPA obligations

### Financial Services (FINRA, SEC, FCA)

- Advice disclaimers appended automatically to outputs that constitute financial guidance
- Transaction data restrictions (block certain data from leaving the organization via AI prompts)
- Supervision and review workflows for AI-assisted communications
- Complete audit trail of AI-generated customer communications

### Defense and Government

- Data sovereignty: route classified requests to on-premises or sovereign cloud models only
- Content classification awareness
- Chain-of-custody audit log
- Air-gapped deployment support

---

## Integration

### Protocol Compatibility

Keeptrusts presents an OpenAI-compatible API endpoint to applications. Any application that already calls the OpenAI API can route through Keeptrusts by updating a single base URL setting — no code changes required.
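To illustrate the base-URL change, the sketch below builds a standard OpenAI-style chat completion request aimed at a gateway instead of the provider. The gateway hostname and the placeholder API key are assumptions for this example (a real deployment would use the endpoint and key issued for your team); the request path and Bearer auth follow the usual OpenAI-compatible convention:

```python
import json
import urllib.request

# Hypothetical gateway address -- in a real deployment this would be the
# Keeptrusts gateway endpoint for your team, replacing the provider's base URL.
GATEWAY_BASE_URL = "https://gateway.example.internal/v1"

def build_chat_request(prompt: str, model: str = "gpt-4o") -> urllib.request.Request:
    """Build an OpenAI-compatible chat completion request routed via the gateway.

    The payload is the standard OpenAI chat schema; only the base URL differs
    from a direct-to-provider call.
    """
    payload = {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }
    return urllib.request.Request(
        url=f"{GATEWAY_BASE_URL}/chat/completions",
        data=json.dumps(payload).encode("utf-8"),
        headers={
            "Content-Type": "application/json",
            "Authorization": "Bearer <your-keeptrusts-api-key>",  # placeholder
        },
        method="POST",
    )

req = build_chat_request("Summarize our Q3 incident report.")
print(req.full_url)  # → https://gateway.example.internal/v1/chat/completions
```

The same redirection applies to the official SDKs: pointing an OpenAI-compatible client's base URL at the gateway is the only change, since the request and response shapes are unchanged.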
### SDKs and Languages

Works with any language or SDK: Python (openai, anthropic, langchain, llamaindex), TypeScript/JavaScript, Go, Java, .NET, Ruby, and any HTTP client.

### Observability

Emits OpenTelemetry (OTLP) traces compatible with Datadog, Grafana, Jaeger, Honeycomb, Lightstep, and any OTLP-compatible backend.

### Git-backed Configuration

Policy configurations can be version-controlled in a Git repository. Keeptrusts syncs automatically when changes are pushed — enabling GitOps workflows for AI policy management.

---

## Frequently Asked Questions

**What is Keeptrusts?**
Keeptrusts is the AI environment for organizations. It sits between your application and AI providers to route traffic, enforce policy, improve output quality, and keep a complete request history.

**How is Keeptrusts different from an API gateway?**
Traditional API gateways handle authentication and routing for REST/GraphQL traffic. Keeptrusts understands AI request semantics — it can evaluate the content of prompts and responses to enforce policies (redact PII, block harmful content, apply compliance disclaimers) in ways a generic proxy cannot.

**Does Keeptrusts store my prompts and responses?**
Yes, by default — this is the audit trail feature. Keeptrusts can be configured to skip storage for certain request types, to mask specific fields before storage, or to apply a retention window for automatic deletion.

**Does Keeptrusts increase latency?**
In most environments, negligibly. The gateway is a lightweight binary with a very low-overhead policy evaluation path. The additional latency is typically 1–5 ms, depending on policy complexity.

**Can Keeptrusts work with local / self-hosted models?**
Yes. Any model served via an OpenAI-compatible endpoint (Ollama, vLLM, llama.cpp server) works with Keeptrusts. The gateway treats them like any other provider.

**How does Keeptrusts handle multi-tenancy?**
Organizations can define teams within Keeptrusts.
Each team gets its own API key scope, policy chain, event history, and spend budget. Events and traces from one team are not visible to another team.

**Can I use Keeptrusts without changing my application code?**
Mostly yes. Updating the base URL setting to point at the Keeptrusts gateway is typically the only change needed for applications that use an OpenAI-compatible SDK.

**What happens if the Keeptrusts gateway goes down?**
The gateway can be configured for high availability (multiple replicas behind a load balancer). Applications can also implement fallback logic to route directly to providers. Keeptrusts does not add a single point of failure if deployed correctly.

**Is there a free tier?**
Contact sales@keeptrusts.com for current pricing. Most evaluations start with a trial environment.

**Does Keeptrusts support streaming responses?**
Yes. Streaming (server-sent events) is supported end-to-end. The gateway buffers just enough to evaluate output policies before forwarding chunks to the caller.

---

## Company

- **Name**: Keeptrusts
- **Founded**: 2026
- **Location**: Barcelona, Catalonia, Spain
- **Website**: https://www.keeptrusts.com
- **Contact**: contact@keeptrusts.com
- **Sales**: sales@keeptrusts.com
- **Security**: security@keeptrusts.com

---

## Key URLs

- Homepage: https://www.keeptrusts.com
- Documentation: https://docs.keeptrusts.com
- Sitemap: https://www.keeptrusts.com/sitemap-index.xml
- Structured summary (llms.txt): https://www.keeptrusts.com/llms.txt