Red Hat Certified Operator
The agentic application layer that completes Red Hat AI Enterprise. 80+ pre-built agents, 240+ governed tools, three-layer guardrails, running on the OpenShift you already operate.
Sovereign by design. Your data never leaves your cluster.
Red Hat AI Enterprise is a world-class LLMOps platform: training, fine-tuning, model serving, model governance. Katonic is the agentic application layer: agent runtime, builders, governed tools, knowledge, end-user portal. Two complete platforms. One OpenShift cluster.
Katonic adds
The agentic application layer
Workroom
End-user portal
Marketplace
80+ pre-built agents
5 Build Paths
No-code to BYOA
Agent Runtime
Framework-agnostic, production-hardened
Knowledge Engine
50+ connectors, hybrid RAG
AI Gateway
2,600+ models, 140+ providers
MCP Tool Gateway
240+ governed tool servers
3-Layer Guardrails
Content safety + trajectory instrumentation + governance proxy
Per-org Silo Isolation
Multi-tenant, hardware-enforced
Red Hat AI Enterprise brings
The LLMOps foundation
Training Hub
Fine-tuning, SDG, Kubeflow Trainer
TrustyAI
Bias and drift monitoring
Llama Stack
Inference and embedding API
vLLM and llm-d
Distributed model serving
OpenShift AI
Pipelines, registry, KServe
OpenShift + GPU Operator
Kubernetes substrate, GPU lifecycle
The decoupled platform pattern. BCG calls 2025 the year agent platforms decouple from the systems they run on, "to facilitate reuse and scaling." The joint stack is exactly that: agent logic and orchestration in one platform, the LLMOps substrate in another, your existing systems unchanged.
BCG, Nov 2025

Together, you ship a complete enterprise agentic stack on the OpenShift you already run. No new platform to evaluate, no new team to hire, no new vendor for your platform team to manage.
BCG's enterprise agent research identifies ten components every production agent platform must deliver. Red Hat AI Enterprise plus Katonic covers nine of them by design. The tenth is your existing data platform - we connect to it, we don't replace it.
Provide guardrails as a service to all GenAI apps.
Three-layer: content safety models + trajectory instrumentation + governance proxy. 8 rail types, 13 PII entities, 7 content categories.
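The PII-rail idea can be sketched in a few lines. This is a minimal illustration only: the entity names and regex patterns below are assumptions for the example, while the actual guardrail service uses trained content-safety models across its 13 PII entity types, not bare regexes.

```python
import re

# Hypothetical subset of PII entity patterns (illustrative, not the
# shipped rail definitions).
PII_PATTERNS = {
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "PHONE": re.compile(r"\b\+?\d[\d\s\-]{7,}\d\b"),
}

def scan_pii(text: str) -> list[tuple[str, str]]:
    """Return (entity_type, matched_text) pairs found in the text."""
    hits = []
    for entity, pattern in PII_PATTERNS.items():
        for match in pattern.finditer(text):
            hits.append((entity, match.group()))
    return hits

def apply_rail(text: str) -> str:
    """Redact detected PII before the prompt ever reaches a model."""
    for entity, value in scan_pii(text):
        text = text.replace(value, f"<{entity}>")
    return text
```

A rail like this sits in the request path, so redaction happens before the prompt leaves the cluster.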
Robust prompt lifecycle, agent evaluation, and observability.
Red Hat Training Hub, MLflow registry, evaluation pipelines. Katonic adds prompt versioning and batch eval.
Store MCP server, tool, and agent definitions in one place.
240+ governed MCP tool servers. Full agent versioning with publish, promote, and rollback across environments.
Unified access layer for model endpoints.
AI Gateway: 2,600+ models via 140+ providers. 8 tiers, BYOK per team, budget enforcement, health-aware failover.
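Health-aware failover with budget enforcement reduces to a simple routing loop. The sketch below is an assumption-laden illustration (provider names, prices, and the `GatewayRouter` class are invented for the example), not the gateway's actual implementation:

```python
from dataclasses import dataclass

# Hypothetical provider records; names and prices are illustrative only.
@dataclass
class Provider:
    name: str
    healthy: bool
    cost_per_1k_tokens: float

class GatewayRouter:
    """Route each request to the first healthy provider whose cost
    fits the team's remaining budget (a sketch of health-aware
    failover plus budget enforcement, not the shipped gateway)."""

    def __init__(self, providers: list[Provider], team_budget_usd: float):
        self.providers = providers
        self.budget = team_budget_usd

    def route(self, est_tokens: int) -> str:
        for p in self.providers:
            cost = est_tokens / 1000 * p.cost_per_1k_tokens
            if p.healthy and cost <= self.budget:
                self.budget -= cost  # enforce spend as requests flow
                return p.name
        raise RuntimeError("no healthy provider within budget")

router = GatewayRouter(
    [Provider("primary", healthy=False, cost_per_1k_tokens=0.03),
     Provider("fallback", healthy=True, cost_per_1k_tokens=0.01)],
    team_budget_usd=5.00,
)
```

When the primary is marked unhealthy, traffic falls through to the next provider in order, and every routed request draws down the team budget.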
Rapid agent creation through visual UI and drag-and-drop.
Five build paths: Guided, AI Builder (conversational), Workflow Designer, Code Builder (IDE), BYOA upload.
Coordinates multi-agent workflows with planning and routing.
Unified runtime plus five framework adapters: LangGraph, CrewAI, Google ADK, raw Python, native.
Manages session state and enforces runtime policies.
Production-grade execution. Hot-reload via Redis Streams. Immutable AgentVersion snapshots for rollback.
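Immutable snapshots with promote and rollback can be pictured as an append-only version list plus a movable "live" pointer. The class below is a hypothetical sketch of that pattern (the real runtime stores `AgentVersion` objects and hot-reloads via Redis Streams; none of this API is Katonic's):

```python
from copy import deepcopy

class AgentVersionStore:
    """Sketch of immutable version snapshots with publish, promote,
    and rollback (illustrative only, not the actual runtime API)."""

    def __init__(self):
        self._versions: list[dict] = []   # append-only snapshots
        self._live: int | None = None     # index of the promoted version

    def publish(self, config: dict) -> int:
        self._versions.append(deepcopy(config))  # freeze a copy
        return len(self._versions) - 1

    def promote(self, version: int) -> None:
        self._live = version

    def rollback(self) -> None:
        # Step the live pointer back to the previous snapshot, if any.
        if self._live:
            self._live -= 1

    @property
    def live_config(self) -> dict:
        return deepcopy(self._versions[self._live])
```

Because `publish` deep-copies, promoting or rolling back never mutates a stored version, which is what makes rollback safe across environments.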
Recall past actions and behaviours.
Short-term context window, long-term semantic search, temporal decay. Per-org isolation.
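Temporal decay in memory retrieval usually means discounting a memory's relevance score by its age. The functions below sketch one common form, exponential half-life decay; the curve shape and the one-day half-life are assumptions for the example, not Katonic's tuning:

```python
import math  # noqa: F401  (kept for readers who swap in math.exp)

def recency_weight(age_seconds: float, half_life: float = 86_400.0) -> float:
    """Exponential temporal decay: with the default one-day half-life,
    a day-old memory scores half as much as a fresh one."""
    return 0.5 ** (age_seconds / half_life)

def rank_memories(memories, now):
    """Rank by semantic similarity discounted by age. Each memory is
    (text, similarity, stored_at_epoch_seconds)."""
    return sorted(
        memories,
        key=lambda m: m[1] * recency_weight(now - m[2]),
        reverse=True,
    )
```

Under this scheme a moderately relevant recent memory can outrank a highly relevant stale one, which is the behaviour long-term agent memory generally wants.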
Continuous traceability, cost visibility, evaluation.
Langfuse observability, OpenCost FinOps with per-agent attribution, full audit trail in ClickHouse.
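Per-agent cost attribution is, at its core, a roll-up of metered usage events keyed by agent. The sketch below illustrates that aggregation; the price table and event shape are invented for the example, and real attribution would come from the gateway's metering and OpenCost, not a hardcoded dict:

```python
from collections import defaultdict

# Illustrative per-1k-token prices (assumptions, not real rates).
PRICE_PER_1K = {"gpt-class": 0.03, "small-class": 0.002}

def attribute_costs(events):
    """Roll usage events (agent_id, model_class, tokens) up into
    per-agent spend - a sketch of FinOps attribution, not OpenCost."""
    totals = defaultdict(float)
    for agent_id, model_class, tokens in events:
        totals[agent_id] += tokens / 1000 * PRICE_PER_1K[model_class]
    return dict(totals)
```

Keying spend by agent rather than by cluster or namespace is what lets a platform team answer "which agent is costing us money" directly.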
Structured and unstructured data sources agents access.
Your existing Snowflake, Databricks, SAP, S3, SharePoint. Katonic Knowledge Engine connects via 50+ permission-aware connectors.
8
Delivered by Katonic
Guardrails, MCP, Gateway, Builders, Framework, Runtime, Memory, FinOps
1
Joint with Red Hat
LLMOps: Training Hub for models, Katonic for prompts and agent eval
1
Your existing data platform
Snowflake, Databricks, SAP, SharePoint, S3 - we connect, never duplicate
Framework adapted from BCG, "Building Effective Enterprise Agents," AI Platforms Group, November 2025. Component definitions paraphrased for clarity.
Seven layers, one stack. Red Hat owns the inference substrate and LLMOps. Katonic owns the agent runtime, builders, knowledge, and end-user portal. Hardware stays where it is.
Ships as a Red Hat Certified Operator
One install on any OpenShift 4.14+ cluster. Standard OperatorHub. Native lifecycle. Zero special configuration.
Open standards everywhere
OpenAI-compatible APIs. MCP for tools. Llama Stack as an embedding backend. Kubeflow for training. No proprietary SDKs.
Sovereign by design
Air-gapped, on-prem, hybrid, VPC. Same codebase, same UX, your cluster. Per-org silo isolation for telcos and MSPs.
Workroom · Marketplace · Distributor Console
Where employees and tenants use agents
Agent Runtime + 5 Build Paths
Framework-agnostic, production-hardened
AI Gateway · Knowledge Engine · MCP Gateway · Guardrails
Governed access to models, data, tools
Red Hat AI Enterprise
Training Hub, TrustyAI, Llama Stack, KServe, vLLM, llm-d
Red Hat OpenShift Container Platform
Kubernetes substrate, networking, storage
GPU Operator + KAI Scheduler
GPU lifecycle and workload placement
Customer hardware
NVIDIA, AMD, IBM Power. Bare metal, VMware, ROSA, ARO
Each Red Hat AI Enterprise component has a documented Katonic integration: AI Gateway as a vLLM and llm-d provider, Knowledge Engine targets Llama Stack, Studio submits to Training Hub, observability includes TrustyAI signals.
Hyperscaler agent platforms are fast to start and cheap to abandon. They send your data, your tools, and your domain knowledge to someone else's cloud. Red Hat AI Enterprise plus Katonic is the alternative: full agentic depth, on your cluster, governed by your policies.
Top-left
Sovereign, no agents
Generic Kubernetes, build it yourself
Top-right ● Winning quadrant
Sovereign + full agentic
Red Hat AI Enterprise + Katonic
Bottom-left
Cloud + basic LLM
Direct API to OpenAI etc.
Bottom-right
Cloud-locked agents
Hyperscaler agent platforms
If your agents need to run inside your perimeter, talk to your data without leaving your cluster, and use models you can switch out tomorrow, the joint stack is the only option that ships in 2026.
Two production deployments at the scale enterprises and telcos demand: sovereign, regulated, multi-tenant, national. The architecture pattern your platform team can replicate on your OpenShift cluster today.
MODON
● Live · Saudi Arabia · Industrial · Government
MODON selected Katonic as the foundation for AI services across Saudi Arabia's industrial ecosystem. Sovereign in-kingdom per NDMO. Vision 2030 aligned. Delivered through Takween AI and ASAS.
36+
Industrial cities
8wk
vs 18-month custom build
100%
In-kingdom data
Pilipinas AI
● Live · Philippines · Telco · National platform
ePLDT white-labeled Katonic to launch a national sovereign AI platform. Banks and government agencies as anchor tenants. Full platform: Workroom, Studio, Control Room. Zero data egress.
115M
Citizens served
90d
Signing to live
0
Bytes left country
We don't start with a year-long custom build. We start with what already works on your OpenShift. Four phases, twelve weeks, one production deployment. The same pattern took Pilipinas AI from contract to serving 115 million citizens.
Week 0
30-minute briefing with your platform and AI architects. Review existing OpenShift, GPU inventory, data sources, security posture, target use cases.
Week 1-4
Install Katonic Certified Operator on existing OpenShift. Configure SSO via Keycloak federation. Connect first knowledge sources. Validate guardrail policies.
Week 5-8
Activate first agents from the marketplace. Customize for your domain. Wire up MCP tool servers for your enterprise systems. Train champion users.
Week 9-12
Production rollout to first business unit. Eval gates active. FinOps tracking per agent. Dedicated CSM. Path to enterprise-wide expansion.
Concurrent enablement. Your platform team gets Operator-level training. Builders get Studio enablement. Admins get Control Room training. All before week 12.
See deployment FAQ →

OpenShift · Self-managed · On bare metal, VMware, KVM
ROSA · Managed · Red Hat + AWS jointly managed
ARO · Managed · Red Hat + Azure jointly managed
OpenShift Dedicated · Managed · Red Hat managed on AWS or GCP
ROKS · Managed · Hosted by IBM Cloud
Air-gapped OpenShift · Disconnected · Zero data egress, FIPS-ready
OpenShift Virtualization · Self-managed · VMware migration target
Single Node OpenShift · Edge · SNO for branch and edge sites
A 30-minute assessment: your OpenShift, your data sources, your top three use cases.
Leave with an architecture, a sizing, and a deployment plan.
