Our principles
Five principles shape how we build and how we ask our customers to deploy:
- Human accountability over autonomy. AI systems on Katonic are designed to support decisions, not replace the people legally accountable for them. Every workflow can require human review at decision points.
- Transparency over opacity. Every output is traceable. Audit log entries cover prompt, retrieved context, model, parameters, output, evaluator score, and reviewer. In sovereign deployments, the customer, not Katonic, holds the source of truth.
- Customer control over vendor lock-in. Customers own their data, their fine-tuned models, their evaluation datasets. Open-source models, open-source frameworks, customer-controlled keys, exportable artifacts.
- Risk-proportionate controls. Guardrails, evaluators, and review thresholds scale with the risk of the workflow. A marketing draft has different controls than a treatment recommendation.
- Honest reporting. Where models fail or behave unexpectedly, we tell customers. Where we're not certified for a use case, we say so. Where the platform has limits, we publish them.
How we implement these principles
Human-in-the-loop by default
Every workflow built in Studio or AI Builder supports human review nodes: approval gates, side-by-side comparison, annotation, and escalation paths. High-risk workflows (regulated decisions, customer-facing actions, financial transactions) default to review-required.
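To make this concrete, here is a minimal sketch of a review gate in Python. The class, field, and method names are illustrative assumptions, not the Studio or AI Builder API:

```python
from dataclasses import dataclass, field
from enum import Enum


class ReviewDecision(Enum):
    APPROVED = "approved"
    REJECTED = "rejected"
    ESCALATED = "escalated"


@dataclass
class ReviewGate:
    """Holds a model output until a named reviewer decides its fate.

    Hypothetical sketch: not Katonic's actual API.
    """
    workflow_id: str
    require_review: bool = True            # high-risk workflows default to True
    pending: list = field(default_factory=list)

    def submit(self, draft: str, risk_tier: str) -> str:
        if self.require_review or risk_tier == "high":
            self.pending.append(draft)
            return "held-for-review"       # nothing ships without approval
        return "released"                  # low-risk: pass through, still logged

    def decide(self, draft: str, reviewer: str, decision: ReviewDecision) -> dict:
        self.pending.remove(draft)
        # Every decision is attributable to a named reviewer.
        return {"draft": draft, "reviewer": reviewer, "decision": decision.value}


gate = ReviewGate(workflow_id="loan-summary")
gate.submit("Recommend approval of application #1042", risk_tier="high")
print(gate.decide("Recommend approval of application #1042",
                  reviewer="j.okafor", decision=ReviewDecision.APPROVED))
```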
Guardrails
Eleven guardrail types are available across input, output, retrieval, and tool execution: PII detection, jailbreak detection, prompt injection detection, topic restriction, content moderation, factuality verification, response coherence, bias detection, rate limiting, cost controls, and policy enforcement. Customers configure rails per workflow with risk-tiered defaults.
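As a rough illustration of how risk-tiered defaults compose with per-workflow overrides (the tier names and data shapes are our sketch, not the platform's configuration schema):

```python
# Illustrative risk-tiered guardrail defaults; rail names mirror the list above.
TIER_DEFAULTS = {
    "low": {"pii_detection", "content_moderation", "rate_limiting"},
    "medium": {"pii_detection", "content_moderation", "rate_limiting",
               "prompt_injection_detection", "topic_restriction", "cost_controls"},
    "high": {"pii_detection", "content_moderation", "rate_limiting",
             "prompt_injection_detection", "jailbreak_detection",
             "topic_restriction", "factuality_verification", "response_coherence",
             "bias_detection", "cost_controls", "policy_enforcement"},
}


def rails_for(tier: str, add=(), remove=()) -> set:
    """Start from the tier default, then apply per-workflow overrides."""
    return (set(TIER_DEFAULTS[tier]) | set(add)) - set(remove)


# A marketing-draft workflow needs lighter rails than a clinical one.
print(sorted(rails_for("medium", add={"bias_detection"})))
```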
Evaluation before production
The platform ships with twelve individual evaluators and five evaluation suites. Customers run agents against evaluation datasets before promoting to production, with regression detection on every change. Rollback is built in.
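An illustrative sketch of that promotion gate: score the candidate against the dataset, compare to the stored baseline, and block on any metric regression. The function names and tolerance are assumptions, not platform APIs:

```python
def evaluate(agent, dataset, evaluators) -> dict:
    """Score an agent on an evaluation dataset; returns metric name -> mean score."""
    scores = {name: [] for name in evaluators}
    for example in dataset:
        output = agent(example["input"])
        for name, fn in evaluators.items():
            scores[name].append(fn(output, example["expected"]))
    return {name: sum(vals) / len(vals) for name, vals in scores.items()}


def promote_if_no_regression(candidate, baseline, tolerance=0.01):
    """Block promotion if any metric drops more than `tolerance` below baseline."""
    regressions = {m: (baseline[m], s) for m, s in candidate.items()
                   if s < baseline.get(m, 0.0) - tolerance}
    return len(regressions) == 0, regressions


exact_match = lambda out, exp: float(out.strip() == exp.strip())
candidate = evaluate(lambda q: q.upper(),
                     [{"input": "hi", "expected": "HI"}],
                     {"exact_match": exact_match})
print(promote_if_no_regression(candidate, {"exact_match": 0.92}))  # (True, {})
```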
Audit log and observability
Every interaction generates an immutable audit log entry. Audit logs are retained per tier (90 days, 7 years, indefinite) and exportable for compliance review. The same audit log feeds compliance evidence packs.
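One common way to make a log effectively immutable is hash-chaining. The sketch below records the fields named under our transparency principle; the field names and chaining scheme are illustrative, not the platform's actual log format:

```python
import hashlib
import json
import time


def append_entry(log: list, *, prompt, retrieved_context, model, parameters,
                 output, evaluator_score, reviewer) -> dict:
    """Append an audit entry chained to its predecessor."""
    entry = {
        "ts": time.time(),
        "prompt": prompt,
        "retrieved_context": retrieved_context,
        "model": model,
        "parameters": parameters,
        "output": output,
        "evaluator_score": evaluator_score,
        "reviewer": reviewer,
        "prev_hash": log[-1]["hash"] if log else None,
    }
    # Hashing the entry body plus the previous hash makes silent edits detectable.
    entry["hash"] = hashlib.sha256(
        json.dumps(entry, sort_keys=True, default=str).encode()
    ).hexdigest()
    log.append(entry)
    return entry
```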
Data minimization
The Knowledge Engine supports tenant isolation, document-level access control, and field-level redaction. Models are scoped to the data the user is authorized to see, not the data the deployment contains.
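A minimal sketch of authorization-scoped retrieval, assuming illustrative document and user shapes rather than the Knowledge Engine's actual schema:

```python
DOCS = [
    {"id": 1, "tenant": "acme", "acl": {"underwriting"},
     "fields": {"ssn": "123-45-6789", "income": "85k"}},
]
REDACTED_FIELDS = {"ssn"}  # field-level redaction policy (illustrative)


def retrieve(user, docs=DOCS):
    """Yield only documents the caller may see, with sensitive fields masked."""
    for d in docs:
        if d["tenant"] != user["tenant"]:            # tenant isolation
            continue
        if not d["acl"] & set(user["groups"]):       # document-level ACL
            continue
        yield {**d, "fields": {k: "[REDACTED]" if k in REDACTED_FIELDS else v
                               for k, v in d["fields"].items()}}


for doc in retrieve({"tenant": "acme", "groups": ["underwriting"]}):
    print(doc["fields"])  # {'ssn': '[REDACTED]', 'income': '85k'}
```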
Model choice and substitution
Customers choose the model (proprietary, open-weight, fine-tuned, distilled) for each workflow. Routing policies select the right model for the right job. If a model behaves unexpectedly, customers can substitute a different model without rebuilding the workflow.
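A sketch of a routing policy with substitution; the model names, route table, and failure simulation are all hypothetical:

```python
ROUTES = {
    "marketing-draft": {"model": "llama-3-70b", "fallback": "mistral-large"},
    "claims-triage": {"model": "claude-sonnet", "fallback": "qwen-72b"},
}


def call_model(name: str, prompt: str) -> str:
    if name == "llama-3-70b" and "fail" in prompt:  # simulate misbehavior
        raise RuntimeError("unexpected output")
    return f"[{name}] response"


def run(workflow: str, prompt: str) -> str:
    route = ROUTES[workflow]
    try:
        return call_model(route["model"], prompt)
    except RuntimeError:
        # Substitution is a one-line routing change, not a workflow rebuild.
        return call_model(route["fallback"], prompt)


print(run("marketing-draft", "please fail"))  # [mistral-large] response
```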
What we won't help build
Katonic's platform is general-purpose AI infrastructure. Customers determine specific applications. That said, we will not knowingly support deployments designed to:
- Conduct mass surveillance of populations not subject to lawful warrant
- Generate or distribute child sexual abuse material or non-consensual intimate imagery
- Create or operate autonomous weapons systems
- Manipulate elections through coordinated inauthentic behavior
- Discriminate unlawfully in employment, lending, housing, or essential services
- Bypass safety controls in regulated industries (medical diagnosis without clinician oversight, legal advice without lawyer oversight, treatment recommendations without provider oversight)
These are baseline restrictions in our Acceptable Use Policy. We may decline business that conflicts with them.
EU AI Act alignment
The EU AI Act categorizes AI systems by risk. The platform is designed to support customer compliance for systems classified as high-risk under Article 6 (deployments listed in Annex III: critical infrastructure, education, employment, essential services, law enforcement, migration, justice, democratic processes).
We support customer obligations under:
- Article 9 - Risk management system: risk register, mitigation tracking, residual risk attestation
- Article 10 - Data governance: training data documentation, bias testing, lineage tracking
- Article 11 - Technical documentation: auto-generated from configuration plus customer attestation
- Article 12 - Record keeping: immutable audit log with structured event taxonomy
- Article 13 - Transparency: instruction-for-use documentation, capability and limitation disclosure
- Article 14 - Human oversight: review nodes, escalation paths, override capability
- Article 15 - Accuracy, robustness, cybersecurity: evaluator suites, adversarial testing, security controls
Customers using Katonic for high-risk applications under the EU AI Act complete and sign a conformity package that maps their deployment configuration to specific articles. Sovereign deployments include conformity assessment as part of the implementation engagement.
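For illustration only, such a mapping can be treated as a checklist keyed by article; the evidence items and configuration keys below are assumptions, not the actual package format:

```python
ARTICLE_EVIDENCE = {
    "Article 9": ["risk_register", "mitigation_tracking",
                  "residual_risk_attestation"],
    "Article 12": ["audit_log_retention", "event_taxonomy_version"],
    "Article 14": ["review_nodes", "escalation_paths", "override_capability"],
}


def missing_evidence(deployment_config: dict) -> dict:
    """Per article, list the evidence items the configuration does not yet cover."""
    return {article: [item for item in items if item not in deployment_config]
            for article, items in ARTICLE_EVIDENCE.items()}


config = {"risk_register": True, "audit_log_retention": "7y", "review_nodes": 3}
print(missing_evidence(config))
```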
Bias and fairness
Bias enters AI systems at multiple points: training data, fine-tuning data, retrieval data, prompts, evaluation criteria. We address each:
- Training data: We do not train foundation models. Customers select models from providers whose data sourcing they accept (commercial: Anthropic, OpenAI, Google, Mistral, Cohere; open-weight: Llama, Qwen, DeepSeek; sovereign-hosted: customer choice).
- Fine-tuning data: Customer-provided. We support data quality assessment, deduplication, and bias testing pre-fine-tuning. Customers attest to the lawfulness of their data sources.
- Retrieval data: Customer-controlled. The Knowledge Engine supports access controls, freshness rules, and source filtering.
- Evaluation: Demographic parity, equal opportunity, and predictive equality metrics are available as evaluators. Customers configure which to apply per use case (a minimal sketch of these metrics follows this list).
- Reporting: Compliance reports can include bias test results across the dimensions the customer evaluates.
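The three fairness metrics named above have standard definitions for binary classifiers: demographic parity compares positive-prediction rates across groups, equal opportunity compares true-positive rates, and predictive equality compares false-positive rates. A minimal, self-contained sketch:

```python
def rate(records, predicate):
    """Mean predicted-positive rate over records matching the predicate."""
    selected = [r for r in records if predicate(r)]
    return sum(r["y_hat"] for r in selected) / len(selected) if selected else 0.0


def fairness_gaps(records, group_a, group_b):
    in_a = lambda r: r["group"] == group_a
    in_b = lambda r: r["group"] == group_b
    return {
        # Demographic parity: P(y_hat=1 | A=a) vs P(y_hat=1 | A=b)
        "demographic_parity": abs(rate(records, in_a) - rate(records, in_b)),
        # Equal opportunity: true-positive rates (restrict to y == 1)
        "equal_opportunity": abs(
            rate(records, lambda r: in_a(r) and r["y"] == 1)
            - rate(records, lambda r: in_b(r) and r["y"] == 1)),
        # Predictive equality: false-positive rates (restrict to y == 0)
        "predictive_equality": abs(
            rate(records, lambda r: in_a(r) and r["y"] == 0)
            - rate(records, lambda r: in_b(r) and r["y"] == 0)),
    }


data = [{"group": "a", "y": 1, "y_hat": 1}, {"group": "a", "y": 0, "y_hat": 0},
        {"group": "b", "y": 1, "y_hat": 0}, {"group": "b", "y": 0, "y_hat": 1}]
print(fairness_gaps(data, "a", "b"))
```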
Explainability and traceability
Generative AI is not fully interpretable. We do not claim otherwise. What we do provide:
- Full traceability of inputs that produced an output (prompt, retrieved context, tool calls, model parameters)
- Confidence indicators where the model exposes them
- Citations to retrieved source documents in RAG workflows
- Policy-attribution for guardrail decisions (which rail blocked which output)
- Audit log entries that allow reconstruction of any past interaction
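Reconstruction is only trustworthy if the log is tamper-evident. Continuing the hash-chained sketch from the audit log section above (same illustrative entry shape), verification re-derives every hash and link:

```python
import hashlib
import json


def verify_chain(log: list) -> bool:
    """Recompute each entry's hash and check it links to its predecessor."""
    prev = None
    for entry in log:
        body = {k: v for k, v in entry.items() if k != "hash"}
        if body.get("prev_hash") != prev:
            return False  # broken link: an entry was removed or reordered
        digest = hashlib.sha256(
            json.dumps(body, sort_keys=True, default=str).encode()
        ).hexdigest()
        if digest != entry["hash"]:
            return False  # edited entry: recorded hash no longer matches
        prev = entry["hash"]
    return True
```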
Children
The platform is intended for B2B use by enterprise and government customers. We do not knowingly support deployments that produce content directed to children under 16 or that collect personal information from children under 16, except where the deployment supports a legitimate educational, healthcare, or government program with appropriate safeguards.
Reporting issues
If you encounter a model output, behavior, or platform decision that you believe violates these principles, or that may cause harm, we want to know:
- Customers: contact your CSM or open a ticket with severity flag
- End users of customer deployments: contact the customer's privacy or compliance team in the first instance; if unresolved, escalate to responsibleai@katonic.ai
- Researchers and the public: report concerns to responsibleai@katonic.ai for review by our AI Ethics Committee
- Security vulnerabilities: support@katonic.ai (separate from this channel)
We acknowledge reports within 5 business days and provide a substantive response within 30 days. Material incidents trigger customer notification through our standard incident response process.
Governance
An internal AI Ethics Committee meets monthly to review:
- New high-risk customer deployments before contract signature
- Model additions to the AI Gateway
- Reported incidents and platform behavior issues
- Changes to the Acceptable Use Policy and this page
The Committee includes engineering leadership, security and compliance, customer-facing functions, and rotating external advisors. Material decisions are documented and made available to customers as part of their security review.
Changes to this page
We update this page when our practices change. Material changes are communicated to active customers through their CSM at least 30 days before taking effect.
Contact
- General responsible AI inquiries: responsibleai@katonic.ai
- Privacy and data subject rights: support@katonic.ai
- Security: support@katonic.ai
