Koovis Workforce

Security, trust, and the Trust Ladder.

Autonomous agents with access to your code, data, and accounts need to be safe by default. Here's how we think about it, and the mechanisms we build to keep blast radius low.

The Trust Ladder

Trust is earned, not granted.

Agent autonomy is a ladder, not a switch. Every agent starts at L0. You grant scope incrementally as you see work you trust. Sensitive actions always gate, regardless of level.

L0 Propose

Every agent action is a proposal that requires human approval before execution. Default for new users and sensitive workflows.

L1 Scoped

Agent executes within explicit scopes you define (specific repos, specific projects, specific tools). Actions outside scope still require approval.

L2 Autonomous-within-scope

Agent executes autonomously within earned scopes. Sensitive actions still gate; routine tasks don't.

L3 Hands-off

Maximum autonomy on approved workflows. Sensitive actions always gate regardless of level. Reserved for mature workflows with track records.
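The ladder above can be sketched as a simple policy check. This is a minimal illustration, not Koovis's actual implementation — the level names mirror the ladder, while `SENSITIVE_ACTIONS`, the scope strings, and the `Agent` shape are hypothetical:

```python
from dataclasses import dataclass, field
from enum import IntEnum

class TrustLevel(IntEnum):
    L0_PROPOSE = 0
    L1_SCOPED = 1
    L2_AUTONOMOUS_WITHIN_SCOPE = 2
    L3_HANDS_OFF = 3

# Hypothetical set of always-gated action kinds (see "Approval gates" below).
SENSITIVE_ACTIONS = {"git_push_main", "db_delete", "email_send", "payment"}

@dataclass
class Agent:
    level: TrustLevel
    scopes: set = field(default_factory=set)  # e.g. {"repo:api", "project:billing"}

def requires_approval(agent: Agent, action: str, scope: str) -> bool:
    """Decide whether a proposed action must gate on human approval."""
    if action in SENSITIVE_ACTIONS:
        return True              # sensitive actions gate at every level
    if agent.level == TrustLevel.L0_PROPOSE:
        return True              # L0: every action is a proposal
    if agent.level == TrustLevel.L3_HANDS_OFF:
        return False             # L3: routine actions run hands-off
    # L1/L2: autonomous only inside explicitly granted (or earned) scopes
    return scope not in agent.scopes

# A routine action inside an earned scope runs; the same action outside it gates.
```

Note the two invariants the ladder guarantees: a sensitive action gates before any level check, and an L0 agent gates on everything.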

Security pillars

Six concrete mechanisms.

Data handling

Your conversations, workspace content, and agent actions stay yours. We don't train models on your data. Each LLM provider's privacy terms apply to the messages they process. Self-hosted deployment keeps everything inside your VPC.

Approval gates on sensitive actions

Regardless of Trust Ladder level, actions with irreversible or externally visible consequences always require explicit approval. This includes git push to main, database deletions, external email sends, payments, and anything that crosses a system boundary.
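A hedged sketch of the categories behind that list — the set contents and the `crosses_system_boundary` flag are illustrative assumptions, not the product's real taxonomy:

```python
# Hypothetical categories behind the always-gated list.
IRREVERSIBLE = {"database_delete", "payment"}
EXTERNALLY_VISIBLE = {"git_push_main", "external_email_send"}

def always_gates(action: str, crosses_system_boundary: bool = False) -> bool:
    """True if the action must get explicit approval at any trust level."""
    return (
        action in IRREVERSIBLE
        or action in EXTERNALLY_VISIBLE
        or crosses_system_boundary
    )
```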

Reversibility by default

Agent actions are designed to be reversible where possible. Code changes land as PRs (reject to undo), file operations can be rolled back, and memory mutations are versioned. The blast radius of a bad action is a PR you close, not production downtime.
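The "memory mutations are versioned" idea can be illustrated with an append-only store where every write creates a new version, so any bad write rolls back cleanly. A toy sketch, not the actual memory subsystem:

```python
class VersionedMemory:
    """Append-only store: every mutation snapshots a new version, so a
    bad write is undone by reverting to an earlier version id."""

    def __init__(self):
        self._versions = [{}]  # version 0: empty memory

    def set(self, key, value) -> int:
        snapshot = dict(self._versions[-1])   # copy-on-write snapshot
        snapshot[key] = value
        self._versions.append(snapshot)
        return len(self._versions) - 1        # new version id

    def get(self, key, version: int = -1):
        return self._versions[version].get(key)

    def rollback(self, version: int) -> None:
        # Rollback is itself a new version, so history is never destroyed.
        self._versions.append(dict(self._versions[version]))
```

Because rollback appends rather than truncates, the audit trail of what the agent wrote survives the undo.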

Audit log

Every agent action is recorded — who triggered it, what the agent did, which LLM provider ran the inference, how much it cost, and what the outcome was. Logs are queryable and exportable. Retention configurable per plan.
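The record fields below mirror what this page says is captured; the field names, the in-memory list, and the JSONL export are illustrative assumptions, not the real schema:

```python
from dataclasses import dataclass, asdict
from datetime import datetime, timezone
import json

@dataclass
class AuditRecord:
    triggered_by: str   # who triggered the action
    action: str         # what the agent did
    provider: str       # which LLM provider ran the inference
    cost_usd: float     # how much it cost
    outcome: str        # what the outcome was
    timestamp: str = ""

    def __post_init__(self):
        if not self.timestamp:
            self.timestamp = datetime.now(timezone.utc).isoformat()

log: list[AuditRecord] = []

def record(entry: AuditRecord) -> None:
    log.append(entry)

def export_jsonl() -> str:
    """Exportable: one JSON object per line, easy to feed into a SIEM."""
    return "\n".join(json.dumps(asdict(e)) for e in log)
```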

Bring your own keys

You can bring your own API keys for Claude, OpenAI, Gemini, Bedrock, and DeepSeek. Your keys, your inference costs, your privacy relationship with each provider. We pass requests through without intermediate storage.
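Pass-through key routing can be sketched in a few lines. The function name and key map are hypothetical; the point is that the customer's key is looked up per request and never persisted:

```python
# The five providers named on this page; the dict of keys is customer-supplied.
SUPPORTED_PROVIDERS = {"claude", "openai", "gemini", "bedrock", "deepseek"}

def resolve_key(provider: str, user_keys: dict[str, str]) -> str:
    """Pass-through: select the customer's own key for the given provider.
    The key is used for the outbound request and not stored in between."""
    if provider not in SUPPORTED_PROVIDERS:
        raise ValueError(f"unsupported provider: {provider}")
    if provider not in user_keys:
        raise KeyError(f"no key configured for {provider}")
    return user_keys[provider]
```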

Self-hosted option

The engine is MIT-licensed. Run it on your own infrastructure — AWS, GCP, or your own data center. For enterprise customers who need regulatory control, the hosted product will also ship a VPC deployment option.

Compliance roadmap

Honest about what we do and don't have.

We don't yet hold formal certifications. We're early, and we'd rather be clear about that than wave badges we haven't earned. Here's where we stand and where we're headed.

Today: Self-hosted deployment via MIT-licensed engine — run it on your own infra with full control.

Today: Bring-your-own-keys for all five LLM providers — your inference relationship, not ours.

Today: Complete audit log of agent actions, queryable and exportable.

Q4 2026: Hosted VPC deployment option for enterprise customers with data-residency requirements.

2027: SOC 2 Type II audit — target Q2 2027, gated on enterprise customer demand.

2027+: ISO 27001 — evaluated based on EU enterprise customer pipeline.

Not planned: FedRAMP, HIPAA, PCI-DSS. If you need these, self-hosted may be the right fit.

Incident response

Report a vulnerability.

Found a security issue? We'd rather hear about it from you than read about it in the news.

We acknowledge within 48 hours and aim to ship fixes within 14 days for high-severity issues.

For vulnerabilities in the open-source engine, use GitHub's private security advisory flow.

Questions before you evaluate?

We answer security questions on a call, with no templated refusals to security questionnaires — tell us what you need to know and we'll respond in plain English.