Commens

A shared, authoritative knowledge layer for enterprise AI.

Commens turns scattered enterprise context into agent-ready knowledge: curated, structured, permissioned, versioned, and continuously improved through feedback and real usage. It sits upstream of models, agents, and orchestration, shaping and governing the knowledge they all depend on.

Architecture position

Models generate. Agents execute. Commens shapes the intelligence all of these depend on.

When an orchestration layer asks what an agent should be allowed to do, the answer comes from Commens. When an agent needs organizational context, constraints, or policy, that knowledge comes from Commens. When a model needs to act within bounds, those bounds are encoded in the authoritative knowledge Commens provides.

The stack, top to bottom:

  • Interfaces: Chat, IDE, Workflow apps, Copilots
  • Agents: Research, Review, Ops, Custom
  • Orchestration: Runtime, Tool use, Routing
  • Commens (authoritative knowledge layer): Memory, Policy, Identity, Usage, Collaborative oversight. One shared, reviewable source of record.
  • Models: OpenAI, Anthropic, Google, Meta, Mistral
  • Enterprise systems: IAM, Policy engines, Data sources, Inventory

Core capabilities

Seven things AI needs to know, delivered as one layer.

Authoritative context

AI needs to know: goals, preferences, constraints, history.

Structured knowledge store — curated, versioned, and trusted as the source of record for every agent that needs to reason about your organization.

Policy as knowledge

AI needs to know: rules, boundaries, compliance, organizational constraints.

Policy encoded in context so agents internalize bounds rather than hitting gates after an action has already been attempted.

Feedback intelligence

AI needs to know: what worked, what failed, how to improve.

Structured feedback loops that turn experience into durable knowledge — reviewable, auditable, and reusable across teams.

Usage-driven curation

AI needs to know: tacit context, decisions, and review artifacts revealed while completing work.

Capture and distill real human-AI interactions, approvals, exceptions, and rationales into reusable knowledge and precedents.

Collaborative oversight

AI needs to know: reviews, approvals, exceptions, escalations, rationales.

A shared oversight layer where teams review context, approve exceptions, refine policy, and turn review decisions into reusable artifacts instead of one-off thread replies.

Organizational identity

AI needs to know: who is acting, what they are permitted to do, and which boundaries apply to them.

Identity and permissions treated as context that shapes behavior upstream, not as a separate access-control layer bolted on after the fact.

Cross-system awareness

AI needs to know: context that spans tools, agents, and workflows.

Knowledge governance that extends across the organizational surface, so context does not fragment back into per-tool silos.

The knowledge store

Not a repository. A curation system.

Commens normalizes scattered inputs, preserves provenance, encodes permissions, maintains freshness, and turns successful use into reusable intelligence.
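One way to picture an entry in such a store is a record that carries its provenance, permissions, version, and freshness budget alongside the content itself. This is a sketch under stated assumptions — the field names and the freshness rule are illustrative, not Commens's actual data model:

```python
from dataclasses import dataclass
from datetime import datetime, timedelta, timezone

# Illustrative sketch of a curated knowledge entry. Every field name
# here is an assumption made for the example.

@dataclass
class KnowledgeEntry:
    key: str
    content: str
    source: str              # provenance: where this knowledge came from
    allowed_roles: set[str]  # permissions encoded on the entry itself
    version: int             # curation history, not just a blob
    updated_at: datetime
    max_age: timedelta       # freshness budget before re-curation is due

    def is_fresh(self, now: datetime) -> bool:
        return now - self.updated_at <= self.max_age

    def visible_to(self, role: str) -> bool:
        return role in self.allowed_roles

entry = KnowledgeEntry(
    key="pricing-policy",
    content="Discounts above 20% require VP approval.",
    source="sales-handbook v3",
    allowed_roles={"sales", "finance"},
    version=4,
    updated_at=datetime(2024, 1, 1, tzinfo=timezone.utc),
    max_age=timedelta(days=90),
)
```

The design choice the sketch highlights: provenance, permissions, and freshness travel with the entry, so any consumer downstream can check them without a separate system of record.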

Memory

Curated context — goals, preferences, history, institutional knowledge. Not raw data. Not documents. Not prompts. Structured, evolving, authoritative knowledge.

Policy

Organizational rules, compliance requirements, constraints, and boundaries encoded as knowledge that agents internalize. When AI knows your policies, it does not need a gate to stop it — it acts within bounds because the bounds are part of what it knows.
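The difference between a gate and internalized policy can be made concrete with a small sketch: rather than checking an action after the fact, applicable policy text is composed into the context the agent reasons with. The policy store and prompt shape below are assumptions for illustration only:

```python
# Hypothetical sketch of "policy as knowledge". The POLICIES mapping and
# the prompt layout are invented for the example.

POLICIES = {
    "spend": "Purchases over $500 require a manager approval reference.",
    "data": "Customer PII may not leave EU-hosted systems.",
}

def build_agent_context(task: str, relevant: list[str]) -> str:
    """Prepend the applicable policies, so the bounds are part of what
    the agent knows before it decides — not a gate hit afterward."""
    lines = ["Organizational constraints:"]
    lines += [f"- {POLICIES[name]}" for name in relevant]
    lines += ["", f"Task: {task}"]
    return "\n".join(lines)

ctx = build_agent_context("Order replacement laptops", relevant=["spend"])
```

Note that only the policies relevant to the task are composed in; the gate-based alternative would let the agent act first and block it afterward.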

Identity & permissions

Who is acting, what they are authorized to access, what boundaries apply. Not just access controls — context that shapes behavior upstream of the decision.

Usage traces

Task-level interactions, clarifications, and resolved ambiguity — together with the approvals, exceptions, escalations, and recorded rationales that sit alongside them. First-class inputs, not logs.

Collaborative oversight

Where teams review, approve, and refine — and the decisions become durable.

Oversight is an operating mechanism, not a philosophical layer. Teams review the context agents act on. Stakeholders approve exceptions and escalate edge cases. Subject matter experts refine policy. Every non-routine decision — the approval, the exception, the rationale, the precedent — is captured, versioned, and made reusable, so the next similar case inherits the judgment of the last one.

Step 1. Interaction: an agent or team completes real work in context.

Step 2. Review: humans inspect the context, decision, and outcome.

Step 3. Decision: approval, exception, or refinement is recorded with rationale.

Step 4. Precedent: the decision becomes a reusable artifact the next case inherits.

What Commens is not

An honest product boundary beats a vague one.

Commens works alongside IAM, runtime policy engines, orchestration platforms, and AI inventory tools. It does not try to replace them. It sits upstream and shapes the knowledge, policy context, permissions context, and review artifacts they rely on.

  • Not IAM
    Works with IAM — feeding it the permissions context that shapes what an actor can know.
  • Not a runtime policy engine
    Works with runtime policy — giving gates the authoritative rules and rationale they enforce.
  • Not an orchestration platform
    Works with orchestration — supplying the memory and policy agents draw on during execution.
  • Not an AI inventory tool
    Works with inventory — turning it into reviewable knowledge about where AI is actually used.

Better-shaped, better-curated knowledge improves judgment, execution, learning, trust, and control at once — not as a tradeoff but as the same motion.

The sanctioned path has to be better than the ad hoc one. That is the bar Commens is designed to meet.