Why We Built a Unified DSL for Risk Intelligence

One declarative language for features, rules, and decisions - the technical philosophy behind Corint AI


When we set out to build Corint AI, we faced a fundamental question: How do you express risk intelligence in a way that's simultaneously human-readable, machine-executable, and AI-friendly?

The answer led us to create a unified Domain-Specific Language (DSL) that reimagines how risk decisions are defined, deployed, and evolved.

The Fragmentation Problem

Traditional risk platforms suffer from what we call "language fragmentation":

The Status Quo

  • Features are defined in SQL or Python ETL scripts
  • Rules are written in Java, Scala, or proprietary rule engines
  • Models are trained in notebooks and exported as black boxes
  • Decision logic is scattered across microservices in different languages

This fragmentation creates cascading problems:

Knowledge Silos

Data scientists, risk analysts, and engineers work in separate worlds. Collaboration requires constant translation between languages.

Slow Iteration

Changing a feature requires modifying SQL, updating feature pipelines, retraining models, and redeploying services. Simple updates take days.

AI Blindness

LLMs can't reason over fragmented codebases. They can't automatically generate strategies when logic is scattered across SQL, Java, and Python.

The Unified DSL Vision

We asked: What if there was one language to express everything risk-related?

Corint DSL: One Language for Everything

  • Features - Define data transformations declaratively
  • Rules - Express conditions and thresholds clearly
  • Decisions - Compose logic with transparent reasoning
  • Orchestration - Chain decisions into complex workflows

A single, declarative syntax means everyone—from risk analysts to AI agents—speaks the same language.

Design Principles

Our DSL is built on five core principles:

1. Declarative, Not Imperative

Describe what you want to check, not how to check it. The engine handles optimization and execution.

# Declarative - Say what you want
rule:
  id: high_velocity
  when:
    all:
      - transaction_count_1h > 10
      - total_amount_1h > 5000
  score: 75

# vs Imperative - Say how to do it
def check_velocity(user_id):
    count = db.query("SELECT COUNT(*) ...")
    amount = db.query("SELECT SUM(amount) ...")
    if count > 10 and amount > 5000:
        return 75
    return 0

2. Human-Readable

Risk analysts should be able to read and understand strategies without programming expertise. No cryptic syntax.

- name: user_risk_level
  type: expression
  expression: "case when credit_score < 600 then 'high' when credit_score < 700 then 'medium' else 'low' end"

No magic. Anyone can understand what this does.
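
Derived features like this plug directly into rules. A minimal sketch, assuming the same features. prefix used in the fraud-detection example later in this post; the rule id and threshold are illustrative:

# Hypothetical rule consuming the derived feature above
- rule:
    id: high_risk_user_check
    name: "High Risk User"
    when:
      all:
        - features.user_risk_level == "high"
    score: 50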

3. LLM-Friendly

AI agents need to generate, modify, and optimize strategies autonomously. The DSL syntax is designed to be easily understood and produced by LLMs.

When you describe a risk scenario in natural language—"Block transactions from new countries if velocity is high"—the AI can generate corresponding DSL code instantly.
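
For instance, that prompt might come back as something like the sketch below. The rule structure mirrors the examples later in this post; the exact feature names (country_changed, transaction_count_1h) and the score are illustrative assumptions:

# Hypothetical AI-generated rule for:
# "Block transactions from new countries if velocity is high"
- rule:
    id: new_country_high_velocity
    name: "New Country + High Velocity"
    when:
      all:
        - features.country_changed == true
        - features.transaction_count_1h > 10
    score: 90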

4. Portable and Versionable

DSL strategies are plain text files. Store them in Git. Roll back. Diff changes. Share templates. Deploy across environments.

This is how infrastructure-as-code revolutionized DevOps. We're doing the same for risk intelligence.
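
Concretely, a threshold tweak shows up in code review as an ordinary one-line diff (illustrative, using a conclusion block like the ones in this post):

 conclusion:
-  - when: total_score >= 100
+  - when: total_score >= 80
     signal: decline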

5. Business-User Empowered

Risk analysts and business personnel can adjust strategies directly without developer involvement. Change thresholds, add conditions, or modify decision logic by editing YAML—no coding required.

# Risk analyst adjusting fraud threshold
conclusion:
  - when: total_score >= 80  # Changed from 100
    signal: decline
  - when: total_score >= 50  # Changed from 60
    signal: review

Update YAML, push to production—no waiting for dev cycles. Business responds to fraud patterns in hours, not weeks.

A Real-World Example

Let's see how the unified DSL works in practice. Here's a complete fraud detection strategy:

# Define features
features:
  - name: transaction_count_24h
    description: "Count of transactions in last 24 hours"
    type: aggregation
    method: count
    datasource: postgresql_events
    entity: events
    dimension: user_id
    dimension_value: "{event.user_id}"
    window: 24h
    when: type == "transaction"

  - name: total_amount_24h
    description: "Total transaction amount in last 24 hours"
    type: aggregation
    method: sum
    datasource: postgresql_events
    entity: events
    dimension: user_id
    dimension_value: "{event.user_id}"
    field: amount
    window: 24h
    when: type == "transaction"

  - name: country_changed
    description: "Whether user's country changed recently"
    type: expression
    expression: "event.country != event.previous_country"

# Define rules
- rule:
    id: velocity_check
    name: "High Velocity Detection"
    description: "Flag high-velocity users"
    when:
      all:
        - features.transaction_count_24h > 20
        - features.total_amount_24h > 10000
    score: 60

- rule:
    id: country_anomaly
    name: "Country Change Detection"
    description: "Flag sudden country changes"
    when:
      all:
        - features.country_changed == true
        - event.account_age_days < 30
    score: 80

# Group rules into ruleset
ruleset:
  id: fraud_detection
  name: "Transaction Fraud Detection"
  rules:
    - velocity_check
    - country_anomaly
  conclusion:
    - when: total_score >= 100
      signal: decline
      reason: "High fraud risk detected"
    - when: total_score >= 60
      signal: review
      reason: "Medium risk - needs review"
    - default: true
      signal: approve
      reason: "No significant risk"

# Orchestrate with pipeline
pipeline:
  id: transaction_processing
  name: "Transaction Processing Pipeline"
  entry: extract_features

  steps:
    - step:
        id: extract_features
        name: "Extract Features"
        type: features
        features:
          - transaction_count_24h
          - total_amount_24h
          - country_changed
        next: run_fraud_check

    - step:
        id: run_fraud_check
        name: "Run Fraud Detection"
        type: ruleset
        ruleset: fraud_detection

  decision:
    - when: results.fraud_detection.signal == "decline"
      result: decline
      reason: "{results.fraud_detection.reason}"
    - when: results.fraud_detection.signal == "review"
      result: review
      reason: "{results.fraud_detection.reason}"
    - default: true
      result: approve
      reason: "Transaction approved"

Notice what just happened - a complete risk processing flow in one file:

  1. Features - Database aggregations and computations abstracted into declarative definitions
  2. Rules - Pattern-detection logic that references features via the features. prefix
  3. Ruleset - Groups rules together and produces signals (approve, decline, review) via conclusion logic
  4. Pipeline - Orchestrates the entire flow: extract features → run ruleset → make final decision

Key benefits:

  • No SQL - database queries abstracted via datasource, dimension, window
  • No Python/Java - everything expressed in unified YAML DSL
  • No deployment scripts - the pipeline is the deployment unit
  • Explainability built-in - every signal and decision includes a reason

Update this YAML file, push to Git, and the strategy updates in real-time—no code deployment, no service restarts.

The AI Multiplier Effect

Here's where it gets powerful. Because the DSL is LLM-friendly, AI agents can:

Auto-generate strategies from natural language

"Detect users who make multiple small transactions followed by a large withdrawal" → DSL code generated in seconds
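
Here is a sketch of what the generated strategy might look like, built from the aggregation and rule syntax shown earlier. The feature name, thresholds, and the compound when expression are illustrative assumptions, not confirmed DSL semantics:

features:
  - name: small_tx_count_1h
    description: "Small transactions in the last hour"
    type: aggregation
    method: count
    datasource: postgresql_events
    entity: events
    dimension: user_id
    dimension_value: "{event.user_id}"
    window: 1h
    when: type == "transaction" && amount < 100  # compound condition: assumed syntax

- rule:
    id: structuring_pattern
    name: "Small Deposits Then Large Withdrawal"
    when:
      all:
        - features.small_tx_count_1h >= 5
        - event.type == "withdrawal"
        - event.amount > 2000
    score: 85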

Analyze data and suggest features

AI examines your transaction logs and recommends new features like "time_since_last_login" or "device_fingerprint_changes"

Optimize thresholds automatically

Run A/B tests, analyze false positive rates, and adjust rule parameters autonomously

Explain decisions to users

Because the DSL is declarative, the AI can generate human-readable explanations: "Blocked because transaction velocity (25) exceeded threshold (20) and country changed"

"The DSL is the bridge between human intent and machine execution. It's where AI agents translate business needs into production-ready decisions."

Why This Matters for SMBs

Traditional risk platforms require teams of specialists:

  • Data engineers to build feature pipelines
  • Risk analysts to design strategies
  • ML engineers to train models
  • Backend developers to implement and deploy

With a unified DSL + AI agents, one technical person can:

  • Describe needs in plain English
  • Let AI generate the DSL strategy
  • Review and customize the output
  • Deploy with a Git push

This is how SMBs compete with enterprises. Not by hiring the same expensive teams—but by leveraging unified languages and AI automation.

The Open Source Advantage

Because Corint DSL is open source, the community can:

Share Strategy Templates

Build once, use everywhere. A payment fraud template created by one team can be reused by thousands of others.

Extend the Language

Add domain-specific functions. Contribute new feature types. The DSL evolves with the community's needs.

Build Custom Tooling

Create IDE plugins, visualization tools, or testing frameworks. The DSL is the foundation for an ecosystem.

Audit and Trust

Open source means transparency. See exactly how decisions are made. No black boxes.

Conclusion: Language as Infrastructure

We believe that language is infrastructure.

Just as SQL became the universal language for databases, and Kubernetes YAML for container orchestration, we envision the Corint DSL as the universal language for risk intelligence.

A unified DSL:

  • Eliminates fragmentation
  • Accelerates iteration
  • Enables AI automation
  • Empowers non-experts
  • Fosters open collaboration

This is why we built it. And why we made it open source.

See the DSL in Action

Explore the Corint DSL syntax, browse strategy templates, and start building your first risk detection rule in minutes.