Agent skills

Standr.

Professional judgment for AI agents

AI agents generate correct output. The problem is what they don’t generate: consistent naming, navigable architecture, arguments that land. Standr gives your agent the domain judgment to close that gap.

Works with Claude Code, OpenAI Codex CLI, Gemini CLI, OpenCode, Pi, Cursor, GitHub Copilot, Antigravity, and Kiro.
standr/refactor
Without standr:
def add_customer(db, payload):
    return db.create(payload)

def find_customer(db, customer_id):
    return db.read(customer_id)

def build_order_total(order):
    if order is None:
        raise ValueError("order is required")
    if not hasattr(order, "items"):
        raise ValueError("order.items is required")
    total = 0.0
    for item in order.items:
        if item is not None and hasattr(item, "price"):
            total += item.price
    return total

With standr:
class Item:
    price: float

class Order:
    items: list[Item]

def create_customer(db, payload):
    return db.create(payload)

def read_customer(db, customer_id):
    return db.read(customer_id)

def build_order_total(order: Order) -> float:
    return sum(item.price for item in order.items)

AI output has a smell

You know it when you see it. Five endpoints that each invented their own conventions. An architecture that’s technically layered but structurally flat. A pitch deck that lists facts without ever landing an argument.

The output is correct. It just doesn’t look like someone cared.

Skills, not prompt packs

Prompt pack

A static text file. Front-loads instructions and hopes the model follows them. Quality varies with every run. Nothing adapts to your codebase.

standr skill

An executable workflow with judgment encoded. Reads your code, understands context, chains decisions, and makes targeted, defensible changes.
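As a rough illustration of "judgment encoded as a workflow", here is a minimal sketch of one decision a skill might chain: detect which CRUD verbs a codebase has standardized on, then propose targeted renames for functions that drift from them. Everything here is hypothetical; the function and rule names are illustrative, not standr's actual API.

```python
# Hypothetical sketch of one skill decision: normalize CRUD verb drift.
# The verb map and function names below are illustrative assumptions,
# not standr's real implementation.

# Canonical CRUD verbs, keyed by common synonyms that drift in when
# each function invents its own convention.
CANONICAL_VERBS = {"add": "create", "find": "read", "fetch": "read",
                   "remove": "delete", "modify": "update"}

def suggest_renames(function_names):
    """Chain two steps: detect the drift, then propose targeted fixes."""
    suggestions = {}
    for name in function_names:
        verb, _, rest = name.partition("_")
        if verb in CANONICAL_VERBS:
            suggestions[name] = f"{CANONICAL_VERBS[verb]}_{rest}"
    return suggestions

print(suggest_renames(["add_customer", "find_customer", "create_order"]))
# {'add_customer': 'create_customer', 'find_customer': 'read_customer'}
```

A static prompt would have to describe this rule and hope; a workflow applies it deterministically to the names it actually reads from your code.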

Install

$ npx standr install

The best AI output is the kind nobody recognizes as AI.