Transform plain English descriptions into production-ready AI agents with intelligent parsing, automatic policy detection, and multi-language code generation.
Just describe what you want your agent to do, and AgentOS handles the rest: extracting data sources, detecting schedules, recommending policies, and generating clean, type-safe code.
The `@agentos create` command is your gateway to building agents from natural language. It uses advanced NLP to understand your intent and generates complete, deployable agent code.

```
@agentos create [description]
```

Where `[description]` is a natural language description of what you want your agent to do.
| Option | Description | Example |
|---|---|---|
| `--lang` | Target programming language | `--lang typescript` |
| `--policy` | Specify compliance framework | `--policy gdpr` |
| `--output` | Output directory for generated files | `--output ./agents/` |
| `--template` | Base template to extend | `--template data-processor` |
| `--interactive` | Enable multi-turn clarification | `--interactive` |
| `--dry-run` | Preview without generating files | `--dry-run` |
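Options can be combined in a single command. The description and channel name below are illustrative:

```
@agentos create "summarize new Jira tickets every weekday at 9am and post them to #standup" --lang typescript --dry-run
```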
Use `--interactive` for complex agents. AgentOS will ask clarifying questions to ensure the generated agent matches your exact needs.
The NL parser is the brain behind agent creation. It analyzes your description and extracts key components to build your agent.
- **Agent naming:** Automatically generates a meaningful name from your description, or extracts it if you specify one. Example: "Create a sales report generator" → `SalesReportGeneratorAgent`
- **Data sources:** Identifies where your agent needs to read data from: Slack, GitHub, Jira, databases, APIs, and more. Example: "Pull issues from GitHub" → `source: github.issues`
- **Destinations:** Determines where results should be sent: Slack channels, email, databases, webhooks, or files. Example: "send summary to #reports" → `destination: slack.channel`
- **Scheduling:** Parses timing expressions and converts them to cron schedules or event triggers. Example: "every Monday at 9am" → `cron: "0 9 * * 1"`
- **Task decomposition:** Breaks complex tasks into sequential or parallel sub-tasks with proper data flow. Example: "fetch, analyze, report" → `[fetch] → [analyze] → [report]`
- **Permissions:** Infers required permissions and generates minimal-privilege access configurations. Example: "read customer database" → `db.customers: read-only`
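Running with `--dry-run` previews the extracted components before any files are written. The sketch below is illustrative only; the actual preview layout and field names may differ:

```yaml
# Illustrative --dry-run preview (field names are assumptions)
agent:
  name: SalesReportGeneratorAgent
  language: python
sources:
  - github.issues
destinations:
  - slack.channel: "#reports"
schedule:
  cron: "0 9 * * 1"   # every Monday at 9am
tasks:
  - fetch
  - analyze
  - report
permissions:
  db.customers: read-only
```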
| Category | Sources | Trigger Keywords |
|---|---|---|
| Communication | Slack, Microsoft Teams, Discord, Email | messages, channels, threads, inbox |
| Development | GitHub, GitLab, Bitbucket, Azure DevOps | issues, PRs, commits, repositories |
| Project Management | Jira, Asana, Linear, Trello, Monday | tickets, tasks, sprints, projects |
| Databases | PostgreSQL, MySQL, MongoDB, Redis | database, table, collection, records |
| APIs | REST, GraphQL, gRPC, WebSocket | API, endpoint, webhook, service |
| Storage | S3, Azure Blob, GCS, local files | files, bucket, storage, upload |
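Keywords from several categories can appear in one description. A prompt that mentions both messages and tickets, for example, maps to a Communication source and a Project Management source; the source identifiers shown are illustrative:

```
@agentos create "each morning, summarize unread Slack messages and open Jira tickets for my team"
→ detected sources: slack.messages, jira.tickets
```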
AgentOS analyzes your agent's data access patterns and automatically recommends appropriate security policies and compliance frameworks.
| Framework | Auto-Detected When | Key Protections |
|---|---|---|
| `gdpr` | EU customer data, email, names | Consent tracking, right to deletion, data portability |
| `hipaa` | Health records, medical data | PHI encryption, access controls, audit trails |
| `soc2` | Multi-tenant systems, SaaS | Access logging, change management, availability |
| `pci-dss` | Payment data, card numbers | Tokenization, encryption, network segmentation |
| `ccpa` | California consumer data | Opt-out mechanisms, disclosure requirements |
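Auto-detection can also be made explicit with the `--policy` flag. For example (the description is illustrative):

```
@agentos create "generate a weekly report of EU customer email signups" --policy gdpr
```

Without the flag, the reference to EU customer emails would trigger the same `gdpr` recommendation automatically.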
AgentOS generates production-ready code in Python, TypeScript, and Go. Each language follows idiomatic patterns and includes full type safety.
For example, the prompt

```
@agentos create an agent that monitors GitHub issues labeled "urgent" in my repos,
analyzes sentiment, and sends a Slack alert to #engineering if negative
```

produces the following Python agent:

```python
"""
GitHub Urgent Issue Monitor Agent
Generated by AgentOS Copilot Extension
"""
from agent_os import KernelSpace, Policy
from agent_os.integrations import GitHubClient, SlackClient
from agent_os.analysis import SentimentAnalyzer
from typing import List, Dict, Any
import asyncio

# Initialize kernel with security policies
kernel = KernelSpace(
    policy=Policy.load("standard"),
    audit_logging=True
)

# Configure integrations
github = GitHubClient(scope="repo:read")
slack = SlackClient(channel="#engineering")
sentiment = SentimentAnalyzer(model="roberta-base")


@kernel.agent(
    name="UrgentIssueMonitor",
    schedule="*/15 * * * *",  # Every 15 minutes
    description="Monitors urgent GitHub issues and alerts on negative sentiment"
)
async def monitor_urgent_issues(repos: List[str]) -> Dict[str, Any]:
    """
    Fetch urgent issues, analyze sentiment, and alert if negative.

    Args:
        repos: List of repository names to monitor

    Returns:
        Summary of processed issues and alerts sent
    """
    alerts_sent = 0
    processed = 0

    for repo in repos:
        # Fetch issues with "urgent" label (policy-checked)
        issues = await github.issues.list(
            repo=repo,
            labels=["urgent"],
            state="open"
        )

        for issue in issues:
            processed += 1

            # Analyze sentiment (rate-limited by policy)
            analysis = await sentiment.analyze(
                text=f"{issue.title} {issue.body}",
                return_score=True
            )

            # Alert if sentiment is negative
            if analysis.sentiment == "negative" and analysis.score > 0.7:
                await slack.send(
                    text=f"⚠️ *Urgent Issue Alert*\n"
                         f"Repo: {repo}\n"
                         f"Issue: <{issue.url}|#{issue.number} {issue.title}>\n"
                         f"Sentiment: {analysis.sentiment} ({analysis.score:.0%})\n"
                         f"Author: @{issue.user.login}",
                    priority="high"
                )
                alerts_sent += 1

    return {
        "status": "success",
        "issues_processed": processed,
        "alerts_sent": alerts_sent
    }


if __name__ == "__main__":
    result = asyncio.run(
        kernel.execute(
            monitor_urgent_issues,
            repos=["my-org/api", "my-org/frontend", "my-org/backend"]
        )
    )
    print(f"Processed {result['issues_processed']} issues, sent {result['alerts_sent']} alerts")
```
The TypeScript version of the same agent:

```typescript
/**
 * GitHub Urgent Issue Monitor Agent
 * Generated by AgentOS Copilot Extension
 */
import { KernelSpace, Policy, Agent } from '@agent-os/core';
import { GitHubClient, SlackClient } from '@agent-os/integrations';
import { SentimentAnalyzer } from '@agent-os/analysis';

interface Issue {
  number: number;
  title: string;
  body: string;
  url: string;
  user: { login: string };
}

interface MonitorResult {
  status: 'success' | 'error';
  issuesProcessed: number;
  alertsSent: number;
}

// Initialize kernel with security policies
const kernel = new KernelSpace({
  policy: Policy.load('standard'),
  auditLogging: true,
});

// Configure integrations
const github = new GitHubClient({ scope: 'repo:read' });
const slack = new SlackClient({ channel: '#engineering' });
const sentiment = new SentimentAnalyzer({ model: 'roberta-base' });

@Agent({
  name: 'UrgentIssueMonitor',
  schedule: '*/15 * * * *', // Every 15 minutes
  description: 'Monitors urgent GitHub issues and alerts on negative sentiment',
})
class UrgentIssueMonitorAgent {
  async execute(repos: string[]): Promise<MonitorResult> {
    let alertsSent = 0;
    let processed = 0;

    for (const repo of repos) {
      // Fetch issues with "urgent" label (policy-checked)
      const issues = await github.issues.list({
        repo,
        labels: ['urgent'],
        state: 'open',
      });

      for (const issue of issues) {
        processed++;

        // Analyze sentiment (rate-limited by policy)
        const analysis = await sentiment.analyze({
          text: `${issue.title} ${issue.body}`,
          returnScore: true,
        });

        // Alert if sentiment is negative
        if (analysis.sentiment === 'negative' && analysis.score > 0.7) {
          await slack.send({
            text: [
              '⚠️ *Urgent Issue Alert*',
              `Repo: ${repo}`,
              `Issue: <${issue.url}|#${issue.number} ${issue.title}>`,
              `Sentiment: ${analysis.sentiment} (${(analysis.score * 100).toFixed(0)}%)`,
              `Author: @${issue.user.login}`,
            ].join('\n'),
            priority: 'high',
          });
          alertsSent++;
        }
      }
    }

    return {
      status: 'success',
      issuesProcessed: processed,
      alertsSent,
    };
  }
}

// Execute agent
const agent = new UrgentIssueMonitorAgent();
kernel.register(agent);
kernel.execute(agent, {
  repos: ['my-org/api', 'my-org/frontend', 'my-org/backend'],
}).then((result) => {
  console.log(`Processed ${result.issuesProcessed} issues, sent ${result.alertsSent} alerts`);
});
```
The Go version:

```go
// GitHub Urgent Issue Monitor Agent
// Generated by AgentOS Copilot Extension
package main

import (
    "context"
    "fmt"
    "log"

    "github.com/agent-os/sdk-go/kernel"
    "github.com/agent-os/sdk-go/integrations/github"
    "github.com/agent-os/sdk-go/integrations/slack"
    "github.com/agent-os/sdk-go/analysis/sentiment"
)

// MonitorResult holds the execution results
type MonitorResult struct {
    Status          string `json:"status"`
    IssuesProcessed int    `json:"issues_processed"`
    AlertsSent      int    `json:"alerts_sent"`
}

// UrgentIssueMonitor monitors GitHub issues for negative sentiment
type UrgentIssueMonitor struct {
    kernel    *kernel.KernelSpace
    github    *github.Client
    slack     *slack.Client
    sentiment *sentiment.Analyzer
}

// NewUrgentIssueMonitor creates a new monitor agent
func NewUrgentIssueMonitor() *UrgentIssueMonitor {
    k := kernel.New(kernel.Config{
        Policy:       kernel.PolicyLoad("standard"),
        AuditLogging: true,
    })

    return &UrgentIssueMonitor{
        kernel:    k,
        github:    github.NewClient(github.WithScope("repo:read")),
        slack:     slack.NewClient(slack.WithChannel("#engineering")),
        sentiment: sentiment.NewAnalyzer(sentiment.WithModel("roberta-base")),
    }
}

// Execute runs the monitoring task
func (m *UrgentIssueMonitor) Execute(ctx context.Context, repos []string) (*MonitorResult, error) {
    var alertsSent, processed int

    for _, repo := range repos {
        // Fetch issues with "urgent" label (policy-checked)
        issues, err := m.github.Issues.List(ctx, github.IssueFilter{
            Repo:   repo,
            Labels: []string{"urgent"},
            State:  "open",
        })
        if err != nil {
            return nil, fmt.Errorf("failed to fetch issues for %s: %w", repo, err)
        }

        for _, issue := range issues {
            processed++

            // Analyze sentiment (rate-limited by policy)
            analysis, err := m.sentiment.Analyze(ctx, sentiment.Input{
                Text:        fmt.Sprintf("%s %s", issue.Title, issue.Body),
                ReturnScore: true,
            })
            if err != nil {
                log.Printf("Sentiment analysis failed for issue #%d: %v", issue.Number, err)
                continue
            }

            // Alert if sentiment is negative
            if analysis.Sentiment == "negative" && analysis.Score > 0.7 {
                msg := fmt.Sprintf(
                    "⚠️ *Urgent Issue Alert*\nRepo: %s\nIssue: <%s|#%d %s>\nSentiment: %s (%.0f%%)\nAuthor: @%s",
                    repo, issue.URL, issue.Number, issue.Title,
                    analysis.Sentiment, analysis.Score*100, issue.User.Login,
                )

                if err := m.slack.Send(ctx, slack.Message{
                    Text:     msg,
                    Priority: slack.PriorityHigh,
                }); err != nil {
                    log.Printf("Failed to send Slack alert: %v", err)
                    continue
                }
                alertsSent++
            }
        }
    }

    return &MonitorResult{
        Status:          "success",
        IssuesProcessed: processed,
        AlertsSent:      alertsSent,
    }, nil
}

func main() {
    monitor := NewUrgentIssueMonitor()

    repos := []string{"my-org/api", "my-org/frontend", "my-org/backend"}
    result, err := monitor.Execute(context.Background(), repos)
    if err != nil {
        log.Fatalf("Monitor execution failed: %v", err)
    }

    fmt.Printf("Processed %d issues, sent %d alerts\n",
        result.IssuesProcessed, result.AlertsSent)
}
```
Let's walk through creating different types of agents. Common patterns include:

- A reporting agent that aggregates sales data from multiple sources and generates reports.
- A monitoring agent that watches system metrics and triggers alerts.
- A support agent that handles customer inquiries using AI and escalates when needed.

A realistic prompt for the first of these is sketched below.
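A prompt for the sales reporting agent might read as follows; the source, schedule, and channel names are illustrative:

```
@agentos create an agent that pulls completed orders from the PostgreSQL sales database
every Friday at 5pm, aggregates revenue by region, and posts a summary report to the
#sales-reports Slack channel
```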
Generated code is just a starting point. Here's how to customize it effectively.
Every generated agent includes a `config.yaml` that you can modify without touching code:

```yaml
# config.yaml - Customize your agent without code changes
agent:
  name: UrgentIssueMonitor
  version: "1.0.0"

schedule:
  cron: "*/15 * * * *"          # Adjust frequency
  timezone: "America/New_York"

sources:
  github:
    repos:
      - "my-org/api"
      - "my-org/frontend"       # Add more repos here
    labels:
      - "urgent"
      - "critical"              # Add more labels

thresholds:
  sentiment_score: 0.7          # Adjust sensitivity
  max_alerts_per_hour: 10       # Rate limiting

notifications:
  slack:
    channel: "#engineering"
    mention_on_critical: "@oncall"
```
1. **Add custom processing logic**

   ```python
   @kernel.middleware
   async def custom_enrichment(issue, next):
       """Add team assignment based on issue labels"""
       issue.assigned_team = determine_team(issue.labels)
       return await next(issue)
   ```
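   `determine_team` is not part of the generated code or the AgentOS SDK; it is a helper you supply. A minimal sketch, assuming a hard-coded label-to-team mapping:

   ```python
   # Hypothetical helper: map issue labels to an owning team.
   # The label names and team names below are assumptions for illustration.
   def determine_team(labels: list[str]) -> str:
       label_to_team = {
           "frontend": "web-team",
           "api": "platform-team",
           "database": "data-team",
       }
       for label in labels:
           if label in label_to_team:
               return label_to_team[label]
       return "triage"  # default when no label matches
   ```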
2. **Add new integrations**

   ```python
   # Add Jira integration for issue mirroring
   from agent_os.integrations import JiraClient

   jira = JiraClient(project="ENG")

   # In your agent function:
   await jira.create_issue(
       summary=f"[GitHub] {issue.title}",
       priority="High" if analysis.score > 0.8 else "Medium"
   )
   ```
3. **Customize policies**

   ```yaml
   # policy.yaml - Fine-tune security rules
   policies:
     rate_limiting:
       llm_calls_per_minute: 30      # Increase for high volume
       api_calls_per_minute: 100
     data_access:
       github:
         allowed_repos: ["my-org/*"]
         denied_actions: ["delete", "admin"]
     output_restrictions:
       slack:
         allowed_channels: ["#engineering", "#alerts"]
         require_approval_for: ["@channel", "@here"]
   ```
For complex agents, AgentOS uses multi-turn conversations to gather requirements and clarify ambiguities.
Use `--interactive` to always enable clarifying questions, or `--auto` to let AgentOS apply reasonable defaults without asking.
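An interactive session might look roughly like the following exchange; the wording is illustrative, not the extension's exact output:

```
You:     @agentos create an agent that processes customer data and emails a report --interactive
AgentOS: Which source holds the customer data? (e.g. a PostgreSQL database, a REST API, an S3 bucket)
You:     The PostgreSQL customers database, read-only.
AgentOS: Where should the report be sent, and on what schedule?
You:     Email it to team@company.com every weekday at 9am EST.
AgentOS: The data appears to include EU customers, so the gdpr policy will be recommended. Generate now?
```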
Write better descriptions to get better agents. Follow these guidelines; a combined example follows the list.

- **Be specific about data sources.** Good: "Pull open issues from the github.com/acme/api repository". Bad: "Get issues from GitHub".
- **Be specific about schedules.** Good: "Run every weekday at 9am EST". Bad: "Run regularly".
- **Be specific about destinations.** Good: "Send summary to #sales-reports Slack channel and email to team@company.com". Bad: "Notify the team".
- **Be specific about conditions.** Good: "Alert if response time exceeds 500ms for more than 5 minutes". Bad: "Alert if slow".
- **State compliance requirements.** Good: "Process EU customer data with GDPR compliance". Bad: "Process customer data".
- **Describe error handling.** Good: "Retry 3 times on failure, then alert DevOps". Bad: "Handle errors".
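Putting these guidelines together, a fully specified prompt might read like this (assembled from the "Good" examples above; the repository and channel names are illustrative):

```
@agentos create an agent that pulls open issues from the github.com/acme/api repository
every weekday at 9am EST, sends a summary to the #sales-reports Slack channel and an
email to team@company.com, and retries 3 times on failure before alerting DevOps
```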
Use these templates as starting points for common agent types:

```text
# Data Pipeline Agent
@agentos create an agent that:
- Pulls [DATA_TYPE] from [SOURCE] every [SCHEDULE]
- Transforms data by [OPERATIONS]
- Loads results into [DESTINATION]
- Alerts [TEAM/CHANNEL] on failures
```

```text
# Monitoring Agent
@agentos create an agent that:
- Monitors [METRICS] from [SOURCES]
- Triggers alerts when [CONDITIONS]
- Sends notifications to [DESTINATIONS]
- Runs every [INTERVAL]
```

```text
# Automation Agent
@agentos create an agent that:
- Watches for [TRIGGER_EVENT] in [SOURCE]
- Performs [ACTIONS] automatically
- Logs results to [DESTINATION]
- Escalates to [PERSON/TEAM] when [CONDITIONS]
```
Now that you know how to create agents from natural language, explore these related topics: