🔧 Workflow Designer
Build AI agent workflows visually with drag-and-drop. No coding required.
Open Workflow Designer
Ctrl+Shift+P → "Agent OS: Open Workflow Designer"
Node Types
| Node Type | Description |
|---|---|
| Action | Execute operations |
| Condition | Branch logic |
| Loop | Repeat actions |
| Parallel | Concurrent execution |
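Conceptually, these node types correspond to ordinary control flow in the code the designer exports. The sketch below is illustrative only, not the exporter's actual output; the helper functions (`fetch_record`, `double`) and the exact mapping are assumptions.

```python
import asyncio

async def fetch_record(i):
    # Action node: execute a single operation
    return {"id": i}

async def double(record):
    # Another Action node
    return record["id"] * 2

async def example():
    results = []
    for i in range(3):                          # Loop node: repeat actions
        record = await fetch_record(i)          # Action node
        if record["id"] % 2 == 0:               # Condition node: branch logic
            results.append(await double(record))
    # Parallel node: run independent actions concurrently
    extra = await asyncio.gather(fetch_record(10), fetch_record(11))
    return results + [r["id"] for r in extra]

if __name__ == "__main__":
    print(asyncio.run(example()))  # [0, 4, 10, 11]
```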
Action Types
When you place an Action node, configure its type in the properties panel:
| Type | Description |
|---|---|
| file_read | Read a file from disk |
| file_write | Write data to a file |
| http_request | Make HTTP API call |
| database_query | Query database |
| database_write | Write to database |
| llm_call | Call LLM API |
| send_email | Send email notification |
| code_execution | Execute code snippet |
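Each Action node stores its type together with type-specific parameters set in the properties panel. The saved-node schema isn't documented here, so the snippet below is only a hypothetical sketch; field names such as `kind`, `params`, `url`, and `method` are assumptions.

```python
# Hypothetical shape of a single saved Action node; the real schema may differ.
http_node = {
    "id": "fetch-items",
    "kind": "action",
    "type": "http_request",        # one of the action types listed above
    "params": {                    # type-specific properties from the panel
        "url": "https://api.example.com/items",
        "method": "GET",
    },
    "policy": "strict",            # optional per-node policy attachment
}
```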
Building a Workflow
1. Drag nodes from the left panel onto the canvas
2. Connect nodes by dragging from the output port (right) to the input port (left); see the sketch after this list
3. Configure nodes by clicking them and editing their properties
4. Attach policies to individual nodes for fine-grained control
5. Export code by selecting Python, TypeScript, or Go
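Taken together, the nodes you place and the connections you draw form a graph, which is what 💾 Save writes out as JSON. The structure below is a hypothetical sketch of that saved form, not the documented schema; `nodes`, `edges`, `from`, and `to` are assumed field names.

```python
import json

# Hypothetical saved-workflow structure: placed nodes plus the
# port-to-port connections drawn on the canvas. Field names are assumptions.
workflow = {
    "name": "data_processing_pipeline",
    "nodes": [
        {"id": "read",      "kind": "action", "type": "file_read"},
        {"id": "summarize", "kind": "action", "type": "llm_call"},
        {"id": "write",     "kind": "action", "type": "file_write", "policy": "strict"},
    ],
    "edges": [
        {"from": "read",      "to": "summarize"},
        {"from": "summarize", "to": "write"},
    ],
}

print(json.dumps(workflow, indent=2))
```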
Example: Data Processing Pipeline
Build a workflow that reads data, processes it with an LLM, and writes the output:
- Drag Action → Set type: file_read
- Drag Action → Set type: llm_call
- Drag Action → Set type: file_write
- Connect: file_read → llm_call → file_write
- Attach the "strict" policy to file_write
- Click "Export" → "Python"
Generated Python Code
from agent_os import KernelSpace, Policy
from agent_os.tools import create_safe_toolkit

kernel = KernelSpace(policy="strict")
toolkit = create_safe_toolkit("standard")

async def file_read(context):
    """Read input file"""
    return await toolkit.file.read(context["input_path"])

async def llm_call(context):
    """Process with LLM"""
    return await toolkit.llm.call(model="gpt-4", prompt=context["data"])

async def file_write(context):
    """Write output file"""
    # Policy: strict
    return await toolkit.file.write("/tmp/output.json", context["result"])

@kernel.register
async def run_workflow(task: str):
    context = {"task": task, "input_path": "/data/input.csv"}
    context["data"] = await file_read(context)
    context["result"] = await llm_call(context)
    await file_write(context)
    return {"status": "success"}
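How you run the exported module depends on your Agent OS setup; since the entry point is an async function, one plausible way to exercise it locally is sketched below, assuming `run_workflow` remains directly awaitable after `@kernel.register`. If the kernel wraps it differently, adjust accordingly.

```python
import asyncio

# Sketch only: appended to the exported module, assuming run_workflow
# can still be awaited directly after registration with the kernel.
if __name__ == "__main__":
    result = asyncio.run(run_workflow("Summarize the input CSV"))
    print(result)  # expected: {"status": "success"}
```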
Toolbar Actions
| Button | Action |
|---|---|
| ▶️ Simulate | Dry-run the workflow |
| 💾 Save | Save workflow to JSON |
| 📂 Load | Load existing workflow |
| 📤 Export | Generate code in selected language |