Workflows Tutorial
Goal: Open Dify, verify seeded workflows, and run one through chat.
Prerequisites: Quickstart completed. Dify is running and has an OpenAI-compatible provider pointed at the AI-in-a-Box inference router.
Time: About 10 minutes.
1. Open Dify
Open http://localhost:3001. The default admin email is
admin@aibox.local; the password is generated into deploy/.env by
make bootstrap as DIFY_ADMIN_PASSWORD.
2. Check the Provider
In Settings > Model Provider, verify that the OpenAI-compatible provider is configured with:
| Field | Value |
|---|---|
| Endpoint URL | http://inference-router:8004/v1 |
| API key | Any non-empty value |
| Model | openai/gpt-5.4-nano for the seeded workflows |
| Mode | Chat |
This keeps Dify traffic inside the inference router so usage, routing, and observability stay consistent.
3. Review Seeded Workflows
Go to Studio and look for:
- Executive Summary
- Email Reply Drafter
- Meeting → Action Items
- Code Reviewer
- SQL Query Explainer
- Changelog Generator
- User Story Expander
- Incident Root Cause (5 Whys)
- Email Triage + Reply
- CSV Analyst
- Structured Extractor
If they are missing, check the dify-seed container logs or run the seed script
with the Dify admin password from deploy/.env.
4. Test a Workflow in Dify
Open Executive Summary, click Test Run, paste a paragraph, and run it. The LLM node should call the inference router through the OpenAI-compatible plugin.
5. Run It From Chat
Open http://localhost and ask:
Run the Executive Summary workflow on this paragraph: ...
When DIFY_URL and a workflow API key are configured, the agent runtime exposes
the run_workflow tool. The tool call timeline shows the workflow name, inputs,
and returned output.
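Under the hood, the run_workflow tool calls Dify's workflow API. A minimal sketch of making the same call directly, assuming DIFY_URL and a workflow API key are set as described above; the input variable name "input" is an assumption, so check your workflow's variable names in Studio:

```python
import json
import os
import urllib.request

# DIFY_URL and the workflow API key mirror the agent-runtime configuration.
dify_url = os.environ.get("DIFY_URL", "http://localhost:3001")
api_key = os.environ.get("DIFY_WORKFLOW_API_KEY", "your-workflow-api-key")
body = {
    # "input" is a hypothetical variable name -- match your workflow's inputs.
    "inputs": {"input": "Paste the paragraph to summarize here."},
    "response_mode": "blocking",
    "user": "workflows-tutorial",
}
req = urllib.request.Request(
    dify_url + "/v1/workflows/run",
    data=json.dumps(body).encode(),
    headers={
        "Authorization": f"Bearer {api_key}",
        "Content-Type": "application/json",
    },
)
try:
    with urllib.request.urlopen(req, timeout=60) as resp:
        print(json.load(resp)["data"]["outputs"])
except OSError as exc:
    print(f"workflow call failed: {exc}")
```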
Workflows vs. Subagents
Dify workflows are best for predictable pipelines that operators can inspect in a visual editor. Subagents are better suited to open-ended reasoning that runs independently of a fixed pipeline. See Agents Reference for the delegation model.
Next Steps
Read the Dify Workflows Reference for runtime wiring, configuration, and troubleshooting.