Agents in Workflows

Use AI agents as nodes inside visual workflows to handle open-ended reasoning steps.

Overview

A91I lets you embed AI agent logic directly inside a workflow using the AI Prompt node. While a dedicated "Agent Node" that calls a named agent is on the roadmap, the AI Prompt node already gives you agent-like reasoning — Claude processes the input data, executes a task described in the prompt, and produces structured output that flows into the next nodes.

For tasks that require multi-turn dialogue or interactive human input, use standalone agents (chat interface). For automated, event-driven pipelines that include an AI reasoning step, use the AI Prompt node in a workflow.

When to use each approach

Workflow AI Prompt node — one-shot AI reasoning inside an automated pipeline (e.g., classify a support ticket, summarize a document, extract structured data). No human in the loop.

Standalone Agent — interactive, multi-turn conversations where a human is chatting with the agent and the agent iteratively calls tools.

Using the AI Prompt Node

Adding an AI Prompt Node

1. Open the workflow editor

Open an existing workflow or create a new one.

2. Add the AI Prompt node

In the node library sidebar, search for AI Prompt and drag it onto the canvas. Connect it to an upstream node whose output you want the AI to process.

3. Configure the prompt

In the node configuration panel, write a prompt that describes what you want Claude to do. Reference upstream data using template variables like {{trigger.body.email}} or {{gmail_read.subject}}.

4. Set the model (optional)

Choose the Claude model. Defaults to claude-sonnet-4-6.

5. Connect the output

The AI Prompt node outputs a response field containing the model's text output. Use it in downstream nodes via the data mapper.
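For example, assuming the node's ID is ai_prompt (IDs vary per workflow), a downstream node could reference the text with the same template syntax used elsewhere:

{{ai_prompt.response}}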

Prompt Templates with Workflow Data

You can inject data from any previous node into the AI prompt using double-brace template syntax:

Classify the following support ticket into one of these categories:
- billing
- technical
- account
- feature-request
- other

Ticket subject: {{gmail_read.subject}}
Ticket body:
{{gmail_read.body}}

Respond with only the category name, lowercase.

The output (e.g., billing) can then be used in a Condition node to route the workflow to the right Jira project or Slack channel.
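Because the model can occasionally drift outside the listed categories, it helps to normalize the output before the Condition node branches on it. A minimal sketch of that guard logic, shown in Python purely for illustration (inside the workflow you would express this in a Transform node; the function name is ours):

```python
# Categories the prompt asks for; anything else falls back to "other".
ALLOWED = {"billing", "technical", "account", "feature-request", "other"}

def normalize_category(response: str) -> str:
    """Trim and lowercase the model's reply, defaulting to 'other'."""
    category = response.strip().lower()
    return category if category in ALLOWED else "other"
```

With this in place, the Condition node only ever sees one of the five expected values, so every branch is well-defined.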

Structured Output

For workflows that need the AI to return structured data (not just free text), instruct the model to respond in JSON and use the Transform node downstream to parse and extract specific fields.

Extract the following fields from the email below and return them as JSON:
- sender_name
- urgency (low, medium, high)
- action_required (true/false)
- summary (1 sentence)

Email:
{{gmail_read.body}}

Return only valid JSON, no other text.
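Even with the "only valid JSON" instruction, models sometimes wrap the reply in a markdown code fence, so the parsing step should tolerate both forms. A minimal sketch of the kind of parsing the downstream Transform node needs to do, in Python for illustration (function name is ours):

```python
import json

def parse_ai_json(response: str) -> dict:
    """Parse the model's JSON reply, tolerating an optional markdown fence."""
    text = response.strip()
    if text.startswith("`"):
        # Drop surrounding backticks and a leading "json" language tag.
        text = text.strip("`")
        if text.startswith("json"):
            text = text[4:]
    return json.loads(text)
```

If parsing fails despite this, the usual fix is tightening the prompt (e.g., restating "no other text") rather than adding more parsing heuristics.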

Common Agent-in-Workflow Patterns

Triage & Routing

  • Trigger: incoming email or form submission.
  • AI Prompt: classify the category and urgency.
  • Condition: branch based on the classification.
  • Action: create a Jira ticket in the appropriate project, notify the right Slack channel.

Summarization Pipeline

  • Trigger: schedule (e.g., daily at 9 AM).
  • Integration: fetch recent Slack messages or GitHub activity.
  • AI Prompt: summarize the key points into a digest.
  • Action: send the summary to a Slack channel or email list.

Data Extraction

  • Trigger: webhook from a document upload or form.
  • AI Prompt: extract structured fields (name, amount, date, etc.) from unstructured text.
  • Transform: parse the JSON output.
  • Action: write the structured data to Google Sheets or Airtable.

Token Usage in Workflows

Each AI Prompt node execution consumes tokens from your plan's AI budget — the same pool shared with standalone agent conversations. On the Builder (free) plan, this is capped at 1M tokens per month. On Team and above, AI is billed at cost with no monthly cap.

The execution detail view shows AI token consumption and estimated cost per node, so you can identify which steps are most expensive and optimize your prompts accordingly.

Reduce token costs

Trim upstream data before passing it to the AI Prompt node. Use a Transform node to extract only the relevant fields rather than sending entire API responses to the model.
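As an illustration of the kind of trimming a Transform node performs, this sketch (Python, field and function names ours) keeps only the two fields a triage prompt needs and truncates long bodies before they reach the model:

```python
def trim_for_prompt(email: dict, max_body_chars: int = 2000) -> dict:
    """Keep only the prompt-relevant fields; truncate oversized bodies."""
    return {
        "subject": email.get("subject", ""),
        "body": email.get("body", "")[:max_body_chars],
    }
```

Passing this trimmed object instead of the raw API response can cut the input tokens per execution substantially, since full email payloads often include headers, HTML, and thread history the prompt never uses.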