
AutoFlow: drag-and-drop workflow automation with ReactFlow

The core idea behind AutoFlow is simple: most automation tasks are just data moving from one service to another with some transformation in the middle. If you can represent that as a graph, you can build a UI for it.

The interesting problems are not in the idea. They are in the execution.

Canvas state vs execution state

ReactFlow gives you a canvas abstraction with nodes and edges. That is the visual layer. But you need a second data model for what actually runs when someone clicks “Execute.”

I made the mistake early on of using the ReactFlow node objects directly as the execution graph. This works until you need to serialize a workflow, validate it before running, or explain why a specific node failed. The ReactFlow state is UI state. The execution model needs to be its own thing.

The fix was a compiler step. When the user hits Execute, the canvas state gets compiled into a DAG of execution nodes. Each execution node has a type (http-request, transform, condition, output), its configuration, and pointers to its dependencies. The runner then does a topological sort and processes nodes in order, passing outputs downstream through a shared context object.

interface ExecutionNode {
  id: string;
  type: NodeType;
  config: Record<string, unknown>;
  dependencies: string[];
}

function compile(nodes: FlowNode[], edges: FlowEdge[]): ExecutionNode[] {
  // build adjacency, validate no cycles, return sorted execution plan
}
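For reference, here is one way that compile step could look as a sketch — Kahn's algorithm for the topological sort, with cycle detection falling out for free. The FlowNode/FlowEdge shapes are simplified stand-ins for ReactFlow's node and edge objects, and the types are restated so the snippet stands alone:

```typescript
type NodeType = "http-request" | "transform" | "condition" | "output";

// Simplified stand-ins for ReactFlow's objects; the real ones also carry
// position, dimensions, and other UI-only fields.
interface FlowNode { id: string; type: NodeType; data: Record<string, unknown>; }
interface FlowEdge { source: string; target: string; }

interface ExecutionNode {
  id: string;
  type: NodeType;
  config: Record<string, unknown>;
  dependencies: string[];
}

// Compile the canvas graph into a topologically sorted execution plan,
// rejecting cycles (Kahn's algorithm).
function compile(nodes: FlowNode[], edges: FlowEdge[]): ExecutionNode[] {
  const deps = new Map<string, string[]>(nodes.map((n) => [n.id, []]));
  const dependents = new Map<string, string[]>(nodes.map((n) => [n.id, []]));
  for (const e of edges) {
    deps.get(e.target)!.push(e.source);
    dependents.get(e.source)!.push(e.target);
  }

  // Peel off nodes with no unprocessed incoming edges, layer by layer.
  const indegree = new Map(nodes.map((n) => [n.id, deps.get(n.id)!.length]));
  const queue = nodes.filter((n) => indegree.get(n.id) === 0).map((n) => n.id);
  const order: string[] = [];
  while (queue.length > 0) {
    const id = queue.shift()!;
    order.push(id);
    for (const next of dependents.get(id)!) {
      indegree.set(next, indegree.get(next)! - 1);
      if (indegree.get(next) === 0) queue.push(next);
    }
  }
  // If anything is left over, the leftover nodes form a cycle.
  if (order.length !== nodes.length) throw new Error("workflow contains a cycle");

  const byId = new Map(nodes.map((n) => [n.id, n]));
  return order.map((id) => ({
    id,
    type: byId.get(id)!.type,
    config: byId.get(id)!.data,
    dependencies: deps.get(id)!,
  }));
}
```

Because the plan comes out topologically sorted, the runner never has to re-derive ordering; it only checks which dependencies have completed.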

Handling async nodes

HTTP request nodes are async. Transform nodes (JSON path extraction, string manipulation) are synchronous. Condition nodes branch the graph. The runner needed to handle all three cleanly.

I ended up with a simple promise-based executor that processes dependency-resolved nodes in parallel where possible:

async function execute(plan: ExecutionNode[], context: Context): Promise<void> {
  const completed = new Set<string>();

  while (completed.size < plan.length) {
    const ready = plan.filter(
      (n) => !completed.has(n.id) && n.dependencies.every((d) => completed.has(d))
    );
    // Guard against spinning forever if nothing is runnable (e.g. a
    // malformed plan that slipped past the compile step).
    if (ready.length === 0) {
      throw new Error("execution stalled: no runnable nodes remain");
    }
    await Promise.all(ready.map((n) => runNode(n, context)));
    ready.forEach((n) => completed.add(n.id));
  }
}

This naive loop is fine for small graphs. For larger workflows you would want a proper scheduler, but this handled everything I threw at it during testing.
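One step toward a proper scheduler, sketched under the same assumptions (topologically sorted plan, with runNode passed in explicitly so the snippet is self-contained): give each node its own promise that awaits only its direct dependencies, so a slow node delays its own descendants rather than the whole wave.

```typescript
interface ExecutionNode { id: string; dependencies: string[]; }
type Context = Record<string, unknown>;
type RunNode = (node: ExecutionNode, context: Context) => Promise<void>;

// Each node gets a promise that awaits its direct dependencies' promises,
// then runs. Assumes `plan` is topologically sorted, which the compile
// step guarantees, so every dependency's promise already exists.
async function executeEager(
  plan: ExecutionNode[],
  context: Context,
  runNode: RunNode
): Promise<void> {
  const done = new Map<string, Promise<void>>();
  for (const node of plan) {
    done.set(
      node.id,
      (async () => {
        await Promise.all(node.dependencies.map((d) => done.get(d)!));
        await runNode(node, context);
      })()
    );
  }
  await Promise.all(done.values());
}
```

The wave-based loop waits for the slowest node in each batch before starting the next; this version keeps every independent branch saturated.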

The transform node

The transform node is where I spent the most time. Users needed a way to extract a field from an API response and pass it to the next node. I did not want to build a custom expression language, so I settled on JSONPath with a preview panel that evaluated the expression live against the last response.

The preview was the right call. Watching your JSONPath expression update in real time against actual data is a much better debugging experience than submitting the workflow and reading error logs.
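The extraction itself is not the hard part. As an illustration, here is a toy evaluator for the dot/bracket subset of JSONPath — the real transform node would lean on a proper JSONPath library, but this is roughly what the preview panel has to do on every keystroke:

```typescript
// Evaluate a dot/bracket path like "$.data.items[0].id" against a JSON
// value. A toy subset of JSONPath, not a full implementation: no filters,
// wildcards, or recursive descent.
function extractPath(json: unknown, path: string): unknown {
  // Strip the leading "$." and split "items[0]" into "items" and "0".
  const segments = path
    .replace(/^\$\.?/, "")
    .split(/[.\[\]]+/)
    .filter(Boolean);
  return segments.reduce<unknown>((value, key) => {
    if (value === null || typeof value !== "object") return undefined;
    return (value as Record<string, unknown>)[key];
  }, json);
}
```

A missing field resolves to undefined rather than throwing, which is what you want for a live preview: the panel can show "no match" instead of an error while the user is mid-expression.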

Template variables

Nodes support {{variable}} interpolation in their config fields. The runner resolves these from the context object before executing each node. This means you can reference the output of an earlier node in a later node’s URL, body, or headers.

{
  "url": "https://api.example.com/users/{{fetch_user.data.id}}",
  "method": "POST"
}

The interpolation is deliberately shallow: no conditionals, no loops, no function calls. Keeping it simple meant I could implement it in an afternoon and users could learn it in a minute.
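Because the interpolation is this shallow, the whole resolver fits in one function. A sketch (the helper name is mine, not AutoFlow's): replace each placeholder with a dot-path lookup into the shared context of upstream node outputs.

```typescript
type Context = Record<string, unknown>;

// Replace each {{dot.path}} placeholder with the matching value from the
// context. No conditionals, loops, or function calls — just lookup.
function interpolate(template: string, context: Context): string {
  return template.replace(/\{\{\s*([\w.]+)\s*\}\}/g, (_, path: string) => {
    const value = path.split(".").reduce<unknown>(
      (v, key) =>
        v !== null && typeof v === "object"
          ? (v as Record<string, unknown>)[key]
          : undefined,
      context
    );
    // Unresolved paths become empty strings rather than errors.
    return value === undefined ? "" : String(value);
  });
}
```

So the example above resolves against a context where an upstream node's output lives under its node id: interpolate("…/users/{{fetch_user.data.id}}", { fetch_user: { data: { id: 42 } } }) yields the URL with 42 substituted in.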

Source: github.com/zraisan/autoflow