After building a webhook delivery platform in Day 8, I found myself asking questions that led me down a rabbit hole. While looking at Recipient List, I realized I had overlooked "Routing Slip" and, really, what the heck is a "routing slip" anyway? I had planned to cover Process Manager and Saga next, but digging into Routing Slip gave me a really interesting insight: it structures your messaging topology differently, in a way that naturally introduces a kind of Process Manager on its own. So let's save the in-depth exploration of Process Manager and Saga for another day and dig into Routing Slip!

Routing Slip: The Message Knows Where to Go

The Routing Slip pattern confused me at first, but the value clicked once I saw it in action. The idea: a message carries its own itinerary, a list of processing steps it needs to pass through.

Think of it like a FedEx package with multiple delivery stops. The package has a slip attached: “First go to Memphis hub, then Chicago sorting, then final delivery.” Each stop reads the slip, does its job, and sends it to the next destination. The package doesn’t care how it gets routed; it just carries the instructions.

Why is this cool? Because different messages can take different paths without changing the routing logic. A simple order might skip enrichment; a complex international order might add customs validation. You could choose which processors a message will pass through based on message content or business rules.

How it works: if you recall, in Day 2 we explored Message Endpoints and Messaging Gateways and how they can be utilized to abstract away the sending and receiving of messages. In this particular case, we design the publish endpoint so that each consumer pops itself off the routing slip and publishes the message to the next destination in the list. This could be designed however you see fit: maybe it's a Kafka topic, an SQS queue, a RabbitMQ exchange, whatever. The idea is that every consumer shares some logic for using the routing slip to forward the message to the next destination.
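As a rough sketch of that shared logic (module and function names here are illustrative, not from any real codebase), the core of the pattern boils down to popping the head of the slip and dispatching to whatever destination remains. The `dispatch_fn` stands in for the transport, whether that's a Kafka producer, an SQS client, or a GenServer cast:

```elixir
defmodule RoutingSlip do
  @moduledoc """
  Minimal sketch of the shared forwarding logic. A message is a map
  with a :routing_slip (list of destination names) and a :payload.
  `dispatch_fn` abstracts the transport.
  """

  def forward(%{routing_slip: [current | rest]} = message, dispatch_fn) do
    # Pop the current node off the slip before forwarding.
    updated = %{message | routing_slip: rest}

    case rest do
      [next | _] ->
        # The slip tells us where to go next; `current` just finished.
        dispatch_fn.(next, updated)
        {:forwarded, current, next}

      [] ->
        # Nothing left on the slip: the journey is complete.
        {:completed, current}
    end
  end
end
```

Nothing in this function knows the overall topology; the itinerary lives entirely in the message.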

Try It Yourself: Interactive Routing Slip Demo

To understand this better, I built a Phoenix LiveView application that demonstrates the Routing Slip pattern with real-time visualization. You can register GenServer nodes as workers dynamically with custom names, build a routing slip by selecting those nodes, fill out a payload, and click "Send Message" to watch the journey unfold in real time.

The demo shows each node:

  • Adding itself to the “visited” list with a step number
  • Popping itself from the routing slip
  • Forwarding to the next destination
  • Broadcasting updates via PubSub for live UI updates

Try it out: The demo is available at https://github.com/jamescarr/routing-slip-example. Clone the repo, run mix phx.server, and visit /routing-slip to see it in action. The README has full setup instructions and an architecture diagram showing how the components fit together.

The implementation uses:

  • GenServer processes for each routing node (dynamically spawned with custom names)
  • Registry for dynamic node discovery by name
  • DynamicSupervisor for spawning nodes on demand
  • Phoenix.PubSub for real-time updates to the UI
  • LiveView for the interactive interface with real-time message tracking

Here’s the actual Node GenServer from the demo. Each node is spawned dynamically with a user-provided name:

defmodule RoutingExamples.RoutingSlip.Node do
  @moduledoc """
  A GenServer that represents a node in the routing slip pattern.
  Each node receives messages, processes them, and forwards to the next destination.
  """
  use GenServer
  
  alias Phoenix.PubSub

  @pubsub RoutingExamples.PubSub
  @topic "routing_slip:updates"
  @forward_delay_ms 500

  # Client API - nodes are registered via Registry with their name
  def start_link(name) when is_binary(name) do
    GenServer.start_link(__MODULE__, name, name: via_tuple(name))
  end

  def process_message(node_name, message) do
    GenServer.cast(via_tuple(node_name), {:process_message, message})
  end

  def via_tuple(name) do
    {:via, Registry, {RoutingExamples.RoutingSlip.NodeRegistry, name}}
  end

  # Server callbacks
  @impl true
  def init(name) do
    state = %{name: name, messages_processed: 0, created_at: DateTime.utc_now()}
    PubSub.broadcast(@pubsub, @topic, {:node_created, name})
    {:ok, state}
  end

  @impl true
  def handle_cast({:process_message, message}, state) do
    %{routing_slip: routing_slip, visited: visited, payload: payload, id: message_id} = message
    
    # Add ourselves to visited with the current step number
    step_number = length(visited) + 1
    new_visited = visited ++ [{state.name, step_number, DateTime.utc_now()}]

    # Pop ourselves from the routing slip
    [_current | remaining_slip] = routing_slip

    updated_message = %{
      id: message_id,
      payload: payload,
      routing_slip: remaining_slip,
      visited: new_visited
    }

    new_state = %{state | messages_processed: state.messages_processed + 1}

    # Broadcast that we processed this message (for real-time UI updates)
    PubSub.broadcast(@pubsub, @topic, {:message_processed, state.name, updated_message})

    # Forward to next destination if there is one
    case remaining_slip do
      [next_destination | _rest] ->
        Process.send_after(self(), {:forward_message, next_destination, updated_message}, @forward_delay_ms)
      [] ->
        PubSub.broadcast(@pubsub, @topic, {:message_completed, updated_message})
    end

    {:noreply, new_state}
  end
  
  @impl true
  def handle_info({:forward_message, next_destination, message}, state) do
    process_message(next_destination, message)
    {:noreply, state}
  end
end

Nodes are created dynamically via DynamicSupervisor and discovered through Registry; you can see the full context module and supervisor setup in the repository.
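For context, here is a plausible shape for that wiring, inferred from the Node module above (the supervisor name is my guess; the actual setup is in the repo). `use GenServer` gives `Node` a default `child_spec/1`, so the DynamicSupervisor can start it from a `{module, arg}` tuple:

```elixir
# In the application's supervision tree (sketch; names may differ from the repo):
children = [
  {Registry, keys: :unique, name: RoutingExamples.RoutingSlip.NodeRegistry},
  {DynamicSupervisor, name: RoutingExamples.RoutingSlip.NodeSupervisor, strategy: :one_for_one}
]

# When the user registers a node in the UI, spawn it on demand:
DynamicSupervisor.start_child(
  RoutingExamples.RoutingSlip.NodeSupervisor,
  {RoutingExamples.RoutingSlip.Node, "enrichment"}
)

# Any process can then reach the node by name through the Registry via-tuple:
RoutingExamples.RoutingSlip.Node.process_message("enrichment", message)
```

The Registry is what makes the slip's string names resolvable to live processes without any node holding references to the others.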

What is interesting with this approach is that each node only knows about itself and the next destination on the slip. It reads its name from the slip, does its work (adding itself to “visited”), and routes to whatever comes next. No node needs to know what other nodes exist in the system.

With a Message Broker

This same pattern can be leveraged regardless of what messaging infrastructure you are using. To take this example further, I decided to implement this same routing slip application using RabbitMQ, as diagrammed below.

The diagram shows the different flows through the system:

  • Blue lines: Creation activities (declaring queues, binding them to exchanges, starting consumers)
  • Green lines: User-initiated message flow (submitting a message with its routing slip)
  • Purple lines: Message flow between consumers (each node processing and forwarding to the next)
  • Orange lines: “Exhaust” events (consumers broadcasting UI updates back to the LiveView)

In this configuration, the Phoenix Application is responsible for two concerns: defining the message processors and sending messages with a user-configured routing slip. During the creation phase (blue), it creates new nodes by declaring queues (named node.xxx), binding them to a topic exchange based on routing key, then starts a new consumer to consume messages from that queue.

The interface allows users to select destinations and submit a message, which the Messenger.RabbitMQ module publishes to the routing_slip.messages topic exchange with the first routing key in the routing slip. The message flows to the designated queue and is picked up by its consumer, which (for simplicity) just adds itself to the message history, pops itself from the routing slip, waits 500ms, and then publishes to routing_slip.messages with the next destination as the routing key.
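Using the amqp Hex package, the consumer-side step described above might look roughly like this. This is my sketch, not the repo's actual module; I've split the pure slip manipulation from the publish so the two concerns are visible:

```elixir
defmodule Messenger.RabbitMQ.Forwarder do
  # Sketch of a consumer's forwarding step (illustrative module name).
  # Assumes `channel` is an open AMQP.Channel and the message map
  # carries :routing_slip, :visited, and :payload.
  @exchange "routing_slip.messages"

  # Pure step: record the visit and pop this node off the slip.
  def advance(%{routing_slip: [_current | rest], visited: visited} = message, node_name) do
    %{message | visited: visited ++ [node_name], routing_slip: rest}
  end

  # Side-effecting step: publish to the topic exchange with the next
  # destination as the routing key, so RabbitMQ delivers to node.<next>.
  def forward(channel, %{routing_slip: [next | _]} = message) do
    AMQP.Basic.publish(channel, @exchange, next, Jason.encode!(message))
  end

  def forward(_channel, %{routing_slip: []}), do: :completed
end
```

The `advance/2` half is identical in spirit to what the GenServer nodes did; only `forward/2` changes when you swap transports.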

In our pure GenServer setup, all the action happened within the Phoenix application, so it was easy to do a direct LiveView update. With this topology, consumers need a way to get status updates back to the UI. This is where the distinction between Messages and Events becomes useful:

  • Messages (purple lines) carry the routing slip payload between nodes. They’re the work flowing through the system.
  • Events (orange lines) are notifications about what happened. They don’t carry the work; they announce “node X just processed message Y.”

So we have an EventsListener that consumes an exclusive queue connected to routing_slip.events. Consumers publish events to this fanout exchange whenever they process a message, which the listener picks up and broadcasts to the LiveView subscriber. The events are “exhaust” from the message processing pipeline, informing the UI without affecting the workflow itself.
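A minimal sketch of that listener, assuming the amqp package and the exchange and topic names described above (the event tuple shape is mine for illustration):

```elixir
defmodule RoutingExamples.EventsListener do
  use GenServer

  # Consumes "exhaust" events from the routing_slip.events fanout
  # exchange on an exclusive queue and rebroadcasts them to the
  # LiveView via Phoenix.PubSub.

  def start_link(channel), do: GenServer.start_link(__MODULE__, channel)

  @impl true
  def init(channel) do
    # Server-named exclusive queue, deleted when this listener dies.
    {:ok, %{queue: queue}} = AMQP.Queue.declare(channel, "", exclusive: true)
    :ok = AMQP.Queue.bind(channel, queue, "routing_slip.events")
    {:ok, _consumer_tag} = AMQP.Basic.consume(channel, queue, nil, no_ack: true)
    {:ok, channel}
  end

  @impl true
  def handle_info({:basic_deliver, payload, _meta}, channel) do
    event = Jason.decode!(payload)
    Phoenix.PubSub.broadcast(RoutingExamples.PubSub, "routing_slip:updates", {:node_event, event})
    {:noreply, channel}
  end

  # Ignore :basic_consume_ok and other broker notifications.
  def handle_info(_other, channel), do: {:noreply, channel}
end
```

Because the queue is exclusive and server-named, each running Phoenix instance gets its own copy of every event, which is exactly what you want for UI fan-out.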

The Messenger.RabbitMQ module acts as a kind of Messaging Gateway by encapsulating the RabbitMQ-specific logic, and a Message Endpoint since it encapsulates both sending and receiving logic from the application. By leveraging this approach, nothing stops us from creating Messenger.SQS, Messenger.Kafka or Messenger.NATS as gateways that we could just drop into the application and use.
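One way to formalize that drop-in property (my sketch, not the repo's actual interface) is an Elixir behaviour that each transport implements, with the active implementation resolved from config:

```elixir
defmodule Messenger do
  @moduledoc """
  Gateway contract: anything that can deliver a routing-slip message
  to a named destination can act as the transport.
  """
  @callback send_message(destination :: String.t(), message :: map()) ::
              :ok | {:error, term()}

  # Resolve the implementation from config, e.g.
  #   config :routing_examples, :messenger, Messenger.RabbitMQ
  def impl, do: Application.get_env(:routing_examples, :messenger, Messenger.RabbitMQ)

  # The application only ever calls this; swapping transports is a
  # one-line config change.
  def send_message(destination, message), do: impl().send_message(destination, message)
end
```

A hypothetical Messenger.SQS or Messenger.Kafka would just `@behaviour Messenger` and implement `send_message/2` against its own client library.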

You can check out the code examples and run them on GitHub under the demo-with-broker tag.

When Routing Slip Shines

The pattern works well when you have:

  • Variable processing paths based on message content
  • Dynamic step insertion (add fraud check for high-value orders)
  • Audit requirements (the slip’s history shows exactly what happened)
  • Loosely coupled producers that don’t need to know about consumers

Orchestration vs Choreography

Building this routing slip demo highlighted an important architectural distinction I keep coming back to: orchestration vs choreography.

I’ve built systems in the past where we fired off a message, a consumer consumed it and published another message for another processor, which in turn published a message at the end for yet another consumer. Each worker knew where to send its output. That’s choreography: services react to events and publish their own, with no central coordinator. The workflow emerges from independent services doing their thing.

The problem? When we needed to add a new step or change the order, we had to touch multiple workers. Each one had coupling baked in: knowledge of who comes next. Debugging was a nightmare of correlation IDs and log aggregation.

Orchestration flips this around. One component (the orchestrator) knows the workflow and tells each service what to do. Services become simpler: they just do their job and report back.

Routing Slip is an interesting middle ground. The initial producer acts as the orchestrator by defining the message’s itinerary upfront. But after that, there’s no central coordinator. Each node just reads the slip and passes the message along. You get orchestration’s visibility with choreography’s decentralization.

| Aspect           | Routing Slip (Orchestration)    | Event Choreography                 |
|------------------|---------------------------------|------------------------------------|
| Visibility       | Clear path, easy to debug       | Emergent behavior, harder to trace |
| Coupling         | Producer knows steps            | Each service knows its downstream  |
| Adding steps     | Change producer or use a router | Update multiple workers            |
| Ordering         | Guaranteed sequential           | Must be designed carefully         |
| Failure handling | Slip tracks what ran            | Need correlation IDs               |

My take: Use Routing Slip when you need guaranteed ordering and clear visibility into multi-step processing. Use choreography when services should be truly independent and you have good observability tooling.

In practice, I could see using a hybrid: a lightweight orchestrator generates the Routing Slip based on message content, keeping the workflow knowledge in one place rather than scattered across producers.
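That hybrid might look something like this: one function that owns all the workflow knowledge and emits a slip based on message content (the step names and order fields are made up for illustration):

```elixir
defmodule SlipBuilder do
  # Lightweight orchestrator: all workflow knowledge lives here.
  # Downstream nodes just follow the slip; adding or reordering a
  # step means changing this one function, not N workers.
  def build(%{type: :order} = order) do
    ["validate"]
    |> maybe_add("fraud_check", order.total > 10_000)
    |> maybe_add("customs", order.international?)
    |> Kernel.++(["fulfill", "notify"])
  end

  defp maybe_add(slip, step, true), do: slip ++ [step]
  defp maybe_add(slip, _step, false), do: slip
end
```

A cheap domestic order gets a three-stop slip, while an expensive international one picks up fraud and customs checks, without any consumer changing.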

If you want to dig deeper into this topic, Bernd Rücker's talk "Balancing Choreography and Orchestration" (GOTO 2020) is excellent. While the talk focuses on Sagas, he spends significant time unpacking orchestration vs choreography, making a compelling case that choreography can sometimes increase coupling when done poorly. Worth the watch.

For an academic treatment, the paper Microservices Orchestration vs. Choreography: A Decision Framework by Megargel, Poskitt, and Shankararaman provides a systematic decision framework with case studies from Danske Bank and Netflix.

A Simple Form of Process Manager

Here’s what surprised me about this exercise: what we’ve built is actually a lightweight example of the Process Manager pattern.

Think about it: the routing slip maintains state (visited nodes, remaining destinations) and coordinates the workflow. But unlike a traditional Process Manager that lives as a central service, our “manager” is the message itself. Each node simply reads “what’s next?” from the slip, does its work, and passes it along. The workflow intelligence travels with the data.

But Process Manager can be so much more. What happens when:

  • A workflow needs to wait hours for an external response?
  • You need to handle complex branching based on accumulated state?
  • Failures require undoing previous steps (not just stopping)?

These questions lead us to two powerful related patterns: Saga (for compensation and rollback) and the full Process Manager (for stateful, long-running workflows). I’ll explore both in the next post.

What I Learned

The Routing Slip pattern was a genuine discovery for me. I’d been hardcoding message paths for years, dealing with the maintenance headaches that come with it, without realizing there was a pattern for making them dynamic.

Building both the GenServer and RabbitMQ implementations really drove home how the pattern stays the same regardless of transport. The Messenger abstraction let me swap implementations with a config change. That’s the power of good interfaces. The pattern is about how messages flow, not what carries them.

What’s Next

Now that we understand how messages can carry their own routing, the natural questions are: What happens when things go wrong? How do you undo distributed work? Who tracks state across long-running workflows?

Next up: Saga and Process Manager, patterns for when your workflows need memory, compensation, and coordination over time.

See you next time! 🎄


This post is part of the Advent of Enterprise Integration Patterns series. I’m learning these patterns as I write about them, so if you spot something off, let me know! All patterns referenced are from the classic book Enterprise Integration Patterns by Gregor Hohpe and Bobby Woolf.

A note on AI usage: I used Claude as a writing assistant for this series, particularly for generating code samples that illustrate the patterns. The patterns, architectural insights, and real-world experiences are mine. I believe in transparency about these tools.
