Over the past eleven posts, we’ve explored Enterprise Integration Patterns from the ground up: building message channels, routing messages with Content-Based Routers and Recipient Lists, transforming data, managing workflows with Process Managers, and observing systems with Wire Taps. We wrote a lot of code. While I lost track of my original intent to build an open source audit logging system as part of this series, I feel pretty happy about what did get built.
But here’s the thing: you don’t always have to build these patterns from scratch. Frameworks exist that have implemented these patterns for decades. Today we’ll explore two of them: Apache Camel, the veteran of enterprise integration, and Redpanda Connect (formerly Benthos), a modern data streaming toolkit.
## Why Use a Framework?
Building integration patterns by hand teaches you how they work. But in production, frameworks offer:
- Battle-tested implementations: Edge cases you haven’t thought of? They have.
- Declarative configuration: Express routing logic in DSLs or YAML instead of imperative code
- Ecosystem connectors: Hundreds of pre-built integrations with databases, queues, APIs, and cloud services
- Observability built-in: Metrics, tracing, and logging out of the box
The trade-off is abstraction. You gain velocity but lose some control. For many integration problems, that’s the right trade.
## Apache Camel: The OG Integration Framework
Apache Camel has been around since 2007. It literally implements the patterns from the EIP book. The creators worked directly with Gregor Hohpe, and the framework’s DSL mirrors the book’s vocabulary.
I actually wrote about Camel in a past life back in 2010, demonstrating how it could hide middleware concerns from your domain code. Sixteen years later, the core ideas still hold up. The framework has evolved, but the pattern-first approach remains the same.
### Patterns as First-Class Citizens

Remember building a Content-Based Router by hand? In Camel:

```java
from("kafka:orders")
    .choice()
        .when(jsonpath("$[?(@.priority == 'high')]"))
            .to("kafka:high-priority-orders")
        .when(jsonpath("$[?(@.priority == 'standard')]"))
            .to("kafka:standard-orders")
        .otherwise()
            .to("kafka:bulk-orders");
```
The DSL reads like the pattern description. No need to wire up consumers, manage offsets, or handle serialization. Camel handles it.
### Recipient List

Our Recipient List implementation required tracking recipients and routing dynamically. In Camel:

```java
from("kafka:notifications")
    .recipientList(method(recipientResolver, "resolveRecipients"))
    .parallelProcessing();
```
The recipientList EIP is built in. You provide a method that returns the list of destinations, and Camel handles the fan-out.
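The resolver is just a bean method returning endpoint URIs. Here's a minimal sketch; the bean shape, the channel names, and the map-based message body are all assumptions for illustration:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Map;

// Hypothetical resolver bean: Camel invokes it via method(recipientResolver, "resolveRecipients")
// and fans the message out to every endpoint URI in the returned list.
class RecipientResolver {
    public List<String> resolveRecipients(Map<String, Object> notification) {
        List<String> endpoints = new ArrayList<>();
        endpoints.add("kafka:notification-log"); // every notification is logged
        Object channels = notification.get("channels");
        if (channels instanceof List) {
            for (Object channel : (List<?>) channels) {
                if ("email".equals(channel)) endpoints.add("smtp:mail-gateway");
                if ("sms".equals(channel)) endpoints.add("kafka:sms-dispatch");
            }
        }
        return endpoints;
    }
}
```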
### Wire Tap

Wire Tap is a single line:

```java
from("kafka:orders")
    .wireTap("aws2-sqs:order-audit-queue")
    .to("direct:processOrder");
```
Every message gets copied to an SQS queue for auditing without affecting the main flow. Notice how easily Camel bridges different messaging systems.
### Routing Slip

The Routing Slip pattern we built with headers? Native in Camel:

```java
from("direct:start")
    .routingSlip(header("routingSlip"));
```
Camel reads the slip from the header and routes through each endpoint in sequence.
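Mechanically it's the same as our hand-rolled version: split a delimited header and visit each endpoint in turn (Camel's default URI delimiter is a comma). A rough sketch of that loop, with the actual dispatch reduced to collecting the visit order:

```java
import java.util.ArrayList;
import java.util.List;

// Sketch of routing-slip mechanics: the message carries its itinerary in a header,
// and the router walks the comma-delimited endpoint list in order.
class RoutingSlipRouter {
    static List<String> route(String slipHeader) {
        List<String> visited = new ArrayList<>();
        for (String endpoint : slipHeader.split(",")) {
            visited.add(endpoint.trim()); // a real router would send the message to this endpoint
        }
        return visited;
    }
}
```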
### Dead Letter Channel

Error handling with Dead Letter Channels:

```java
errorHandler(deadLetterChannel("kafka:dead-letters")
    .maximumRedeliveries(3)
    .redeliveryDelay(1000)
    .useExponentialBackOff());

from("kafka:orders")
    .process(orderProcessor);
```
Retries, backoff, and dead lettering configured declaratively.
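With `useExponentialBackOff()`, the wait between attempts grows by a multiplier (Camel's default is 2) starting from `redeliveryDelay`. A sketch of the arithmetic, not Camel's actual scheduler:

```java
// Delay before redelivery attempt n (1-based). With redeliveryDelay(1000) and the
// default backOffMultiplier of 2, the three retries wait 1000ms, 2000ms, then 4000ms.
class BackoffSchedule {
    static long delayForAttempt(int attempt, long initialDelayMs, double multiplier) {
        return (long) (initialDelayMs * Math.pow(multiplier, attempt - 1));
    }
}
```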
### The Camel Ecosystem
Camel supports 300+ components:
- Messaging: Kafka, RabbitMQ, ActiveMQ, AWS SQS, Google Pub/Sub
- Databases: JDBC, MongoDB, Cassandra, Redis
- Cloud: AWS (S3, Lambda, DynamoDB), GCP, Azure
- APIs: HTTP, REST, GraphQL, gRPC
- Files: FTP, SFTP, local filesystem
You can run Camel standalone, embedded in Spring Boot, or on Kubernetes with Camel K.
## Redpanda Connect: Modern Data Pipelines
Redpanda Connect (formerly Benthos) takes a different approach. Where Camel is a Java framework with a programmatic DSL, Redpanda Connect is a standalone binary configured entirely with YAML.
### Declarative Pipelines

A basic pipeline that reads from Kafka, transforms, and writes to S3:

```yaml
input:
  kafka:
    addresses: ["localhost:9092"]
    topics: ["orders"]
    consumer_group: "order-archiver"

pipeline:
  processors:
    - mapping: |
        root = this
        root.processed_at = now()
        root.source = "order-system"

output:
  aws_s3:
    bucket: "order-archive"
    path: "orders/${!timestamp_unix()}.json"
```
No code. Just configuration. Run it with `rpk connect run pipeline.yaml`.
### Content-Based Routing

Route messages to different topics based on content, using a `switch` output:

```yaml
input:
  kafka:
    addresses: ["localhost:9092"]
    topics: ["events"]

output:
  switch:
    cases:
      - check: this.type == "order"
        output:
          kafka:
            addresses: ["localhost:9092"]
            topic: "orders"
      - check: this.type == "notification"
        output:
          kafka:
            addresses: ["localhost:9092"]
            topic: "notifications"
```
### Message Transformation

Remember our message transformation patterns? Redpanda Connect uses Bloblang, a powerful mapping language:

```yaml
pipeline:
  processors:
    - mapping: |
        # Content Enricher: add metadata
        root = this
        root.enriched = true
        root.region = env("AWS_REGION")

        # Content Filter: remove sensitive fields
        root.customer.ssn = deleted()
        root.customer.credit_card = deleted()

        # Claim Check: extract large payload
        root.payload_ref = uuid_v4()
        root.payload = deleted()
```
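If Bloblang is new to you, here's roughly what that mapping does, sketched as plain Java over a nested map. The field names mirror the mapping; the random UUID stands in for `uuid_v4()`:

```java
import java.util.HashMap;
import java.util.Map;
import java.util.UUID;

// Approximates the Bloblang mapping: enrich with metadata, strip sensitive fields,
// and replace the large payload with a claim-check reference.
class OrderMapper {
    @SuppressWarnings("unchecked")
    static Map<String, Object> map(Map<String, Object> in, String region) {
        Map<String, Object> root = new HashMap<>(in);
        root.put("enriched", true);                            // Content Enricher
        root.put("region", region);
        Object customer = root.get("customer");
        if (customer instanceof Map) {
            Map<String, Object> c = new HashMap<>((Map<String, Object>) customer);
            c.remove("ssn");                                   // Content Filter
            c.remove("credit_card");
            root.put("customer", c);
        }
        root.put("payload_ref", UUID.randomUUID().toString()); // Claim Check
        root.remove("payload");
        return root;
    }
}
```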
### Wire Tap with Redpanda Connect

Fan out to multiple outputs:

```yaml
input:
  kafka:
    addresses: ["localhost:9092"]
    topics: ["orders"]

output:
  broker:
    outputs:
      # Primary: process orders
      - kafka:
          addresses: ["localhost:9092"]
          topic: "order-processing"
      # Wire Tap: audit log
      - aws_s3:
          bucket: "audit-logs"
          path: "orders/${!timestamp_unix()}.json"
      # Wire Tap: analytics
      - http_client:
          url: "https://analytics.example.com/ingest"
          verb: POST
```
### Dead Letter Handling

Wrap risky processors in a `try`, tag failures in a `catch`, and divert tagged messages to a dead-letter topic:

```yaml
input:
  kafka:
    addresses: ["localhost:9092"]
    topics: ["orders"]

pipeline:
  processors:
    - try:
        - http:
            url: "https://api.example.com/process"
            verb: POST
    - catch:
        - mapping: |
            root = this
            root.error = error()
            root.failed_at = now()

output:
  switch:
    cases:
      - check: this.exists("failed_at")
        output:
          kafka:
            addresses: ["localhost:9092"]
            topic: "dead-letters"
      - output:
          kafka:
            addresses: ["localhost:9092"]
            topic: "orders-processed"
```

Messages that fail the HTTP call are annotated with the error and routed to `dead-letters`; everything else continues on (the `orders-processed` success topic is illustrative).
### The Redpanda Connect Ecosystem
Redpanda Connect includes 200+ connectors:
- Inputs: Kafka, AMQP, AWS (SQS, S3, Kinesis), GCP Pub/Sub, NATS, MQTT, HTTP, WebSocket
- Outputs: All of the above plus Elasticsearch, ClickHouse, Snowflake, BigQuery
- Processors: Mapping, filtering, batching, deduplication, rate limiting, caching
## Comparing the Two
| Aspect | Apache Camel | Redpanda Connect |
|---|---|---|
| Language | Java (with Kotlin, Groovy, XML) | YAML + Bloblang |
| Deployment | JVM app, Spring Boot, Camel K | Single binary, Docker |
| Learning curve | Steeper (Java ecosystem) | Gentler (just YAML) |
| Flexibility | Extremely high (full Java) | High (processors, Bloblang) |
| EIP coverage | Complete (designed for it) | Partial (data-focused) |
| Use case | Complex enterprise integration | Data pipelines, ETL |
Choose Camel when:

- You need the full EIP pattern catalog
- You’re in a Java/JVM shop
- You need complex orchestration with Process Managers
- You need custom processors written in code

Choose Redpanda Connect when:

- You want zero-code data pipelines
- You want quick prototyping and iteration
- You’re moving data between systems (ETL)
- You prefer declarative configuration
## Other Frameworks Worth Knowing
A few other tools also implement EIP patterns. Spring Integration is another one I used in a past life; the rest I haven’t used at all. Keep your eyes peeled, though: I plan to look into Temporal in a future blog post!
- Spring Integration: Spring’s answer to EIP, deeply integrated with the Spring ecosystem
- Broadway (Elixir): Data processing pipelines with back-pressure, batching, and fault tolerance built on GenStage
- Temporal (Python, TypeScript, Go, Java): Workflow orchestration with durable execution, essentially Process Manager as a service
## Wrapping Up the Series
We’ve come a long way over these twelve days:
- Day 1: Integration Styles — File Transfer, Shared Database, RPC, and Messaging. Why messaging wins for loose coupling.
- Day 2: Message Endpoints & Pipes and Filters — How applications connect to messaging systems and compose processing steps.
- Day 3: Message Channels — Point-to-Point vs Publish-Subscribe. The plumbing that connects everything.
- Day 4: Message Routing — Content-Based Router, Recipient List, Splitter, Aggregator. Getting messages where they need to go.
- Day 5: Message Types & Event Strategies — Command, Event, and Document messages. Fat vs thin events, event notification vs event-carried state.
- Day 6: Canonical Data Model & Transformation — Message Translator, Content Enricher, Content Filter, Claim Check. Shaping data as it flows.
- Day 7: Request-Reply & Correlation — Correlation Identifier, Return Address. Making async feel sync when you need it.
- Day 8: Webhook Delivery Platform — Guaranteed Delivery, Dead Letter Channel, retry strategies. Building production-grade delivery.
- Day 9: Routing Slip — Dynamic routing where the message carries its own itinerary.
- Day 10: Process Manager — Orchestrating multi-step workflows with state machines and message history.
- Day 11: Wire Tap & Control Bus — Observing message flows and managing infrastructure through messaging itself.
- Day 12: Frameworks — Apache Camel and Redpanda Connect. Standing on the shoulders of giants.
The Enterprise Integration Patterns book was published in 2003, but these patterns remain foundational. Whether you’re building microservices, event-driven systems, or data pipelines, you’ll encounter them. Now you understand what they are, when to use them, and how to implement them, both from scratch and with battle-tested frameworks.
## Going Deeper
The EIP book was just the beginning. Gregor Hohpe has continued evolving these ideas:
- Gregor’s Ramblings — Years of blog posts exploring integration, architecture, and cloud
- Conversation Patterns — Patterns for long-running, stateful message exchanges (think sagas and choreography)
- The Architect Elevator — Gregor’s newer writing on cloud architecture, serverless, and organizational dynamics
- 37 Things One Architect Knows About IT Transformation — How these patterns fit into broader digital transformation
The conversation patterns are particularly worth exploring. They extend EIP into territory we touched on with Process Manager: how do you coordinate complex, long-running interactions across distributed systems?
## What’s Next
Advent is over, but this is just the beginning. Throughout 2026, I’ll be diving deeper into these patterns and their implementation on modern cloud platforms. There’s a rich ecosystem of pattern catalogs worth exploring:
- Azure Cloud Design Patterns — Microsoft’s excellent catalog covering messaging, resilience, and data management patterns
- AWS Prescriptive Guidance Patterns — Implementation patterns for AWS services
- Microservices.io — Chris Richardson’s comprehensive microservices pattern catalog
- Cloud Architecture Patterns — AWS Well-Architected serverless lens
We’ll explore how Step Functions, EventBridge, Azure Service Bus, Cloud Workflows, and other managed services implement (or complicate) these timeless patterns. How does the Saga pattern look in Step Functions? What’s the Control Bus equivalent in Kubernetes? How do you build a Dead Letter Channel in EventBridge?
If you’ve enjoyed this series, stick around. There’s much more to come and I always like to dig in deep with examples.
Thanks for following along with the Advent of Enterprise Integration Patterns. Happy building!
## References
- Enterprise Integration Patterns by Gregor Hohpe and Bobby Woolf
- Apache Camel Documentation
- Redpanda Connect Documentation
- Bloblang Guide
Part of the Advent of Enterprise Integration Patterns series. Patterns from Enterprise Integration Patterns by Gregor Hohpe and Bobby Woolf.