Kafka Schema Evolution Demo (Avro)

This interactive demo explains schema evolution, compatibility rules and contract enforcement in Apache Kafka using Avro schemas.

It demonstrates how Spring Kafka applications should validate schemas before producing data, preventing breaking changes from reaching production.

Why Schema Contracts Matter

Kafka messages are just bytes on the wire; without an agreed schema, consumers cannot reliably interpret them. Schema Registry acts as a central contract authority between producers and consumers.
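As a point of reference (a minimal sketch, not code from this demo), a Spring Kafka producer is typically wired to Confluent's Avro serializer so that every record it sends is serialized against a schema known to the registry. The broker and registry URLs below are placeholders.

import java.util.HashMap;
import java.util.Map;

import org.apache.kafka.clients.producer.ProducerConfig;
import org.apache.kafka.common.serialization.StringSerializer;
import org.springframework.kafka.core.DefaultKafkaProducerFactory;
import org.springframework.kafka.core.KafkaTemplate;

import io.confluent.kafka.serializers.KafkaAvroSerializer;

public class OrderProducerConfig {

    // Builds a KafkaTemplate whose value serializer talks to Schema Registry.
    public KafkaTemplate<String, Object> orderKafkaTemplate() {
        Map<String, Object> props = new HashMap<>();
        props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class);
        props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, KafkaAvroSerializer.class);

        // The Avro serializer looks up and verifies the OrderEvent schema here.
        props.put("schema.registry.url", "http://localhost:8081");
        // Do not auto-register schemas from the application: the schema must
        // already exist in the registry, so the contract stays the source of truth.
        props.put("auto.register.schemas", false);

        return new KafkaTemplate<>(new DefaultKafkaProducerFactory<>(props));
    }
}

Disabling auto.register.schemas keeps the application from silently publishing new contract versions; registration becomes an explicit, reviewed step.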

This demo shows how these rules apply in practice.

Schema Compatibility Rules (Avro)

Schema Registry validates every new schema version against the subject's configured compatibility mode:

BACKWARD (default) – consumers using the new schema can still read data written with the previous schema; allowed changes are deleting fields and adding fields that have a default.
FORWARD – consumers using the previous schema can still read data written with the new schema; allowed changes are adding fields and deleting fields that have a default.
FULL – both directions must hold, so only fields with defaults may be added or removed.
NONE – compatibility checking is disabled.

Interactive Schema Evolution Demo

Schema v1 – Baseline
{
  "type": "record",
  "name": "OrderEvent",
  "fields": [
    { "name": "orderId", "type": "string" },
    { "name": "amount", "type": "double" }
  ]
}

Initial contract used by all consumers.

Schema v2 – Compatible
{
  "type": "record",
  "name": "OrderEvent",
  "fields": [
    { "name": "orderId", "type": "string" },
    { "name": "amount", "type": "double" },
    { "name": "currency", "type": ["null", "string"], "default": null }
  ]
}

Adds an optional field with a null default → safe evolution; records written with v1 remain readable and resolve currency to null.
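To make "safe" concrete, the sketch below (written for this explanation, not taken from the demo sources) serializes a record with v1 and deserializes it with v2; Avro's schema resolution fills the missing currency field with its declared default.

import java.io.ByteArrayOutputStream;

import org.apache.avro.Schema;
import org.apache.avro.generic.GenericDatumReader;
import org.apache.avro.generic.GenericDatumWriter;
import org.apache.avro.generic.GenericRecord;
import org.apache.avro.generic.GenericRecordBuilder;
import org.apache.avro.io.BinaryEncoder;
import org.apache.avro.io.DecoderFactory;
import org.apache.avro.io.EncoderFactory;

public class BackwardCompatibilityDemo {

    public static void main(String[] args) throws Exception {
        Schema v1 = new Schema.Parser().parse(V1_JSON);
        Schema v2 = new Schema.Parser().parse(V2_JSON);

        // A record produced under the old contract (schema v1).
        GenericRecord oldRecord = new GenericRecordBuilder(v1)
                .set("orderId", "o-42")
                .set("amount", 19.99)
                .build();

        ByteArrayOutputStream out = new ByteArrayOutputStream();
        BinaryEncoder encoder = EncoderFactory.get().binaryEncoder(out, null);
        new GenericDatumWriter<GenericRecord>(v1).write(oldRecord, encoder);
        encoder.flush();

        // The same bytes read by a consumer already upgraded to schema v2:
        // Avro resolves the missing "currency" field to its null default.
        GenericDatumReader<GenericRecord> reader = new GenericDatumReader<>(v1, v2);
        GenericRecord upgraded = reader.read(null,
                DecoderFactory.get().binaryDecoder(out.toByteArray(), null));

        System.out.println(upgraded); // {"orderId": "o-42", "amount": 19.99, "currency": null}
    }

    // Same schemas as in the demo, inlined so the example is self-contained.
    private static final String V1_JSON =
        "{\"type\":\"record\",\"name\":\"OrderEvent\",\"fields\":["
      + "{\"name\":\"orderId\",\"type\":\"string\"},{\"name\":\"amount\",\"type\":\"double\"}]}";

    private static final String V2_JSON =
        "{\"type\":\"record\",\"name\":\"OrderEvent\",\"fields\":["
      + "{\"name\":\"orderId\",\"type\":\"string\"},{\"name\":\"amount\",\"type\":\"double\"},"
      + "{\"name\":\"currency\",\"type\":[\"null\",\"string\"],\"default\":null}]}";
}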

Schema v3 – Breaking
{
  "type": "record",
  "name": "OrderEvent",
  "fields": [
    { "name": "orderId", "type": "string" }
  ]
}

Removes the required amount field → breaking change: consumers still on the old schema expect amount and can no longer read the new records (fails FORWARD and FULL compatibility checks).

Select schemas and run the compatibility check.
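The same check can also be scripted with Avro's own validator API. The sketch below is hand-written for this explanation and assumes Avro core is on the classpath; its mutual-read strategy roughly corresponds to Schema Registry's FULL mode.

import java.util.List;

import org.apache.avro.Schema;
import org.apache.avro.SchemaValidationException;
import org.apache.avro.SchemaValidator;
import org.apache.avro.SchemaValidatorBuilder;

public class CompatibilityCheck {

    public static void main(String[] args) {
        Schema v1 = new Schema.Parser().parse(V1_JSON);
        Schema v2 = new Schema.Parser().parse(V2_JSON);
        Schema v3 = new Schema.Parser().parse(V3_JSON);

        // Mutual-read validation: old and new schemas must each be able to
        // read data written with the other.
        SchemaValidator validator = new SchemaValidatorBuilder()
                .mutualReadStrategy()
                .validateLatest();

        check(validator, v2, v1); // COMPATIBLE: optional field with a default
        check(validator, v3, v1); // BREAKING: required field "amount" removed
    }

    private static void check(SchemaValidator validator, Schema candidate, Schema existing) {
        try {
            validator.validate(candidate, List.of(existing));
            System.out.println("COMPATIBLE");
        } catch (SchemaValidationException e) {
            System.out.println("BREAKING: " + e.getMessage());
        }
    }

    private static final String V1_JSON =
        "{\"type\":\"record\",\"name\":\"OrderEvent\",\"fields\":["
      + "{\"name\":\"orderId\",\"type\":\"string\"},{\"name\":\"amount\",\"type\":\"double\"}]}";

    private static final String V2_JSON =
        "{\"type\":\"record\",\"name\":\"OrderEvent\",\"fields\":["
      + "{\"name\":\"orderId\",\"type\":\"string\"},{\"name\":\"amount\",\"type\":\"double\"},"
      + "{\"name\":\"currency\",\"type\":[\"null\",\"string\"],\"default\":null}]}";

    private static final String V3_JSON =
        "{\"type\":\"record\",\"name\":\"OrderEvent\",\"fields\":["
      + "{\"name\":\"orderId\",\"type\":\"string\"}]}";
}

Swapping mutualReadStrategy() for canReadStrategy() or canBeReadStrategy() approximates the BACKWARD and FORWARD modes respectively.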

How This Applies to Spring Kafka

In real applications, this logic is enforced automatically by Schema Registry and by tools like:

👉 spring-kafka-contract-starter
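Under the hood, such tooling ultimately asks Schema Registry the same question the demo does. A hand-rolled pre-flight check might look like the sketch below; this is not the starter's API, and the Confluent client classes and signatures shown are assumptions based on recent schema-registry client versions.

import io.confluent.kafka.schemaregistry.avro.AvroSchema;
import io.confluent.kafka.schemaregistry.client.CachedSchemaRegistryClient;
import io.confluent.kafka.schemaregistry.client.SchemaRegistryClient;

public class PreflightSchemaCheck {

    public static void main(String[] args) throws Exception {
        // 100 = number of schemas the client caches locally.
        SchemaRegistryClient registry =
                new CachedSchemaRegistryClient("http://localhost:8081", 100);

        // Subject naming ("orders" topic with the default TopicNameStrategy) is an assumption.
        String subject = "orders-value";

        // Ask the registry whether the candidate schema respects the subject's
        // configured compatibility mode before any message is produced with it.
        boolean compatible = registry.testCompatibility(subject, new AvroSchema(CANDIDATE_SCHEMA));
        if (!compatible) {
            throw new IllegalStateException(
                    "Candidate schema for " + subject + " would break the registered contract");
        }
    }

    private static final String CANDIDATE_SCHEMA =
        "{\"type\":\"record\",\"name\":\"OrderEvent\",\"fields\":["
      + "{\"name\":\"orderId\",\"type\":\"string\"},{\"name\":\"amount\",\"type\":\"double\"},"
      + "{\"name\":\"currency\",\"type\":[\"null\",\"string\"],\"default\":null}]}";
}

A failed check here stops the deployment or the send before any incompatible bytes ever reach the topic.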

With contract enforcement enabled: