
Staying in Sync: From Transactions to Streams

A talk at QCon London, London, UK, 07 Mar 2016

Abstract

For the very simplest applications, a single database is sufficient, and then life is pretty good. But as your application needs to do more, you often find that no single technology can do everything you need to do with your data. And so you end up having to combine several databases, caches, search indexes, message queues, analytics tools, machine learning systems, and so on, into a heterogeneous infrastructure…

Now you have a new problem: your data is stored in several different places, and when it changes in one place, you have to keep the other places in sync, too. That's not too bad while all your systems are up and running smoothly, but what if some of them have failed, some are running slow, and some are running buggy code that was deployed by accident?

Keeping data in sync across different systems in the face of failure is not easy. Distributed transactions and two-phase commit have long been regarded as the “correct” solution, but they are slow and operationally problematic, so many systems can’t afford to use them.
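To make the cost concrete, here is a minimal, hedged sketch of a two-phase commit coordinator (all names are illustrative; a real implementation must persist its decision and handle timeouts and crash recovery). The coordinator commits only if every participant votes yes in the prepare phase, which means the whole transaction blocks whenever any one participant is slow or unreachable:

```python
# Illustrative sketch of two-phase commit, not a production implementation.

class Participant:
    """A resource (database, index, ...) taking part in the transaction."""

    def __init__(self, name, will_commit=True):
        self.name = name
        self.will_commit = will_commit
        self.state = "idle"

    def prepare(self):
        # Phase 1: vote yes only if we can guarantee a later commit.
        self.state = "prepared" if self.will_commit else "aborted"
        return self.will_commit

    def commit(self):
        self.state = "committed"

    def abort(self):
        self.state = "aborted"


def two_phase_commit(participants):
    # Phase 1: collect votes from all participants.
    if all(p.prepare() for p in participants):
        # Phase 2: everyone voted yes, so commit everywhere.
        for p in participants:
            p.commit()
        return "committed"
    # Any "no" vote (or an unreachable participant) aborts the whole thing.
    for p in participants:
        p.abort()
    return "aborted"


# One reluctant participant forces every other participant to abort:
result = two_phase_commit([Participant("db"), Participant("index", will_commit=False)])
```

A single slow or failed participant stalls everyone at the “prepared” stage, which is the operational problem that makes many systems avoid this protocol.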

In this talk we’ll explore using event streams and Kafka to keep data in sync across heterogeneous systems, and compare this approach with distributed transactions: what consistency guarantees can it offer, and how does it fare in the face of failure?
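The core idea of the stream-based approach can be sketched in a few lines (a hedged illustration with assumed names, not Kafka’s actual API): all writes go into a single ordered changelog, and each downstream system independently replays that log. Because every consumer applies the same events in the same order, the derived stores converge to the same state, as in state machine replication:

```python
# Hedged sketch of log-based synchronization: one ordered changelog,
# applied independently by each consumer. Names and event format are
# invented for illustration; this is not Kafka's API.

changelog = [
    ("set", "user:1", "Alice"),
    ("set", "user:2", "Bob"),
    ("set", "user:1", "Alicia"),   # a later event overwrites the earlier one
    ("delete", "user:2", None),
]

def apply_log(log):
    """Replay the changelog in order to build a derived key-value store."""
    store = {}
    for op, key, value in log:
        if op == "set":
            store[key] = value
        elif op == "delete":
            store.pop(key, None)
    return store

cache = apply_log(changelog)         # e.g. an in-memory cache
search_index = apply_log(changelog)  # e.g. a search index
# Same events, same order => both consumers end up with identical state.
```

Unlike two-phase commit, a slow or failed consumer doesn’t block the others; it simply resumes replaying the log from where it left off when it recovers.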

References

  1. Mahesh Balakrishnan, Dahlia Malkhi, Ted Wobber, et al.: “Tango: Distributed Data Structures over a Shared Log,” at 24th ACM Symposium on Operating Systems Principles (SOSP), pages 325–340, November 2013. ISBN: 9781450323888, doi:10.1145/2517349.2522732
  2. Molly Bartlett Dishman and Martin Fowler: “Agile Architecture,” at O’Reilly Software Architecture Conference, March 2015.
  3. Shirshanka Das, Chavdar Botev, Kapil Surlaker, et al.: “All Aboard the Databus!,” at 3rd ACM Symposium on Cloud Computing (SoCC), October 2012.
  4. Pat Helland: “Life beyond Distributed Transactions: an Apostate’s Opinion,” at 3rd Biennial Conference on Innovative Data Systems Research (CIDR), pages 132–141, January 2007.
  5. Pat Helland: “Immutability Changes Everything,” at 7th Biennial Conference on Innovative Data Systems Research (CIDR), January 2015.
  6. Martin Kleppmann: Designing Data-Intensive Applications. O’Reilly Media, to appear.
  7. Jay Kreps: I ♥︎ Logs. O’Reilly Media, September 2014. ISBN: 978-1-4919-0932-4
  8. Jay Kreps: “Putting Apache Kafka to use: a practical guide to building a stream data platform,” blog.confluent.io, 25 February 2015.
  9. Leslie Lamport: “Time, Clocks, and the Ordering of Events in a Distributed System,” Communications of the ACM, volume 21, number 7, pages 558–565, July 1978. doi:10.1145/359545.359563
  10. Neha Narkhede: “Announcing Kafka Connect: Building large-scale low-latency data pipelines,” confluent.io, 18 February 2016.
  11. Fred B. Schneider: “Implementing Fault-Tolerant Services Using the State Machine Approach: A Tutorial,” ACM Computing Surveys, volume 22, number 4, pages 299–319, December 1990.
  12. Yogeshwer Sharma, Philippe Ajoux, Petchean Ang, et al.: “Wormhole: Reliable Pub-Sub to Support Geo-replicated Internet Services,” at 12th USENIX Symposium on Networked Systems Design and Implementation (NSDI), May 2015.
  13. Martin Thompson: “Single Writer Principle,” mechanical-sympathy.blogspot.co.uk, 22 September 2011.
  14. Vaughn Vernon: Implementing Domain-Driven Design. Addison-Wesley Professional, February 2013. ISBN: 0321834577