05-Oct-2022 by Damian Maclennan
There's no such thing as real time.
There’s no such thing as "Real Time". There are just acceptable degrees of latency.
This has become a little catch phrase of mine. I’ve even made a meme.
A lot of people ask for data to be displayed in "real time", and the conversation usually goes a little like this:
Them: "It needs to be real time."
Me: "There’s no such thing as real time, just acceptable degrees of latency."
Them: "What do you mean? It needs to be real time."
Me: "There is no such thing. Between transactions and locking, network latency, request processing, IO and so on, it’s not real time. There will be a delay."
Them: "OK, well, as quickly as possible."
Me: "What would be an acceptable amount of time?"
Them: "Well, a few seconds at most."
Me: "That gives us a whole bunch of options. When does your data change?"
People often state a need for "real time" without really thinking it through. While this conversation might seem somewhat obtuse, it actually forces people to think about the real requirements and constraints, and that is where the real design can start to happen.
If, for example, your data changes every minute but you want changes reflected within a couple of seconds, you have a whole range of options: you could use CQRS to build a read model in a NoSQL database, refresh a Redis cache after some processing, or even cache a whole page or API response.
As an aside, somebody once told me that a cache is "admitting you can be wrong for a period of time", and that has stuck with me for years.
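That idea can be sketched in a few lines. Here is a minimal, illustrative time-to-live cache in Python, where the "acceptable degree of latency" is an explicit parameter rather than an unstated assumption. The names (`TtlCache`, `loader`) and the example query are hypothetical, not from any particular library:

```python
import time

class TtlCache:
    """A minimal TTL cache: each entry is allowed to be 'wrong'
    for up to ttl_seconds before being reloaded from the source."""

    def __init__(self, ttl_seconds, loader):
        self.ttl = ttl_seconds
        self.loader = loader      # function that fetches a fresh value
        self._value = None
        self._fetched_at = None   # None means never loaded

    def get(self):
        now = time.monotonic()
        if self._fetched_at is None or now - self._fetched_at > self.ttl:
            self._value = self.loader()   # stale (or empty): refresh
            self._fetched_at = now
        return self._value

# Hypothetical usage: an expensive query refreshed at most every 2 seconds.
calls = 0
def expensive_query():
    global calls
    calls += 1
    return "report data"

cache = TtlCache(ttl_seconds=2, loader=expensive_query)
first = cache.get()    # hits the source
second = cache.get()   # within the TTL, served from the cache
```

The point isn’t the code, it’s the conversation: `ttl_seconds=2` is a number you can only write down once someone has answered "what would be an acceptable amount of time?"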
Gently pushing back on smart-sounding yet ultimately meaningless requirements is a great way to begin some system modelling.