Reference Data

Note

This page is aimed at systems without strict performance requirements.

For systems with strict performance requirements, it is not unusual for the cluster to run on the absolute minimum of reference data required to operate, and to use alternative streams - typically via Aeron Archive - to move reference data to the gateways while bypassing the cluster. If your performance requirements are strict enough, consider a consulting engagement.

Given the constraints of both Little's Law and Replicated State Machines, care must be taken with the efficiency of business logic. Depending on the kind of system you're developing, this can mean that you need to bring large volumes of reference data into the cluster.

When bootstrapping the cluster, it's typical to stream all the reference data that the business logic needs into the cluster before it starts accepting transactional requests. Moreover, if you're going to be moving message customization and validation out to the edges (i.e. Cluster Clients), it's likely that you will need to replicate all or subsets of the cluster's reference data.

If this applies to your system, then four questions need to be answered:

  • how to deal with reference data in snapshots?
  • how to bring reference data into the cluster?
  • how to store reference data within the cluster?
  • how to replicate reference data to cluster clients?

Cluster Snapshots & Reference Data

If you're working with pure Real Logic libraries, and need to perform snapshotting, you're likely to end up with a system that looks something like this:

[Diagram: inbound commands and outbound events, encoded with SBE, enter and leave the cluster via flyweights; the domain model holds reference data as flyweights + DirectBuffers and/or POJOs with the best data structures for the job; a separate SBE-based snapshot model, optionally accompanied by the SBE IR, writes to the snapshot medium.]

In this model, your cluster codec is SBE based; your domain model is likely a mixture of Java objects and the most appropriate data structures (likely including some from Agrona) and/or flyweights (from SBE or otherwise) on top of DirectBuffers; and your snapshot model is an independent, SBE-based model.

By keeping a clear distinction between your commands & events, your domain model and your snapshot model, the cluster:

  • allows commands and events to evolve at their own pace, with independent versioning as required
  • can have the domain model pre-compute some data as a part of a reference data load
  • has flexibility to modify the domain model as needed, without needing to change the snapshot model
  • can bring extra metadata into the snapshot model, such as the Git SHA digest of the process that produced it. This can be very useful in production debugging scenarios (see the sketch after this list).
  • can version the snapshot model independently, which can simplify upgrade procedures
  • can include the SBE Intermediate Representation (IR) with the snapshots, allowing for simpler introspection of a snapshot at runtime with OTF decoders
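
To make the metadata point concrete, here is a minimal sketch of embedding a snapshot model version and a Git SHA at the head of a snapshot message using Agrona buffers. In a real system the layout would come from your SBE-generated encoders; the hand-rolled offsets, field names and SHA value here are purely illustrative assumptions.

```java
import org.agrona.MutableDirectBuffer;
import org.agrona.concurrent.UnsafeBuffer;

import java.nio.ByteBuffer;
import java.nio.charset.StandardCharsets;

// Sketch only: embeds build metadata at the head of a snapshot message.
// SBE-generated encoders would normally define this layout.
public final class SnapshotMetadataExample
{
    static final int VERSION_OFFSET = 0;    // snapshot model version
    static final int SHA_LENGTH_OFFSET = 4; // length of the UTF-8 SHA that follows
    static final int SHA_OFFSET = 8;

    public static int encodeMetadata(
        final MutableDirectBuffer buffer, final int snapshotVersion, final String gitSha)
    {
        final byte[] sha = gitSha.getBytes(StandardCharsets.UTF_8);
        buffer.putInt(VERSION_OFFSET, snapshotVersion);
        buffer.putInt(SHA_LENGTH_OFFSET, sha.length);
        buffer.putBytes(SHA_OFFSET, sha);
        return SHA_OFFSET + sha.length; // total encoded length
    }

    public static void main(final String[] args)
    {
        final UnsafeBuffer buffer = new UnsafeBuffer(ByteBuffer.allocateDirect(128));
        final int length = encodeMetadata(buffer, 3, "9f4c2d1"); // hypothetical SHA
        System.out.println("encoded " + length + " bytes, version " + buffer.getInt(VERSION_OFFSET));
    }
}
```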

The alternative approach is to unify some or all of the commands & events, the domain model and the snapshot model. This too has benefits, but it's likely you would not want to use SBE for this approach.

Bringing Reference Data into the Cluster

At cluster start, and likely at any time during operations, the cluster will need to adopt new or updated reference data. There are two scenarios to be concerned with here:

  • If the reference data is a small data set, loading it is a simple task - just have a Cluster Client send it in at maximum rate. Standard Aeron back-pressure mechanisms might kick in, but it's unlikely that any particular protocol or extra complexity would be needed beyond some small buffer size changes
  • If the reference data set is large, you're likely to hit back pressure on data load. To solve this, you will need to introduce a reference data replication protocol. This protocol may operate one way for data load, and another way for data updates.

A possible protocol for bringing large data volumes into the cluster might operate as follows:

  • the cluster starts and loads any snapshots required
  • a data source cluster client starts, connects to the cluster, and introduces itself as the data source cluster client
  • the cluster requests a reference data load - either everything, or all the data changed since the most recent snapshot loaded - from the data source cluster client
  • the data source cluster client loads the data from some data store and chunks it up
  • the data source cluster client submits the first chunk to the cluster
  • the cluster acknowledges the chunk and requests the next chunk
  • this process then repeats until all chunks are available in the cluster

By having the cluster request each chunk, the risk of back pressure is significantly reduced when compared with having the data source cluster client send all chunks in after the initial request for a reference data load.
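
A minimal, in-process sketch of this request/acknowledge exchange is shown below. All names (DataSourceClient, chunkSize and so on) are illustrative rather than part of any Aeron API; in a real system each call would travel as a cluster ingress or egress message.

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

// Sketch only: the chunked load protocol collapsed into direct method calls.
public final class ChunkedLoadSketch
{
    // The data source cluster client: loads the full data set and serves it chunk by chunk.
    static final class DataSourceClient
    {
        private final byte[] data;
        private final int chunkSize;

        DataSourceClient(final byte[] data, final int chunkSize)
        {
            this.data = data;
            this.chunkSize = chunkSize;
        }

        int chunkCount()
        {
            return (data.length + chunkSize - 1) / chunkSize;
        }

        byte[] chunk(final int index) // responds to the cluster's "send chunk n" request
        {
            final int from = index * chunkSize;
            return Arrays.copyOfRange(data, from, Math.min(from + chunkSize, data.length));
        }
    }

    public static void main(final String[] args)
    {
        final DataSourceClient client = new DataSourceClient("example-reference-data".getBytes(), 8);
        final List<byte[]> received = new ArrayList<>();

        // The cluster drives the exchange: request a chunk, apply it, acknowledge,
        // then request the next - so it is never flooded with unrequested data.
        for (int i = 0; i < client.chunkCount(); i++)
        {
            received.add(client.chunk(i)); // request + response collapsed into one call here
        }
        System.out.println("loaded " + received.size() + " chunks");
    }
}
```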

Storing Reference Data in the Cluster

How reference data is stored within the cluster depends on your access patterns and performance requirements. For example, it's typical for large datasets to be indexed on multiple fields (you may need to iterate through a collection of reference data based on values in different fields), so be sure that whichever solution you choose supports this.

Using standard Java objects held within efficient data structures

  • can offer high performance, albeit slower than that achievable with DirectBuffers & Flyweights
  • subject to heavier GC pauses than DirectBuffers & Flyweights - care must be taken when allocating and adding new objects
  • far simpler to reason about since it's just plain old Java objects
  • simple to debug, since standard Java IDE Type Renderers will likely support all fields in the objects
  • indexing can be done using various techniques - multiple hashmaps, bitsets, etc. (see the sketch after this list)
  • easy to add fields and new business logic
  • transactional processing can be very hard to achieve
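
As a sketch of the multiple-hashmap indexing technique mentioned in the list above, the following indexes a hypothetical Instrument type by both id and symbol using Agrona's collections (the Instrument fields are illustrative):

```java
import org.agrona.collections.Long2ObjectHashMap;
import org.agrona.collections.Object2ObjectHashMap;

// Sketch only: one store, two indexes over the same plain Java objects.
public final class InstrumentStore
{
    public record Instrument(long id, String symbol, String venue) {}

    private final Long2ObjectHashMap<Instrument> byId = new Long2ObjectHashMap<>();
    private final Object2ObjectHashMap<String, Instrument> bySymbol = new Object2ObjectHashMap<>();

    // Every index must be maintained on each write so lookups stay consistent.
    public void put(final Instrument instrument)
    {
        byId.put(instrument.id(), instrument);
        bySymbol.put(instrument.symbol(), instrument);
    }

    public Instrument byId(final long id)
    {
        return byId.get(id);
    }

    public Instrument bySymbol(final String symbol)
    {
        return bySymbol.get(symbol);
    }

    public static void main(final String[] args)
    {
        final InstrumentStore store = new InstrumentStore();
        store.put(new Instrument(1L, "AAPL", "XNAS"));
        System.out.println(store.bySymbol("AAPL").id()); // both indexes resolve the same object
    }
}
```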

Using DirectBuffers & Flyweights

  • can be highly advantageous as you're able to reduce the impact of GC
  • indexing can be more complex, and possibly result in more allocations and GC activity
  • can involve large allocations at runtime if you're forced to resize the buffers holding reference data due to volume increases
  • developers need to be constantly aware of the position of the flyweight on the DirectBuffer - is it about to overwrite the wrong data? (see the flyweight sketch after this list)
  • debugging can become much more complex since Java IDE Type Renderers don't support DirectBuffers
  • transactional processing can be simpler to implement - although events emitted during processing may make this more complex
  • limited to certain data types; alternatively, you may need to perform suboptimal data conversions to properly store complex types such as BigDecimal.
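
Below is a minimal flyweight sketch over fixed-length records in an Agrona buffer, illustrating both the allocation-free access and the position-awareness discussed above. The record layout, field names and offsets are invented for illustration; in practice SBE would typically generate this code.

```java
import org.agrona.MutableDirectBuffer;
import org.agrona.concurrent.UnsafeBuffer;

import java.nio.ByteBuffer;

// Sketch only: a hand-rolled flyweight over fixed-length records.
public final class PriceFlyweight
{
    static final int ID_OFFSET = 0;
    static final int PRICE_OFFSET = 8;
    static final int RECORD_LENGTH = 16;

    private MutableDirectBuffer buffer;
    private int offset; // the caller must position the flyweight before each use

    public PriceFlyweight wrap(final MutableDirectBuffer buffer, final int recordIndex)
    {
        this.buffer = buffer;
        this.offset = recordIndex * RECORD_LENGTH; // wrong index here corrupts another record
        return this;
    }

    public long id()
    {
        return buffer.getLong(offset + ID_OFFSET);
    }

    public PriceFlyweight id(final long id)
    {
        buffer.putLong(offset + ID_OFFSET, id);
        return this;
    }

    public long priceMantissa()
    {
        return buffer.getLong(offset + PRICE_OFFSET);
    }

    public PriceFlyweight priceMantissa(final long priceMantissa)
    {
        buffer.putLong(offset + PRICE_OFFSET, priceMantissa);
        return this;
    }

    public static void main(final String[] args)
    {
        final UnsafeBuffer buffer = new UnsafeBuffer(ByteBuffer.allocateDirect(RECORD_LENGTH * 1024));
        final PriceFlyweight flyweight = new PriceFlyweight();
        flyweight.wrap(buffer, 42).id(1001L).priceMantissa(995_000L); // write record 42
        System.out.println(flyweight.wrap(buffer, 42).id());          // re-wrap before reading
    }
}
```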

Replicating Reference Data to Cluster Clients

How you store the reference data will impact options for replicating the data to cluster clients.

  • If you're using plain old Java objects to store your data, you could consider reusing or adapting the protocol used to push data into the cluster during bootstrap. Care should be taken to ensure that all cluster clients agree on what the reference data is at any moment in time.
  • If you're using DirectBuffers, you could ignore the fact that the data contained is reference data and simply define a protocol to replicate byte buffers. Care needs to be taken around changing data if large volumes are held within DirectBuffers - for example, if you move one chunk of buffer data to a cluster client and a subsequent cluster command modifies the reference data within the DirectBuffer before you've completed replication, how do you ensure that the replicated DirectBuffer remains valid? One mitigation is sketched below.
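
One way to answer that question is to version the reference data and restart replication if a mutation races the copy. The sketch below models this with a plain byte array standing in for the DirectBuffer; the names and the retry strategy are illustrative assumptions, not an Aeron facility.

```java
// Sketch only: version-guarded chunked replication of a reference data buffer.
public final class VersionedReplicationSketch
{
    private long version = 0; // incremented on every reference data mutation
    private final byte[] store = new byte[64 * 1024]; // stand-in for the DirectBuffer

    void mutate(final int index, final byte value)
    {
        store[index] = value;
        version++;
    }

    byte[] replicateConsistently(final int chunkSize)
    {
        while (true)
        {
            final long startVersion = version;
            final byte[] copy = new byte[store.length];

            for (int from = 0; from < store.length; from += chunkSize)
            {
                final int length = Math.min(chunkSize, store.length - from);
                System.arraycopy(store, from, copy, from, length); // ship one chunk
            }

            if (version == startVersion)
            {
                return copy; // no mutation raced the copy; the replicated image is valid
            }
            // otherwise a command changed the data mid-copy: retry from the start
        }
    }

    public static void main(final String[] args)
    {
        final VersionedReplicationSketch sketch = new VersionedReplicationSketch();
        sketch.mutate(0, (byte)1);
        System.out.println("replicated " + sketch.replicateConsistently(4096).length + " bytes");
    }
}
```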