Stable Release 0.4.263
It’s been about a year since the last stable release was announced. The criteria used to determine whether a release is stable are described in a previous post. While there have been many stable releases according to these criteria (10 in the first half of 2024), they have not been publicized online. This post rectifies that by going over the highlights since the last stable release.
Release 0.4.263 is based on commit e95a269, which was merged on June 4th, 2024. There have been 501 commits since the last stable release, from 34 unique contributors (including 14 new ones). Shoutout to the most active committers during that timespan: Sourav Maji, Nisarg Thakkar, Jialin Liu and Sushant Mane!
User-Facing Changes
Let’s start by going over changes to user-facing APIs. There were several additions, and a few removals as well.
Write APIs
The main theme in terms of write APIs is to make it easier to push data to Venice, especially regarding partial update operations. Under the hood, Venice has a concept of “write compute schemas” which are generated automatically by the system whenever a value schema is registered. Partial update operations are encoded according to these schemas, and the first iteration of the API required users to pass data to Venice using that system-generated schema. This worked, and will continue to be supported, but it was a bit tedious and unproductive, so there are now new APIs which make it easier to perform these operations.
Update Builder API
The UpdateBuilder is a Java DSL for declaring operations for each field of interest, including updating the field’s value, and in the case of collection fields, adding or removing elements from them.
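As a rough sketch, a partial update touching only a couple of fields could look like the following. Note that the import path, method names, and field names here reflect my reading of the open source API and are assumptions that may differ across versions:

```java
import com.linkedin.venice.writer.update.UpdateBuilder; // assumed package location
import java.util.Arrays;
import java.util.Collections;

// Sketch only: given an UpdateBuilder obtained from whichever write API is in
// use, declare operations for just the fields of interest. All field names
// below are hypothetical.
void buildProfileUpdate(UpdateBuilder builder) {
  builder.setNewFieldValue("displayName", "Alice");        // replace a scalar field
  builder.setElementsToAddToListField("followers",
      Arrays.asList("bob", "carol"));                      // append to a list field
  builder.setElementsToRemoveFromListField("followers",
      Collections.singletonList("mallory"));               // remove list elements
}
```

Fields not mentioned in the builder are left untouched, which is the point: callers no longer need to read-modify-write the whole value.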
Incremental Push Update API
For Incremental Push Jobs, the user can now pass as input any valid subset of fields, even if it does not correspond to a previously registered schema, and the input will be converted to partial updates.
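For illustration, a minimal way to launch such a job from Java might look like the sketch below. The property keys, endpoint, store name, and paths are all assumptions that should be verified against the push job documentation for your version:

```java
import com.linkedin.venice.hadoop.VenicePushJob;
import java.util.Properties;

public class IncrementalPushExample {
  public static void main(String[] args) throws Exception {
    // All property keys and values below are assumptions.
    Properties props = new Properties();
    props.setProperty("venice.discover.urls", "http://venice-controller:5555"); // hypothetical endpoint
    props.setProperty("venice.store.name", "user-profiles");                    // hypothetical store
    props.setProperty("input.path", "/jobs/user-profiles-delta");               // input containing a subset of fields
    props.setProperty("incremental.push", "true");                              // rows are applied as partial updates

    try (VenicePushJob pushJob = new VenicePushJob("user-profiles-incremental", props)) {
      pushJob.run();
    }
  }
}
```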
Online Producer API
The Online Producer is a new client library which provides a producer-style API. The fundamental design of Venice remains unchanged, in that writes are still asynchronous (eventually consistent), even when using this new client API. The main benefit is that it makes it easier to emit writes to Venice from any service, rather than requiring a stream processing application.
The producer API supports all operation types, including put, delete and updates (via the new Update Builder API). In addition, it supports optionally specifying a logical timestamp.
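Here is a hedged sketch of what producing to Venice looks like with this API. The factory parameters are simplified and may not match the exact signature in your version, and the store and field names are hypothetical:

```java
import com.linkedin.venice.client.store.ClientConfig;
import com.linkedin.venice.producer.VeniceProducer;
import com.linkedin.venice.producer.online.OnlineProducerFactory;
import com.linkedin.venice.utils.VeniceProperties;
import org.apache.avro.generic.GenericRecord;
import java.util.Properties;

// Sketch only: create a producer for a store and emit each operation type.
void produceExamples(GenericRecord profileRecord) {
  VeniceProducer<String, GenericRecord> producer = OnlineProducerFactory.createProducer(
      ClientConfig.defaultGenericClientConfig("user-profiles"), // hypothetical store name
      new VeniceProperties(new Properties()),                   // producer configs
      null);                                                    // optional hook, omitted here

  producer.asyncPut("user-123", profileRecord);           // full put
  producer.asyncDelete("user-456");                       // delete
  producer.asyncUpdate("user-789", builder ->             // partial update via the Update Builder API
      builder.setNewFieldValue("displayName", "Alice"));  // hypothetical field

  // A logical timestamp can optionally be specified, e.g. to backdate a write:
  producer.asyncPut(System.currentTimeMillis() - 60_000, "user-123", profileRecord);
}
```

Each of these calls is asynchronous, consistent with the eventually consistent write path described above.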
Read APIs
In addition to new write APIs, there has also been innovation in terms of reading data out of Venice.
Note that many of the additions in this section are still considered experimental, as will be described in more detail below. Unless indicated otherwise, this means the APIs could change or be removed in the future.
Fast Client API
The Fast Client already existed in the previous stable release, but it was scoped down to single get operations only. The single get functionality has been tested at scale for a long time and is now very stable. In addition, the Fast Client has reached functional parity with the Thin Client, now that batch get and read compute operations are supported. The batch get and read compute APIs should still be considered experimental in the sense that they may require further stabilization in the near term, but they are not at risk of being changed or removed.
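Since the Fast Client implements the same store client interface as the Thin Client, a batch get looks the same at the call site. A minimal sketch, with client construction (which involves transport-specific setup) omitted:

```java
import com.linkedin.venice.client.store.AvroGenericStoreClient;
import org.apache.avro.generic.GenericRecord;
import java.util.Map;
import java.util.Set;
import java.util.concurrent.CompletableFuture;

// Sketch: issue a batch get against an already-constructed Fast Client.
Map<String, GenericRecord> fetchProfiles(
    AvroGenericStoreClient<String, GenericRecord> client,
    Set<String> keys) throws Exception {
  CompletableFuture<Map<String, GenericRecord>> future = client.batchGet(keys);
  return future.get(); // keys with no value are typically absent from the result map
}
```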
Moreover, there is one more experimental client configuration available, which changes the data transport implementation from R2 to gRPC (this also requires configuring the server side accordingly).
Finally, server-side read quota enforcement is now supported (whereas before, quota was only enforced on the router side, meaning Fast Client applications were never subject to it). As such, Fast Client usage can now result in QuotaExceededExceptions (429 status code), which was not the case before. This is an important milestone in the journey towards making Fast Client the default client for remote queries to Venice.
Da Vinci Client API
For the Da Vinci Client, one client configuration has been removed: the Non-Local Access Policy. Over 4 years of running Da Vinci Client use cases in production, we have not yet seen a scenario where this mode was needed. In addition, we have seen that some users misunderstood the configuration, enabled it, and suffered production outages as a result. For these reasons, we have decided to remove this functionality.
There are also two new experimental client configurations:
The Da Vinci Record Transformer is a mode where the user can register a callback that will be invoked for every record consumed by Da Vinci (see the sketch after this list). The callback returns a record, which can be either the same one which was passed in or a different one. This can be used to perform client-side filtering or transformation. In the case of transformations, the returned records must all have the same schema (e.g. containing a subset of fields, if the user is interested in projecting, or entirely new fields which are somehow computed from the original ones). It is also possible to use this API as a “black hole”, meaning that it never returns any record for Da Vinci to persist, but still results in some computation or side-effects taking place (e.g. storing data in a different data structure than RocksDB). These computations should be lightweight, otherwise consumption speed will degrade.
The Large Batch Get Request Split Threshold is an optimization which splits large batch get requests into smaller chunks, enabling the client to leverage more threads and complete the batch get faster.
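To make the shape of the Record Transformer concrete, here is a purely illustrative callback showing projection plus filtering. The real DaVinciRecordTransformer interface has different signatures (consult the javadocs); everything below, including the field names, is hypothetical:

```java
import org.apache.avro.Schema;
import org.apache.avro.generic.GenericData;
import org.apache.avro.generic.GenericRecord;

// Illustrative only: project two fields out of each consumed record, and filter
// out inactive ones. Returning null stands in for "do not persist this record".
GenericRecord transform(GenericRecord value, Schema projectedSchema) {
  if (!"ACTIVE".equals(String.valueOf(value.get("status")))) { // hypothetical field
    return null; // filtered out; nothing is persisted for this key
  }
  GenericRecord projected = new GenericData.Record(projectedSchema);
  projected.put("id", value.get("id"));       // hypothetical fields retained
  projected.put("score", value.get("score")); // all returned records share this schema
  return projected;
}
```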
Consumer API
In addition to the new producer API, Venice also got a new consumer API, which can be used to build change capture use cases. This one is also experimental and should be treated as “preview” functionality, which could change or even be removed in the future.
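As a hedged sketch of the consumption loop (the type and method names follow my reading of the open source changelog client and may differ by version):

```java
import com.linkedin.davinci.consumer.ChangeEvent;
import com.linkedin.davinci.consumer.VeniceChangeCoordinate;
import com.linkedin.davinci.consumer.VeniceChangelogConsumer;
import com.linkedin.venice.pubsub.api.PubSubMessage;
import org.apache.avro.generic.GenericRecord;

// Sketch only: subscribe to all partitions and process change events.
void consumeChanges(VeniceChangelogConsumer<String, GenericRecord> consumer) throws Exception {
  consumer.subscribeAll().get();
  while (!Thread.currentThread().isInterrupted()) {
    for (PubSubMessage<String, ChangeEvent<GenericRecord>, VeniceChangeCoordinate> message
        : consumer.poll(1000)) {
      GenericRecord currentValue = message.getValue().getCurrentValue(); // assumption: null on delete
      System.out.println(message.getKey() + " -> " + currentValue);
    }
  }
}
```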
Note that there is some overlap between this consumer API and the aforementioned Da Vinci Record Transformer (if used in the “black hole” pattern). The difference between the two is that the consumer hides full pushes by automatically jumping over to the new dataset version, while the Da Vinci Record Transformer processes full pushes the same way Da Vinci usually does (i.e. by loading the future version in the background, and only swapping reads to it after it has sufficiently caught up). If either API could do the job, it is recommended to use the Da Vinci Record Transformer rather than the Consumer Client.
Operational Improvements
As a mature system that has been running in production for more than 7 years, and which continues to see significant year-over-year scale increases, a large fraction of development effort goes into continuous improvement, including stability, performance, tech debt reduction, etc. Although it is hard to do justice to every one of those changes, this section attempts to highlight some of the main themes.
Push Job
The Push Job has received a number of investments, detailed below.
TTL Repush
Use cases that needed TTL were historically supported by an “empty push” followed by the hybrid store rewind. This worked but was not very efficient, and the cost was especially visible in scenarios with high streaming write throughput. TTL Repush replaces the previous pattern, offloading more of the work to the grid, where resources are usually cheaper, and leaving less work to be performed on the Venice servers, which typically run on more expensive hardware.
MapReduce to Spark Migration
This is a dependency hygiene initiative, since MapReduce is not as well supported nowadays. It should also make it easier to support additional input types. For now, the Push Job supports both compute engines, but eventually, support for the MapReduce engine will be removed. While the Spark mode is already being used in production and appears to be stable so far, it should be noted that it does not yet achieve performance parity with the MapReduce mode. Performance optimization is still an ongoing effort.
Compression Stats Gathering
The Push Job has a new mode where it performs a compression dry-run on a sample of data for every job, even if compression is disabled for the store being pushed to. The resulting compressibility ratio is then stored in the Push Job Status system store, which can be mined to identify stores offering good compression opportunities.
Server Ingestion
A large amount of work has gone into the server’s ingestion code, to make it more modular, performant, observable and predictable.
Pub Sub Abstraction
In addition to new user-facing producer and consumer APIs, under the hood, Venice also got a revamped integration with its pub sub layer. This enables plugging in other pub sub systems besides Kafka. For now, an alternative pub sub system must support numeric offsets and log compaction, just like Kafka, though those constraints are likely to be loosened in the future.
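To give a feel for the pluggability, here is a hypothetical, heavily simplified shape of a consumer-side adapter; the actual Venice interfaces (under com.linkedin.venice.pubsub) are more detailed than this:

```java
import java.io.Closeable;
import java.util.List;
import java.util.Map;

// Hypothetical sketch, not the real interface: the essential operations a
// pub sub system must support to back Venice today.
interface PubSubConsumerAdapterSketch extends Closeable {
  // Numeric offsets are currently required, matching Kafka's model.
  void subscribe(String topic, int partition, long startingOffset);

  // Poll a batch of raw messages per topic-partition (types simplified here).
  Map<String, List<byte[]>> poll(long timeoutMs);

  // Used for lag computation; the backing system must also support log compaction.
  long getLatestOffset(String topic, int partition);
}
```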
Heartbeats
The servers now emit a heartbeat control message once per minute for each partition they lead. This architectural improvement makes it easier to deal with stores which have low or intermittent streaming write traffic. In particular, the time lag metric can now be relied upon to accurately measure end-to-end write latency in all scenarios.
Workload Isolation
A new server configuration enables a separate thread pool for the leader’s real-time processing work, which is more expensive as it needs to perform more steps (conflict resolution, partial updates). The rest of the partitions are still processed in the original thread pool. Splitting these two distinct workloads provides more predictable ingestion performance in scenarios where a cluster carries both high throughput streaming writes and full pushes.
Ingestion Configuration Deprecations
In a future release, two server configurations will be deprecated:
Ingestion Isolation is the functionality where a child process would be forked to handle all batch ingestion, while the parent process took care of streaming ingestion and read queries. Although this functionality was sufficiently stable to be used in production, it introduced a lot of complexity and maintenance burden. Furthermore, in-depth benchmarking revealed that it did not bring significant performance gains for the server workloads. For these reasons, it is considered to be tech debt, and removing it will improve maintainability. Note that Ingestion Isolation will only be removed for servers; it will still be available for Da Vinci Clients, since the Da Vinci implementation is much simpler, and the applications carrying Da Vinci Clients have much more varied workloads, some of which may still benefit from the isolation provided by a child process.
Amplification Factor is the functionality where a store could be configured such that the partitions subscribed to via the Da Vinci API are each backed by multiple sub-partitions. This was intended to provide more scalable throughput while allowing Da Vinci users to interact with the store as if it had a lower partition count. This functionality was stable for batch-only stores, but not for hybrid stores. Fixing the hybrid store support would have required significant effort, and in practice there did not seem to be use cases that really needed it. For these reasons, we will remove support for the Amplification Factor store configuration in a future release.
Closing Remarks
As promised for stable releases, the Docker images for 0.4.263 have been published to DockerHub.
In the future, we intend to publish stable release details more regularly, so the change log should not be as large going forward.
If you have any questions or comments, please reach out on the community Slack!
Written by Felix GV, Co-author of Venice