C++ Object Streaming - Part II
In Part I we discussed our overall strategy for object streaming, which needs to be handled carefully to support deserializing objects in-place in memory. We introduced the streaming operator, showed how to stream simple types such as arithmetic types, and covered streaming fixed-size arrays. Here we'll discuss how to stream runtime arrays and types that need custom streaming methods.
Where to Stream Runtime Arrays
The trick with serializing runtime-sized arrays is figuring out where to stream them into the buffer. If it is a fixed-size array embedded directly in the object (e.g. an Array class), then you can simply serialize it right where the Array class is. However, if you have a pointer to an array dynamically allocated somewhere in memory, then we can't simply do the same. That's because we want to support deserializing in-place, and the next location in the buffer may be needed to hold the next member in this object, or perhaps in a parent object.
For example, suppose we are serializing the following class Widget:
template <typename DataType>
class Vector
{
    //...
    int mSize;
    int mCapacity;
    DataType* mArray;
};
class Widget
{
    Vector<int> mVector; //This contains a dynamic array
    int mOther;
};
If we serialize the dynamic array in mVector into the stream buffer directly after the mArray pointer, it will be written to the memory where mOther needs to go! It instead needs to be written not just after the Widget object, but after the top-level object that contains the Widget object.
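To make this concrete, here's a rough sketch of the buffer layout we're aiming for when Widget is the top-level object, assuming a typical 64-bit platform (4-byte int, 8-byte pointer) and that mVector holds three ints. The offset-in-place-of-pointer trick noted below is covered later in this post.

//Hypothetical buffer layout for a serialized Widget (top-level object):
//  bytes  0..3    Vector<int>::mSize
//  bytes  4..7    Vector<int>::mCapacity
//  bytes  8..15   Vector<int>::mArray slot (holds an offset for now; patched to a
//                 real pointer when deserializing in-place)
//  bytes 16..19   Widget::mOther
//  bytes 20..23   tail padding of Widget
//  bytes 24..35   the three ints of the dynamic array, written after the whole Widget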
How do we keep track of where that is? Well, the best way to do it is to keep track of two separate indices in our stream buffer, the current index and the end index:
template <typename DerivedType>
class StreamBase
{
    //...
private:
    int mCurrentIndex = 0;
    int mEndIndex = 0;
};
Then when we stream a top-level (non-nested) object, we set the end index to be just past the object's memory. When we've finished streaming it, we advance the current index to the end. We'll (internally) call this function for every type of streaming we'll be doing (serializing, deserializing, and destreaming):
template <typename DerivedType>
bool StreamBase<DerivedType>::Is_MidStreaming() const
{
    return (mCurrentIndex != mEndIndex);
}

template <typename DerivedType>
template <typename ObjectType, typename StreamCallback>
inline void StreamBase<DerivedType>::Stream(StreamCallback&& aCallback)
{
    //See if this is a top-level object (not deep in recursion)
    bool cIsTopLevel = !Is_MidStreaming();
    //The data must be aligned to its required alignment within the stream
    //This allows us to instantiate in-place during deserialize if needed
    Align<alignof(ObjectType)>();
    //If top-level, set the end index to past the end of the object
    auto cCurrentIndex = mCurrentIndex;
    if(cIsTopLevel)
        mEndIndex = cCurrentIndex + sizeof(ObjectType);
    //(De)serialize or destream
    aCallback();
    //If we're at the end of a top-level object (no longer mid-streaming),
    //advance the current index to the end of the data to resume there
    if(cIsTopLevel)
        mCurrentIndex = mEndIndex;
}
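As a quick usage sketch (BinarySink is a hypothetical sink type, and it assumes Widget has been made streamable, e.g. via the Get_Members() mechanism shown later), streaming two top-level objects back to back looks like this; each one aligns, reserves its own memory via the end index, and resumes after everything it wrote, including any nested dynamic arrays:

//Usage sketch only; BinarySink is a hypothetical sink derived from the stream classes
BinarySink cSink;
Widget cWidget;
int cValue = 7;
cSink << cWidget; //Top-level: end index set past the Widget, nested members stream within it
cSink << cValue;  //Next top-level object begins after everything the Widget wrote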
Embedded Arrays
Now there are two different types of arrays whose size is only known at runtime: those allocated dynamically, and those embedded directly into the object. The main example of an embedded runtime-sized array is a StaticVector, whose capacity is a compile-time template parameter. These are extremely useful for streaming when we know the maximum size of the array is small: we won't have to make many small dynamic allocations when deserializing (non-in-place, to pre-existing objects).
So we'll break the logic up for these into separate methods, first for serializing an embedded array:
template <typename DerivedType>
template <typename ObjectType>
void SinkBase<DerivedType>::Serialize_RuntimeArray_Embedded(const ObjectType* aArray, int aNumObjects)
{
    if constexpr (Is_DirectlySerializable<DerivedType, ObjectType>())
    {
        Align<Types::Align_Of<ObjectType>()>();
        auto cByteArray = reinterpret_cast<const std::byte*>(aArray);
        auto cNumBytes = aNumObjects * sizeof(ObjectType);
        auto cByteSpan = Wrappers::Make_ConstSpan(cByteArray, cNumBytes);
        Serialize_Bytes(cByteSpan);
    }
    else
    {
        for(Types::RangeSize ci = 0; ci < aNumObjects; ++ci)
            *this << aArray[ci];
    }
}
This looks very similar to the fixed-array version, except of course the array size is passed in instead of extracted from the type at compile time. Since the array is embedded in the object itself, we don't need to play any games with mCurrentIndex: we can serialize it directly to the current location. The idea is similar for deserializing and destreaming, so those methods are not shown.
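For reference, the deserializing version might look roughly like the following. This is just a sketch mirroring the serializing code above: SourceBase, Deserialize_Bytes(), Wrappers::Make_Span(), and the Is_DirectlyDeserializable trait are assumed names rather than the actual Part I API.

//Sketch only: assumed names (SourceBase, Deserialize_Bytes, Make_Span, Is_DirectlyDeserializable)
template <typename DerivedType>
template <typename ObjectType>
void SourceBase<DerivedType>::Deserialize_RuntimeArray_Embedded(ObjectType* aArray, int aNumObjects)
{
    if constexpr (Is_DirectlyDeserializable<DerivedType, ObjectType>())
    {
        Align<Types::Align_Of<ObjectType>()>();
        auto cByteArray = reinterpret_cast<std::byte*>(aArray);
        auto cNumBytes = aNumObjects * sizeof(ObjectType);
        auto cByteSpan = Wrappers::Make_Span(cByteArray, cNumBytes);
        Deserialize_Bytes(cByteSpan); //Copy bytes out of the stream buffer into the array
    }
    else
    {
        for(Types::RangeSize ci = 0; ci < aNumObjects; ++ci)
            *this >> aArray[ci];
    }
}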
Dynamic Arrays
To serialize a dynamic array, we will first serialize the offset to where the array data will be written, and then skip ahead to that location and write the array data. Calculating this offset is trickier than it sounds though, because we have to be careful that both the offset and the array data are properly aligned. So let's start by just looking at the first part of the serialization function, where we calculate and serialize that offset:
//Class template parameters removed for brevity
template <typename ObjectType>
void SinkBase::Serialize_RuntimeArray(const ObjectType* aArray, int aNumObjects)
{
    //Find where we'll write the array offset data (adjust current-index for alignment!)
    static constexpr auto sOffsetAlignment = alignof(std::intptr_t);
    auto cOffsetWriteIndex = ::Align<sOffsetAlignment>(mCurrentIndex);
    //Find where we'll write the array data (adjust end-index for alignment!)
    static constexpr auto sArrayAlignment = alignof(ObjectType);
    auto cIsMidStreaming = Is_MidStreaming(); //if false, is after pointer!
    auto cArrayWriteIndex = cIsMidStreaming ? mEndIndex : mEndIndex + static_cast<int>(sizeof(std::intptr_t));
    cArrayWriteIndex = ::Align<sArrayAlignment>(cArrayWriteIndex);
    //Serialize the offset to where we'll write the array (pointer-sized: see below)
    auto cArrayOffset = static_cast<std::intptr_t>(cArrayWriteIndex - cOffsetWriteIndex);
    *this << cArrayOffset;
    //To be continued ...
First note that we are writing the offset as a std::intptr_t. This is because the offset is being written to the memory location where the pointer to the array will deserialize in-place, so it needs to be the size of a pointer.
Now we can't just assume that we'll write this to mCurrentIndex though; we must first take into account that we may need to align the offset itself. The same goes for the array data: we must also correct mEndIndex for its alignment. Once we have the indices where the offset and the array data will be written, we can subtract them to compute the offset and serialize it.
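The free ::Align() index helper used above isn't shown in this post; a minimal sketch of what it presumably does (round an index up to the next multiple of a power-of-two alignment) is below. The member Align<>() used earlier presumably applies the same rounding to mCurrentIndex directly.

//A minimal sketch (assumed implementation) of the free ::Align() index helper
template <std::size_t Alignment>
constexpr int Align(int aIndex)
{
    static_assert((Alignment & (Alignment - 1)) == 0, "Alignment must be a power of two");
    constexpr auto cMask = static_cast<int>(Alignment) - 1;
    return (aIndex + cMask) & ~cMask;
}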
Finally, we need to skip ahead to where we'll write the array, serialize the array, and clean things up:
//Continuing Serialize_RuntimeArray()
    //First save the current index
    auto cPreArrayIndex = mCurrentIndex;
    //Skip to the end for writing the array
    mCurrentIndex = cArrayWriteIndex;
    //Update the new end index as being after this array,
    //so that when the top-level object is finished we resume there,
    //AND in case there are more arrays further nested (to write after this)
    auto cTotalNumBytes = aNumObjects * sizeof(ObjectType);
    mEndIndex = cArrayWriteIndex + cTotalNumBytes;
    //DO IT
    Serialize_RuntimeArray_Embedded(aArray, aNumObjects);
    //If mid-streaming: Go back to original index for other member variables
    //If not: Skip to end index (nested arrays!) to resume there
    mCurrentIndex = cIsMidStreaming ? cPreArrayIndex : mEndIndex;
}
Since we're managing all of the indexing gymnastics in this function, to actually serialize the array data we can simply call the function we wrote for the embedded runtime arrays.
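The in-place deserializing counterpart (used later by DynamicArray::Deserialize_InPlace()) isn't shown in this post, but its core follows directly from the offset trick above: since the pointer-sized offset was written exactly where the pointer member lives, we can turn it into a real pointer into the buffer. The sketch below is only an illustration of that idea, not the author's code; Get_BufferAddress() is an assumed helper, and the index bookkeeping for fixing up nested element data is omitted.

//Illustration only: patch the serialized offset into a real pointer into the buffer
template <typename ObjectType>
void SourceBase::Deserialize_RuntimeArray_InPlace(int aNumObjects)
{
    //The offset field sits at the (aligned) current index, in place of the pointer member
    Align<alignof(std::intptr_t)>();
    std::byte* cOffsetAddress = Get_BufferAddress(mCurrentIndex); //Assumed helper
    auto cOffset = *reinterpret_cast<const std::intptr_t*>(cOffsetAddress);
    //Overwrite the offset with the actual address of the array data further ahead in the buffer
    *reinterpret_cast<ObjectType**>(cOffsetAddress) =
        reinterpret_cast<ObjectType*>(cOffsetAddress + cOffset);
    mCurrentIndex += static_cast<int>(sizeof(std::intptr_t));
    //If ObjectType itself needs fix-ups (nested arrays, custom methods), we'd jump the
    //current/end indices to the array data and recurse over the aNumObjects elements,
    //as in Serialize_RuntimeArray()
}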
Streaming Container Classes - Array
Let's look at some examples of how to stream some user-defined classes. The simplest one is Array:
template <typename DataType, int Size>
class Array
{
public:
    auto Get_Members() const { return std::tie(mData); }
    auto Get_Members() { return std::tie(mData); }
private:
    DataType mData[Size];
};
Because it's a container, we can't simply stream Array directly. For example, if DataType itself contains dynamic arrays or has custom streaming methods, we can't simply byte-copy the Array contents. However, because the class is relatively simple, we can just create Get_Members() functions for it that return a std::tuple of references to the data members (const references for const Arrays).
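The same pattern works for our own classes. For example, the Widget from earlier would only need these two functions to become streamable member-by-member, including its mVector (assuming Vector has custom streaming methods similar to the DynamicArray shown later):

class Widget
{
public:
    auto Get_Members() const { return std::tie(mVector, mOther); }
    auto Get_Members() { return std::tie(mVector, mOther); }
private:
    Vector<int> mVector; //This contains a dynamic array
    int mOther;
};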
When an Array object is serialized, the operator<< function back in Part I will detect the presence of these functions and route the work to the Stream_Members() function:
enum class StreamMode
{
    Sink = 0, Source, InPlaceSource, Destream
};

//Class template parameters removed for brevity
template <StreamMode Mode, typename ObjectType>
void StreamBase::Stream_Members(ObjectType& aObject)
{
    using ByteType = std::conditional_t<std::is_const_v<ObjectType>, const std::byte, std::byte>;
    auto cObjectAddress = reinterpret_cast<ByteType*>(&aObject);
    auto cExpectedAddr = cObjectAddress;
    For_EachInTuple(aObject.Get_Members(), [&](auto& aMemberObject)
    {
        //Add padding between member variables as needed
        *this += reinterpret_cast<ByteType*>(&aMemberObject) - cExpectedAddr;
        using MemberType = std::remove_reference_t<decltype(aMemberObject)>;
        auto& cDerived = *static_cast<DerivedType*>(this);
        //Stream
        if constexpr (Mode == StreamMode::Sink)
            cDerived << aMemberObject;
        else if constexpr (Mode == StreamMode::Source)
            cDerived >> aMemberObject;
        else if constexpr (Mode == StreamMode::Destream)
            cDerived << Meta::nTypeList<MemberType>;
        else
            cDerived >> Meta::nTypeList<MemberType>;
        cExpectedAddr += sizeof(aMemberObject);
    });
    //Add any padding bytes at the end of the object
    *this += sizeof(ObjectType) - (cExpectedAddr - cObjectAddress);
}
This function simply loops over the class data members retrieved by the tuple and streams them individually. Note that by handling the alignment manually, we can cover cases where the member variables have custom alignment (an alignas specifier).
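For example, for a hypothetical type like the one below, Stream_Members() would insert 15 padding bytes between mFlag and mData and another 12 after mData, so the serialized bytes line up exactly with the in-memory layout needed for in-place deserialization:

//Hypothetical example of a type with custom member alignment
struct Particle
{
    auto Get_Members() const { return std::tie(mFlag, mData); }
    auto Get_Members() { return std::tie(mFlag, mData); }
    char mFlag;              //offset 0
    alignas(16) float mData; //offset 16: 15 padding bytes before, 12 after (sizeof == 32)
};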
Finally, here are the helper methods associated with object member streaming. Working with fold expressions is tricky, so these wrapper functions make these kinds of operations easier.
template <typename ObjectType>
constexpr bool Has_ConstGetMembers()
{
    //This must return a tuple of const refs to the members in layout order
    return requires (const ObjectType& aData) { { aData.Get_Members() }; };
}

template <typename Callable, typename... ArgTypes>
constexpr void For_EachArgument(Callable& aCallable, ArgTypes&&... aArgs)
{
    //Invokes the callable once for each argument
    (aCallable(std::forward<ArgTypes>(aArgs)), ...);
}

template <typename Callable, typename... DataTypes>
constexpr void For_EachInTuple(const std::tuple<DataTypes...>& aTuple, Callable&& aCallable)
{
    auto cForEachWrapper = [&]<typename... ArgTypes>(ArgTypes&&... aData)
    {
        For_EachArgument(aCallable, std::forward<ArgTypes>(aData)...);
    };
    std::apply(cForEachWrapper, aTuple);
}
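A quick usage example of the tuple helper, independent of the streaming classes (this little program is an illustration, not part of the streaming system):

//Prints "1 2.5 x " - each tuple element is passed to the lambda in order
#include <iostream>
#include <tuple>

int main()
{
    auto cTuple = std::make_tuple(1, 2.5f, 'x');
    For_EachInTuple(cTuple, [](const auto& aValue) { std::cout << aValue << ' '; });
}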
Streaming Container Classes - Dynamic Array
To stream DynamicArray containers, we need to define custom streaming routines. Here is the abbreviated DynamicArray class definition:
template <typename DataType>
class DynamicArray
{
public:
    template <typename SinkType>
    void Serialize(SinkType& aSink) const;
    //...
private:
    int mCapacity = 0;
    DataType* mArray = nullptr;
};
And the custom streaming methods:
//Class template parameters removed for brevity
template <typename SinkType>
void DynamicArray::Serialize(SinkType& aSink) const
{
    aSink << mCapacity;
    aSink.Serialize_RuntimeArray(mArray, mCapacity);
}

template <typename SourceType>
void DynamicArray::Deserialize(SourceType& aSource)
{
    aSource >> mCapacity;
    Allocate(aSource.Get_MemoryAllocator(), mCapacity); //Allocate memory
    aSource.Deserialize_RuntimeArray(mArray, mCapacity);
}

template <typename SourceType>
void DynamicArray::Deserialize_InPlace(SourceType& aSource)
{
    aSource >> TypeTag<int>{};
    aSource.Deserialize_RuntimeArray_InPlace<DataType>(mCapacity);
}

template <typename DestreamType>
void DynamicArray::Destream(DestreamType& aDestreamer)
{
    aDestreamer << TypeTag<int>{};
    aDestreamer.Destream_RuntimeArray<DataType>(mCapacity);
}
That's it! All we had to do was stream the class members in order, using the appropriate methods. The following helper functions are the remaining methods used to determine how to direct the work during the serializer's operator<<:
template <typename SinkType, typename ObjectType>
constexpr bool Has_Serialize()
{
    return requires (const ObjectType& aData, SinkType& cSink)
    {
        aData.Serialize(cSink);
    };
}

template <typename SinkType, typename ObjectType>
constexpr bool Is_DirectlySerializable()
{
    return !Has_Serialize<SinkType, ObjectType>() &&
        !Has_ConstGetMembers<ObjectType>() &&
        !std::is_pointer_v<ObjectType> &&
        !std::is_bounded_array_v<ObjectType> &&
        std::is_standard_layout_v<ObjectType>;
}
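As a rough sketch of how that routing might look (this is not the actual Part I operator<<; the exact split between StreamBase and SinkBase, and the Serialize_Bytes()/Make_ConstSpan() calls, are assumptions based on the code above), the sink's operator<< could check these traits in order:

//Sketch only: custom Serialize() wins, then Get_Members(), then a raw byte copy
template <typename DerivedType>
template <typename ObjectType>
DerivedType& SinkBase<DerivedType>::operator<<(const ObjectType& aObject)
{
    Stream<ObjectType>([&]()
    {
        if constexpr (Has_Serialize<DerivedType, ObjectType>())
            aObject.Serialize(*static_cast<DerivedType*>(this)); //Custom streaming method
        else if constexpr (Has_ConstGetMembers<ObjectType>())
            Stream_Members<StreamMode::Sink>(aObject);           //Member-wise via Get_Members()
        else
        {
            //Directly serializable: copy the object's bytes into the stream buffer
            static_assert(Is_DirectlySerializable<DerivedType, ObjectType>());
            auto cBytes = reinterpret_cast<const std::byte*>(&aObject);
            Serialize_Bytes(Wrappers::Make_ConstSpan(cBytes, sizeof(ObjectType)));
        }
    });
    return *static_cast<DerivedType*>(this);
}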
Streaming Container Classes - StaticVector
A StaticVector is a container with a compile-time capacity but a runtime size. Objects are placement-new'd into its member byte buffer as they are added to the container. Its abbreviated class definition:
template <typename DataType, int Capacity>
class StaticVector
{
    //...
private:
    static constexpr auto sArraySize = sizeof(DataType) * Capacity;
    int mSize = 0;
    alignas(DataType) std::byte mArray[sArraySize];
};
And the implementation of its (public) custom streaming methods:
//Class template parameters removed for brevity
constexpr int StaticVector::Get_NumStreamPaddingBytes() const
{
    return sizeof(DataType) * (Capacity - mSize);
}

template <typename SinkType>
void StaticVector::Serialize(SinkType& aSink) const
{
    aSink << mSize;
    aSink.Serialize_RuntimeArray_Embedded(data(), mSize);
    aSink += Get_NumStreamPaddingBytes();
}

template <typename SourceType>
void StaticVector::Deserialize(SourceType& aSource)
{
    aSource >> mSize;
    aSource.Deserialize_RuntimeArray_Embedded(data(), mSize);
    aSource += Get_NumStreamPaddingBytes();
}

template <typename SourceType>
void StaticVector::Deserialize_InPlace(SourceType& aSource)
{
    aSource >> TypeTag<int>{};
    aSource.Deserialize_RuntimeArray_InPlace_Embedded<DataType>(mSize);
    aSource += Get_NumStreamPaddingBytes();
}

template <typename DestreamType>
void StaticVector::Destream(DestreamType& aDestreamer)
{
    aDestreamer << TypeTag<int>{};
    aDestreamer.Destream_RuntimeArray_Embedded<DataType>(mSize);
    aDestreamer += Get_NumStreamPaddingBytes();
}
This is very similar to the custom DynamicArray methods, except the array is embedded into the object itself, so we don't have to skip ahead in the stream buffer. We also have to advance the streamers (effectively adding padding bytes) past the end of the data to cover the unused array capacity (e.g. a StaticVector<int, 8> holding 3 elements would add sizeof(int) * 5 = 20 padding bytes). This is done during serialization to make sure we set aside enough memory for deserializing this object in-place.
Conclusion
Serializing runtime-sized arrays is tricky, as we have to write the array data further ahead in the stream where it won't interfere with our future in-place deserializing. However, with these generic methods, it is trivial to write custom streaming methods for user-defined classes that contain them.
If/when C++ gets standardized reflection, this system should be revisited, but that may be a long way off. In the meantime, this streaming system is generic, powerful, extensible, and doesn't rely on evil macro-magic or third-party tools to implement.