How we organize a complex Rust codebase

At Datalust we’ve been busy building Flare: a storage engine for our log server, Seq, written in the Rust programming language.

This post is a point-in-time look at how we've approached building this fairly complex piece of software in Rust in 2018. I’d like to share a few nitty-gritty details about the Rust codebase itself that, as a developer, I find really interesting.

Development environment

Writing Rust in CLion

We develop Flare using JetBrains CLion on a mix of Windows, Linux, and OSX. In the run-up to the 2018 Rust edition I haven't found any glaring holes in the tooling. Rust's development experience has really come a long way since the language shipped 1.0 in 2015.

Channel

Rust is distributed through a few channels with varying levels of stability. Flare is currently targeting the most unstable channel, nightly (with edition = 2018 in our Cargo.toml), but we aim to target the stable 2018 edition when it’s available.

The nightly channel lets us develop against language and standard library features before they’re stabilized. The price we pay for early access is that these features may change in breaking ways at any point before they do stabilize. We might need to do a little unstable feature rejigging before we can target the stable channel, but we're being careful to only depend on unstable features that are either likely candidates for stabilization by the 2018 edition, or are easy enough for us to polyfill ourselves.
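Opting into the new edition is a Cargo.toml setting. As a rough sketch (the package details here are hypothetical, and the exact opt-in has shifted between nightlies), the manifest looks something like:

```toml
# On some nightlies, using the edition key ahead of stabilization also
# requires a `cargo-features` opt-in at the top of the manifest.
cargo-features = ["edition"]

[package]
name = "flare"      # hypothetical package name
version = "0.1.0"
edition = "2018"
```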

Here's the current set of unstable library and language features we use along with a one-liner justification pulled from our lib.rs:

// For sanity
#![feature(nll)]

// Easier to create safe synchronization primitives
#![feature(optin_builtin_traits)]

// For converting Rust results into FFI results
#![feature(try_trait)]

// For using the `RangeBounds` trait
#![feature(range_contains)]

// Result <-> Option
#![feature(transpose_result)]

// For using the `SliceIndex` trait
#![feature(slice_index_methods)]

// For micro benchmarking internals
#![cfg_attr(test, feature(test))]

It's been exciting to watch the number of features in that list dwindle as they've stabilized since we started development. In Flare we use a lot of impl Trait in argument position for types like impl AsRef<Path>, impl RangeBounds<UniqueTimestamp>, and impl IntoIterator<Item = Event<impl Read>>. I’m also looking forward to being able to use impl Trait in return position more once existential types (which let us give a name to an impl Trait type and store it in a structure without needing generics) stabilize. I couldn’t live without non-lexical lifetimes now, but I still haven’t quite retrained my muscle memory to take advantage of match ergonomics.
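As a small sketch of what argument-position impl Trait buys us (the function here is hypothetical, not Flare's actual API), callers can pass any path-like value without spelling out a generic parameter:

```rust
use std::path::{Path, PathBuf};

// Hypothetical function in the spirit of the signatures above: `impl AsRef<Path>`
// accepts `&str`, `String`, `PathBuf`, and friends without an explicit generic.
fn open_span(path: impl AsRef<Path>) -> String {
    path.as_ref().display().to_string()
}

fn main() {
    // A borrowed string literal and an owned `PathBuf` both satisfy the bound.
    assert_eq!("spans/0001.span", open_span("spans/0001.span"));
    assert_eq!("spans/0002.span", open_span(PathBuf::from("spans/0002.span")));
}
```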

Overall the 2018 edition is proving to be a very well-rounded experience that's comfortable for day-to-day use, compared to what we shipped in 2015. It just feels like a fuller realization of the same language.

Dependencies

As of writing, Flare depends on the following libraries in the Rust ecosystem:

bincode = "~1"
bit-vec = "~0.4"
byteorder = "~1"
bytes = "~0.4"
cfg-if = "~0.1"
failure = "~0.1"
failure_derive = "~0.1"
fs2 = "~0.4"
lazy_static = "~1"
libc = "~0.2"
memmap = "~0.6"
parking_lot = "~0.5"
rand = "~0.4"
serde = "~1.0.62"
serde_derive = "~1"
snap = "~0.2"
uuid = "~0.6.5"
winapi = "~0.3.5"

You can also find a list of our dependencies along with some more details in our Acknowledgements page in the Seq docs.

Source organization

Rust projects are organized into crates, which form a single distribution unit, like a static library or a DLL. Source within a crate is organized into modules, which are like namespaces but also provide a privacy boundary: not everything in a module has to be publicly exported to other modules.
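A minimal sketch of that privacy boundary (the names here are hypothetical, not Flare's): items default to private, so a module only exposes what it deliberately marks pub:

```rust
mod ingest {
    // Invisible outside `ingest`: an implementation detail.
    struct FrameHeader {
        len: usize,
    }

    // The module's deliberately exported public surface.
    pub struct Writer {
        frames: usize,
    }

    impl Writer {
        pub fn new() -> Self {
            Writer { frames: 0 }
        }

        pub fn append(&mut self, payload: &[u8]) {
            // `FrameHeader` can be used freely inside the module.
            let _header = FrameHeader { len: payload.len() };
            self.frames += 1;
        }

        pub fn frames(&self) -> usize {
            self.frames
        }
    }
}

fn main() {
    // Callers see `Writer`, but `ingest::FrameHeader` is inaccessible here.
    let mut writer = ingest::Writer::new();
    writer.append(b"event");
    assert_eq!(1, writer.frames());
}
```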

We've structured Flare as a single crate, where modules are organized around privacy boundaries for independent features of the engine. The following tree shows the current organization of the flare crate:

src
├── c_abi
│   ├── is_null.rs
│   ├── mod.rs
│   ├── result.rs
│   ├── stream_read.rs
│   └── thread_bound.rs
├── crc32.rs
├── direction.rs
├── endianness.rs
├── error.rs
├── event.rs
├── file_writer.rs
├── ingest_buf
│   ├── frame.rs
│   ├── inspector.rs
│   ├── mod.rs
│   ├── reader.rs
│   └── writer.rs
├── inspection.rs
├── key_value_store
│   ├── frame.rs
│   └── mod.rs
├── lib.rs
├── lock_file
│   ├── inspector.rs
│   └── mod.rs
├── log
│   ├── macros.rs
│   └── mod.rs
├── macros.rs
├── metrics
│   ├── macros.rs
│   └── mod.rs
├── page.rs
├── page_index
│   ├── file.rs
│   ├── hit_map.rs
│   ├── index_expr.rs
│   ├── index_expr_parser.rs
│   ├── index_set.rs
│   └── mod.rs
├── params.rs
├── snappy
│   ├── compress.rs
│   ├── decompress.rs
│   ├── error.rs
│   └── mod.rs
├── stable_span
│   ├── builder.rs
│   ├── frame.rs
│   ├── hit.rs
│   ├── inspector.rs
│   ├── metrics
│   │   ├── mod.rs
│   │   └── vm.rs
│   ├── mod.rs
│   ├── reader.rs
│   ├── seek.rs
│   └── vm.rs
├── std_ext
│   ├── fs.rs
│   ├── io.rs
│   ├── iter.rs
│   ├── mod.rs
│   ├── num.rs
│   ├── ops.rs
│   ├── option.rs
│   ├── range.rs
│   ├── rc.rs
│   ├── result.rs
│   ├── slice.rs
│   ├── sync.rs
│   └── vec.rs
├── store
│   ├── coalesce.rs
│   ├── complete_guard.rs
│   ├── deleter.rs
│   ├── file_set.rs
│   ├── indexer.rs
│   ├── ingest_buf
│   │   ├── mod.rs
│   │   ├── reader.rs
│   │   └── std_ext
│   │       ├── iter.rs
│   │       └── mod.rs
│   ├── live_set.rs
│   ├── metadata
│   │   ├── index_set.rs
│   │   ├── ingest_buf.rs
│   │   ├── mod.rs
│   │   ├── params.rs
│   │   └── stable_span.rs
│   ├── migrations
│   │   ├── mod.rs
│   │   └── stable_span_physical_range.rs
│   ├── mod.rs
│   ├── mvcc
│   │   ├── deferred.rs
│   │   ├── gen.rs
│   │   ├── mod.rs
│   │   ├── model.rs
│   │   └── set.rs
│   ├── nucleus
│   │   ├── maintenance
│   │   │   ├── coalesce_into_buf.rs
│   │   │   ├── coalesce_into_span.rs
│   │   │   ├── mod.rs
│   │   │   ├── set_index.rs
│   │   │   └── truncate.rs
│   │   ├── mod.rs
│   │   ├── operations.rs
│   │   ├── read.rs
│   │   └── write.rs
│   ├── ranges.rs
│   ├── reader.rs
│   ├── stable_span
│   │   ├── mod.rs
│   │   └── reader.rs
│   ├── test_util.rs
│   ├── truncate.rs
│   └── writer.rs
├── test_util.rs
└── typed_file.rs

The most notable root modules are ingest_buf, stable_span and store. That's where the bulk of the storage logic lives.

The ingest_buf module contains a storage format that's optimized for writes. Events are appended out-of-order to ingest buffers, which cover a short, fixed time-range. Reading from ingest buffers means scanning the entire file and sorting its events before yielding them. They’re fast to write, but slower to read.

The stable_span module contains a storage format that's optimized for reads. Stable spans are immutable on disk once they're written. Events are written up-front in order, and page-based indexes are built over the contents. They’re slower to write, but can be very fast to read.

The store module manages the state of a running instance of the storage engine, including its active ingest buffers, stable spans, and indexes. Transactions on the store are implemented through multi-version concurrency control (MVCC).

We facade the root ingest_buf and stable_span modules under store::ingest_buf and store::stable_span. This lets us vary the implementation and API of ingest_buf and stable_span without breakage leaking deeply into the store module. Treating root modules as if they were independent crates can help guide their content and public API to support better maintainability.

Tools and libraries that support development, like benchmarks, fuzzing targets, and diagnostic tools live in separate crates under the same workspace.

Managing unsafety

Rust is sometimes described as two languages: the standard safe Rust, and unsafe Rust. Unsafe Rust can only be written within unsafe blocks. This makes it easy to audit where unsafe operations can be called, but doesn’t necessarily help us understand why the block was necessary in the first place.

We use a macro to declare unsafe blocks instead of the unsafe keyword directly. This macro ensures we include some text alongside the unsafe code that explains how we guarantee its safety, or what invariants callers need to protect:

/**
Allow a block of `unsafe` code with a reason.

The macro will expand to an `unsafe` block.
*/
macro_rules! unsafe_block {
    ($reason:tt => $body:expr) => {{
        #[allow(unsafe_code)]
        let r = unsafe { $body };
        r
    }};
}

We #![deny(unsafe_code)] in the crate root and #[allow(unsafe_code)] in the unsafe_block! so that forgetting to use the macro and just writing unsafe { ... } results in a compile error. This is really just to enforce a convention to make our unsafe code more auditable. In usage, unsafe blocks look something like this:

let mut read_into = unsafe_block!(
    "Our writers don't read from the uninitialized buffer" => {
        buf.bytes_mut()
    }
);

We currently have 34 unsafe blocks excluding FFI function definitions, mostly working with the low-level bytes API or memory mapping files. We have similar macros for unsafe functions and unsafe impl blocks.

Finding a home for cross-cutting concerns

Over time, any codebase will accumulate a pile of standard helpers and utilities with no obvious organization. In Rust, these could take the form of extension traits for types from the standard library, or other very general traits and data structures.

We have a special root std_ext module to contain these cross-cutting concerns that follows the same module structure as std. Mirroring std keeps the extensions organized, rather than letting them fragment throughout the rest of the codebase or become a random grab-bag of arbitrary items that's hard to explore.

As an example, we have a std_ext::option module that contains an OptionMutExt trait:

pub(crate) trait OptionMutExt<T> {
    /** Map and mutate an option in place. */
    fn map_mut<F>(&mut self, f: F) -> &mut Self
    where
        F: FnOnce(&mut T);

    /** Replace an option if it doesn't contain a value. */
    fn or_else_mut<F>(&mut self, f: F) -> &mut Self
    where
        F: FnOnce() -> Option<T>;
}

impl<T> OptionMutExt<T> for Option<T> {
    fn map_mut<F>(&mut self, f: F) -> &mut Self
    where
        F: FnOnce(&mut T),
    {
        match *self {
            Some(ref mut t) => f(t),
            None => (),
        }
        self
    }
    
    fn or_else_mut<F>(&mut self, f: F) -> &mut Self
    where
        F: FnOnce() -> Option<T>,
    {
        if self.is_none() {
            *self = f();
        }
        self
    }
}
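In use, these methods chain naturally. Here's a runnable usage sketch (it repeats the trait from above so it stands alone; the chain itself is our illustration):

```rust
trait OptionMutExt<T> {
    /** Map and mutate an option in place. */
    fn map_mut<F>(&mut self, f: F) -> &mut Self
    where
        F: FnOnce(&mut T);

    /** Replace an option if it doesn't contain a value. */
    fn or_else_mut<F>(&mut self, f: F) -> &mut Self
    where
        F: FnOnce() -> Option<T>;
}

impl<T> OptionMutExt<T> for Option<T> {
    fn map_mut<F>(&mut self, f: F) -> &mut Self
    where
        F: FnOnce(&mut T),
    {
        if let Some(ref mut t) = *self {
            f(t)
        }
        self
    }

    fn or_else_mut<F>(&mut self, f: F) -> &mut Self
    where
        F: FnOnce() -> Option<T>,
    {
        if self.is_none() {
            *self = f();
        }
        self
    }
}

fn main() {
    let mut counter: Option<u32> = None;
    // Fill the empty option, then mutate the contained value in place.
    counter.or_else_mut(|| Some(0)).map_mut(|n| *n += 1);
    assert_eq!(Some(1), counter);
}
```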

Most of the extensions in std_ext are simple ergonomic methods for specific contexts, like map_mut. Some are a bit more fundamental though. As another example, we have a few traits that extend std::io::Read for data sources that offer additional guarantees:

/**
A trait for resetting a stateful reader.

Calling `reset` on a reader will reset any internal state so it
can be read from the beginning again.
*/
pub(crate) trait ResetRead
where
    Self: Read,
{
    /** Reset the reader. */
    fn reset(&mut self) -> Result<(), io::Error>;
}

impl<'a, R: ?Sized> ResetRead for &'a mut R
where
    R: ResetRead,
{
    fn reset(&mut self) -> Result<(), io::Error> {
        (**self).reset()
    }
}

impl<B> ResetRead for Cursor<B>
where
    B: AsRef<[u8]>,
{
    fn reset(&mut self) -> Result<(), io::Error> {
        self.seek(SeekFrom::Start(0))?;
        Ok(())
    }
}

/** A trait for readers that know their size upfront. */
pub(crate) trait SizeRead
where
    Self: Read,
{
    /** Get the number of remaining bytes in the reader. */
    fn remaining(&self) -> usize;
}

impl<'a, R: ?Sized> SizeRead for &'a mut R
where
    R: SizeRead,
{
    fn remaining(&self) -> usize {
        (**self).remaining()
    }
}

impl<B> SizeRead for Cursor<B>
where
    B: AsRef<[u8]>,
{
    fn remaining(&self) -> usize {
        self.get_ref().as_ref().len() - self.position() as usize
    }
}

/** A trait for readers that wrap some inner value that they can be converted into. */
pub trait DetachRead: Read {
    type Detached;

    fn detach(self) -> Self::Detached;
}

impl<B> DetachRead for Cursor<B>
where
    B: AsRef<[u8]>,
{
    type Detached = B;

    fn detach(self) -> Self::Detached {
        self.into_inner()
    }
}

/**
A trait for readers that work on blocks of memory.

This trait is slightly different to `bytes::Buf` in that memory isn't guaranteed
to be contiguous.
*/
pub(crate) trait MemRead {
    /**
    A reader that abstracts getting bytes from a source that
    isn't contained in a single chunk.
    */
    type Reader: DetachRead<Detached = Self> + ResetRead + SizeRead;
    
    /**
Try to get a contiguous slice of bytes representing all data.

    If this method returns `None` then this `MemRead` will need to
    be converted into `Reader` to access the bytes.
    */
    fn bytes(&self) -> Option<&[u8]>;
    
    /**
    Get a reader over all data.

    The reader can be converted back into this same `MemRead`.
    */
    fn into_reader(self) -> Self::Reader;
}

impl<B> MemRead for B
where
    B: AsRef<[u8]>,
{
    type Reader = Cursor<B>;
    
    fn bytes(&self) -> Option<&[u8]> {
        Some(self.as_ref())
    }
    
    fn into_reader(self) -> Self::Reader {
        Cursor::new(self)
    }
}

The ResetRead trait is useful for retrying reads when callers across the FFI boundary supply buffers that are too small.

The DetachRead trait is useful for readers that wrap an inner value they can be safely converted into. Cursor<impl AsRef<[u8]>> is an example of this. We also use it for our compressing and decompressing readers that detach to the inner reader and an 'empty' compressor with internal buffers that can be re-used.

The MemRead trait is useful in places we'd typically use AsRef<[u8]> because it lets us deal with cases where a payload is split across multiple buffers, while potentially optimizing for cases when it's contiguous.

These std::io::Read extensions are heavily used throughout the rest of the codebase.
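The retry pattern that ResetRead enables can be sketched like this (a hedged illustration with a hypothetical read_all_with_retry helper, repeating the Cursor impl from above so it runs standalone; Flare's FFI code differs):

```rust
use std::io::{self, Cursor, Read, Seek, SeekFrom};

trait ResetRead: Read {
    /** Reset the reader so it can be read from the beginning again. */
    fn reset(&mut self) -> Result<(), io::Error>;
}

impl<B: AsRef<[u8]>> ResetRead for Cursor<B> {
    fn reset(&mut self) -> Result<(), io::Error> {
        self.seek(SeekFrom::Start(0))?;
        Ok(())
    }
}

// Hypothetical retry loop: if the buffer turns out to be too small, reset
// the reader and start over with a larger one.
fn read_all_with_retry<R: ResetRead>(reader: &mut R) -> io::Result<Vec<u8>> {
    let mut size = 4;
    loop {
        reader.reset()?;
        let mut buf = vec![0; size];
        let mut filled = 0;
        loop {
            let n = reader.read(&mut buf[filled..])?;
            if n == 0 {
                break;
            }
            filled += n;
            if filled == buf.len() {
                break;
            }
        }
        // A partially filled buffer means we saw end-of-stream: we're done.
        // A full buffer may mean there's more data, so grow and retry.
        if filled < buf.len() {
            buf.truncate(filled);
            return Ok(buf);
        }
        size *= 2;
    }
}

fn main() -> io::Result<()> {
    let mut src = Cursor::new(b"hello flare".to_vec());
    assert_eq!(b"hello flare".to_vec(), read_all_with_retry(&mut src)?);
    Ok(())
}
```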

Maintaining invariants that cover all struct fields

Sometimes a method on a structure needs to maintain some invariant that applies to all of its fields. This can be a maintenance footgun if new fields are added later but not considered in the method.

We try to avoid introducing bugs through newly added fields by using a combination of destructuring patterns and a function-scoped #![deny(unused_variables)], so that new fields result in compile errors anywhere they need to be considered. The following example shows how we implement the ResetRead trait (described in the std_ext section) for a decompressor that holds internal state:

impl<R> ResetRead for Decompress<R> where R: ResetRead {
    fn reset(&mut self) -> Result<(), io::Error> {
        #![deny(unused_variables)]
        let Decompress {
            ref mut src,
            dec: ref _dec,
            ref mut compressed,
            ref mut decompressed,
            ref mut read_head,
        } = *self;
        
        src.reset()?;
        compressed.clear();
        decompressed.clear();
        *read_head = 0;
        
        Ok(())
    }
}

A new field added to Decompress will result in a compile error, reminding us to consider it when resetting. If that field isn't relevant then it needs to be explicitly signaled as such in the destructuring pattern.

What else?

Rust is a language that offers a lot of tools for managing complexity in large codebases. I’ve shared how we use just a few of those tools in Flare. There's plenty more to explore, including interop with .NET, cross-platform packaging, testing, and fuzzing.

Ashley Mannix
