Shotover logo

Shotover Proxy is an open source, high performance L7 data-layer proxy for controlling, managing and modifying the flow of database requests in transit. It can be used to solve many different operational and interoperability challenges for teams where polyglot persistence (many different databases) is common.

The following pages are a good place to learn more:

  • Introduction for more information on what Shotover Proxy is, why it exists and some of the underlying philosophies behind it.
  • Getting started guide for details on how to jump straight in and get up and running.
  • Concepts for a deeper dive into some of the fundamental Shotover concepts.

Deploying Shotover

Shotover can be deployed in a number of ways. Which one you choose will generally depend on the problem you are trying to solve, but they all fall into three categories:

  • As an application sidecar - Shotover is pretty lightweight, so feel free to deploy it as a sidecar to each of your application instances.
  • As a stand alone proxy - If you are building a Service/DBaaS/Common data layer, you can deploy Shotover on standalone hardware and really let it fly.
  • As a sidecar to your database - You can also stick Shotover on the same instance/server as your database is running on, we do it, so we won't judge you.

Roadmap

  • Support relevant xDS APIs (so Shotover can play nicely with service mesh implementations).
  • Support hot-reloads and a dynamic configuration API.
  • Additional sources (DynamoDB and PostgreSQL are good first candidates).
  • Add support for rate limiting, explicit back-pressure mechanisms, etc.
  • Additional distributed algorithm transform primitives (e.g. Raft, 2PC, etc).
  • Additional sink transforms (these generally get implemented alongside sources).
  • Support user-defined / generated sources (e.g. thrift or a gRPC service from a proto definition).
  • Simulation testing once tokio-rs/simulation reaches compatibility with tokio-2.0
  • zero-copy pass-through transforms and in-place query editing (performance).

Name

Shotover refers to the Shotover (Kimi-ākau) River in Otago, New Zealand. Close to Queenstown and eventually flowing into Lake Wakatipu via the Kawarau River, it's famous for white water rafting, bungy jumping, fast rapids and jet boating.

Introduction

Use Cases

The majority of operational problems associated with databases come down to a mismatch in the suitability of your data model/queries for the workload or a mismatch in behaviour of your chosen database for a given workload. This can manifest in many different ways, but commonly shows up as:

  • Some queries are slow for certain keys (customers/tenants etc).
  • Some queries could be implemented more efficiently (queries not quite right).
  • Some tables are too big or inefficient (data model not quite right).
  • Some queries occur far more than others (hot partitions).
  • I have this sinking feeling I should have chosen a different database (hmmm yeah... ).
  • My database slows down over time (wrong indexing scheme, compaction strategy, data no longer fits in memory).
  • My database slows down for a period of time (GC, autovacuum, flushes).
  • I don't understand where my queries are going and how they are performing (poor observability at the driver level).

These challenges are all generally discovered in production environments rather than in testing, so fixing and resolving them quickly can be tricky, often requiring application and/or schema level changes.

Shotover aims to make these challenges simpler by providing a point where data locality, performance and storage characteristics are (somewhat) decoupled from the application, allowing easy, on-the-fly changes to be made to queries and data storage choices without the need to change and redeploy your application.

Longer term, Shotover can also leverage the same capability to make operational tasks easier and to solve a number of other challenges that come with working with multiple databases. Some of these include:

  • Data encryption at the field level, with a common key management scheme between databases.
  • Routing the same data to databases that provide different query capabilities or performance characteristics (e.g. indexing data stored in Redis with Elasticsearch, easy caching of DynamoDB data in Redis).
  • Routing/replicating data across regions for databases that don't support it natively or the functionality is gated behind proprietary "open-core" implementations.
  • A common audit and AuthZ/AuthN point for SOX/PCI/HIPAA compliance.

Design principles / goals

Shotover prioritises the following principles in the order listed:

  1. Security
  2. Durability
  3. Availability
  4. Extensibility
  5. Performance

Shotover provides a set of predefined transforms that can modify, route and control queries from any number of sources to a similar number of sinks. As the user you can construct chains of these transforms to achieve the behaviour required. Each chain can then be attached to a "source" that speaks the native protocol of your chosen database. The transform chain will process each request with access to a unified/simplified representation of a generic query, the original raw query and optionally (for SQL-like protocols) a parsed AST representing the query.

Shotover proxy currently supports the following protocols as sources:

  • Cassandra (CQLv4)
  • Redis (RESP2)

Shotover performance

Shotover compiles down to a single binary and just takes a couple of small YAML files and some optional command line parameters to start up. When running a small topology (5 - 10 transforms, 1 or 2 sources, 200 or so TCP connections) memory consumption is rather small, with a rough working set size between 10 and 20 MB.

Currently benchmarking is limited, but we see around 100k req/s per single logical core for a 1:1 request model. However, due to the way Shotover is implemented, it will largely go as fast as your upstream datastore can go. Each TCP connection is driven by a single Tokio task and by default Shotover will use 4 to 8 OS threads for the bulk of its work (this is user configurable). Occasionally it will spawn additional OS threads for long running non-async code. These are practically unbounded (as defined by Tokio) but use is rare.

Individual transforms can also dramatically impact performance.

Shotover will not try to explicitly pipeline, aggregate or batch requests unless that behaviour is built into the source protocol (e.g. RESP2 supports command pipelining) or implemented by a transform (feel free to write one!). Most client drivers support connection pooling and multiple connections, so feel free to ramp up the number of outbound sockets to get the best throughput. Shotover will happily work with hundreds or thousands of connections due to its threading model.

Performance hasn't been a primary focus during initial development and there are definitely some easy wins to improve things.

Getting Started

Setup

  1. Download & Extract - You can find the latest release of Shotover Proxy at our GitHub release page. Download and extract it from there onto your Linux machine. Alternatively you can build and run from source.
  2. Run - cd into the extracted shotover folder and run ./shotover-proxy. Shotover will launch and display some logs.
  3. Examine Config - Shotover has two configuration files:
    • config/config.yaml - This is used to configure logging and metrics.
    • config/topology.yaml - This defines how Shotover receives, transforms and delivers messages.
  4. Configure topology - Open topology.yaml in your text editor and edit it to define the sources and transforms you need, the comments in the file will direct you to suitable documentation. Alternatively you can refer to the Deployment Scenarios section for full topology.yaml examples.
  5. Rerun - Shotover currently doesn't support hot-reloading config, so first shut it down with CTRL-C. Then rerun ./shotover-proxy for your new config to take effect.
  6. Test - Send a message to Shotover as per your configuration and observe that it is delivered to its configured destination database.

To see Shotover's command line arguments run: ./shotover-proxy --help

Deployment scenarios

Full topology.yaml examples configured for specific use cases can be found in the deployment scenario guides later in this documentation: Redis Clustering, Redis Clustering with cluster aware client and Cassandra Cluster.

Core Concepts

Shotover has a small number of core concepts or components that make up the bulk of its architecture. Once these are understood, quite complex behaviour and environments can be managed with Shotover.

Source

A source is the main component that listens for traffic from your application and decodes it into an internal object that all Shotover transforms can understand. The source will then send the message to a transform chain for processing / routing.

Transform

Transforms are where Shotover does the bulk of its work. A transform is a single unit of operation that does something to the database request that is in flight. This may be logging it, modifying it, sending it to an external system or anything else you can think of. Transforms can either be non-terminating (pass messages on to subsequent transforms in the chain) or terminating (return a response without calling the rest of the chain). Transforms that send messages to external systems are called sinks.

Transform Chain

A transform chain is an ordered list of transforms that a message will pass through. Messages are received from a source. Transform chains can be of arbitrary complexity and a transform can even have its own set of sub chains. Transform chains are defined by the user in Shotover's configuration file and are linked to sources.

Topology

A topology is how you configure Shotover. You define your sources, your transforms in a transform chain and then assign the chain to a source.
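
As a rough sketch (each section is covered in detail under Configuration below), a minimal topology that passes Redis traffic straight through to a single Redis node might look something like this, with illustrative addresses:

---
sources:
  redis_prod:
    Redis:
      listen_addr: "127.0.0.1:6380"
chain_config:
  redis_chain:
    - RedisSinkSingle:
        remote_address: "127.0.0.1:6379"
        connect_timeout_ms: 3000
source_to_chain_mapping:
  redis_prod: redis_chain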

Configuration

Shotover Proxy accepts two separate YAML based configuration files: a configuration file specified by --config-file and a topology file specified by --topology-file.

config.yaml

The configuration file is used to change general behavior of Shotover. Currently it supports two values:

  • main_log_level
  • observability_interface

main_log_level

This is a single string that you can use to configure logging with Shotover. It supports env_filter style configuration and filtering syntax. Log levels and filters can be dynamically changed while Shotover is still running.

observability_interface

Shotover has an observability interface for you to collect Prometheus data from. This value defines the address and port of Shotover's observability interface. It is configured as a string in the format 127.0.0.1:8080 for IPv4 addresses or [2001:db8::1]:8080 for IPv6 addresses. More information is on the observability page.
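
For reference, a minimal config.yaml setting both of these values might look like the following; the filter string and port here are only illustrative:

---
main_log_level: "info,shotover_proxy=info"
observability_interface: "127.0.0.1:9001"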

topology.yaml

The topology file is currently the primary method for defining how Shotover behaves. Within the topology file you can configure sources, transforms and transform chains.

The below documentation shows you what each section does and runs through an entire example of a Shotover configuration file.

sources

The sources top level resource is a map of named sources to their definitions.

The sources section of the configuration file allows you to specify a source or origin for requests. You can have multiple sources and even multiple sources of the same type. Each is named to allow you to easily reference it.

A source will generally represent a database protocol and will accept connections and queries from a compatible driver. For example the Redis source will accept connections from any Redis (RESP2) driver such as redis-py.

---
# The source section
sources:
  
  # The configured name of the source
  my_named_redis_source:
    # The source and any configuration needed for it
    # This will generally include a listen address and port
    Redis:
      listen_addr: "127.0.0.1:6379"
  
  # The configured name of the source
  my_cassandra_prod:

    # The source and any configuration needed for it
    # This will generally include a listen address and port
    Cassandra:
      listen_addr: "127.0.0.1:9042"

chain_config (Chain Configuration)

The chain_config top level resource is a map of named chains to their definitions.

The chain_config section of the configuration file allows you to name and define a transform chain. A transform chain is represented as an array of transforms and their respective configuration. The order in which transforms appear in a chain is the order in which a query will traverse it. So the first transform in the chain will get the request from the source first, and pass it on to the second transform in the chain.

As each transform chain is synchronous, with each transform being able to call the next transform in its chain, the response from the upstream database (or generated by a transform further down the chain) will be passed back up the chain, allowing each transform to handle the response.

The last transform in a chain should be a "terminating" transform. That is, one that passes the query on to the upstream database (e.g. CassandraSinkSingle) or one that returns a response on its own (e.g. DebugReturner).

For example

chain_config:
  example_chain:
    - One
    - Two
    - Three
    - TerminatingTransform

A query from a client will go:

  • Source -> One -> Two -> Three -> TerminatingTransform

The response (returned to the chain by the TerminatingTransform) will follow the reverse path:

  • TerminatingTransform -> Three -> Two -> One -> Source

Under the hood, each transform is able to call its down-chain transform and wait on its response. Each transform has its own set of configuration values, options and behavior. See Transforms for details.

The following example chain_config defines a single chain:

  • redis_chain - Starts with a Tee transform that asynchronously copies every incoming message into a sub chain. The sub chain filters out read queries, coalesces the remaining messages into batches, counts them and sends them to a remote (DR) Redis cluster via a RedisSinkCluster. Meanwhile the original message continues down the main chain through a QueryCounter and on to a terminating RedisSinkCluster which sends it to the primary Redis cluster. Tee behaves much like the tee Linux program.

# This example will replicate all commands to the DR datacenter on a best effort basis
---
chain_config:
  # The name of the first chain
  redis_chain:
    # The first transform in the chain, in this case it's the Tee transform
    - Tee:
        behavior: Ignore
        # The number of message batches that the tee can hold onto in its buffer of messages to send.
        # If they aren't sent quickly enough and the buffer is full then tee will drop new incoming messages.
        buffer_size: 10000
        # The child chain that Tee will asynchronously pass requests to
        chain:
          - QueryTypeFilter:
              filter: Read
          - Coalesce:
              flush_when_buffered_message_count: 2000
          - QueryCounter:
              name: "DR chain"
          - RedisSinkCluster:
              first_contact_points: [ "127.0.0.1:2120", "127.0.0.1:2121", "127.0.0.1:2122", "127.0.0.1:2123", "127.0.0.1:2124", "127.0.0.1:2125" ]
              connect_timeout_ms: 3000
    # The rest of the chain; these transforms are blocking
    - QueryCounter:
        name: "Main chain"
    - RedisSinkCluster:
        first_contact_points: [ "127.0.0.1:2220", "127.0.0.1:2221", "127.0.0.1:2222", "127.0.0.1:2223", "127.0.0.1:2224", "127.0.0.1:2225" ]
        connect_timeout_ms: 3000

source_to_chain_mapping (Chain Mapping)

The source_to_chain_mapping top level resource is a map of source names to chain names. This is the binding that links a defined source to a chain and allows messages/queries generated by the source to traverse the given chain.

The below snippet would complete our entire example:

source_to_chain_mapping:
  my_named_redis_source: redis_chain

This mapping would effectively create a solution that:

  • All Redis requests are first batched and then sent to a remote Redis cluster in another region. This happens asynchronously and if the remote Redis cluster is unavailable it will not block operations to the current cluster.
  • Subsequently, all Redis actions get identified based on command type, counted and provided as a set of metrics.
  • The Redis request is then transformed into a cluster aware request and routed to the correct node.
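
Putting these pieces together, a complete (if trimmed down) topology.yaml assembled from the sections above might look something like this; the main chain here is reduced to its final two transforms to keep the sketch short:

---
sources:
  my_named_redis_source:
    Redis:
      listen_addr: "127.0.0.1:6379"
chain_config:
  redis_chain:
    - QueryCounter:
        name: "Main chain"
    - RedisSinkCluster:
        first_contact_points: [ "127.0.0.1:2220", "127.0.0.1:2221", "127.0.0.1:2222" ]
        connect_timeout_ms: 3000
source_to_chain_mapping:
  my_named_redis_source: redis_chain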

Metrics

This interface will serve Prometheus metrics from /metrics. The following metrics are included by default, others are transform specific.

 Name                           | Labels    | Data type | Description
--------------------------------+-----------+-----------+--------------------------------------------------------------
 shotover_transform_total       | transform | counter   | Counts the number of times the transform is used
 shotover_transform_failures    | transform | counter   | Counts the number of times the transform fails
 shotover_transform_latency     | transform | histogram | The latency for running the transform
 shotover_chain_total           | chain     | counter   | Counts the number of times the chain is used
 shotover_chain_failures        | chain     | counter   | Counts the number of times the chain fails
 shotover_chain_latency         | chain     | histogram | The latency for running the chain
 shotover_available_connections | source    | gauge     | The number of connections currently connected to the source

Metric data types

Counter

A single value, which can only be incremented, not decremented. Starts out with an initial value of zero.

Histogram

Measures the distribution of values for a set of measurements and starts with no initial values.

Gauge

A single value that can increment or decrement over time. Starts out with an initial value of zero.

Log levels and filters

You can configure log levels and filters at runtime via the /filter endpoint. This is done with an HTTP PUT request to /filter with the env_filter string as the request body. For example:

curl -X PUT -d 'info,shotover_proxy=info' http://127.0.0.1:9001/filter

Sources

 Source    | Implementation Status
-----------+-----------------------
 Cassandra | Alpha
 Redis     | Beta

Cassandra

Cassandra:
  # The address to listen from.
  listen_addr: "127.0.0.1:6379"

  # The number of concurrent connections the source will accept.
  connection_limit: 1000

  # Defines the behaviour that occurs once the configured connection limit is reached:
  # * when true: the connection is dropped.
  # * when false: the connection will wait until a connection can be made within the limit.
  hard_connection_limit: false

  # When this field is provided TLS is used when the client connects to Shotover.
  # Removing this field will disable TLS.
  #tls:
  #  # Path to the certificate file, typically named with a .crt extension.
  #  certificate_path: "tls/localhost.crt"
  #  # Path to the private key file, typically named with a .key extension.
  #  private_key_path: "tls/localhost.key"
  #  # Path to the certificate authority file, typically named with a .crt extension.
  #  # When this field is provided client authentication will be enabled.
  #  #certificate_authority_path: "tls/localhost_CA.crt"
 
  # Timeout in seconds after which to terminate an idle connection. This field is optional, if not provided, idle connections will never be terminated.
  # timeout: 60

  # The transport that cassandra communication will occur over.
  # TCP is the only Cassandra protocol conforming transport.
  transport: Tcp
  
  # alternatively:
  #
  # Use the Cassandra protocol over WebSockets using a Shotover compatible driver.
  # transport: WebSocket

Redis

Redis:
  # The address to listen from
  listen_addr: "127.0.0.1:6379"

  # The number of concurrent connections the source will accept.
  connection_limit: 1000

  # Defines the behaviour that occurs once the configured connection limit is reached:
  # * when true: the connection is dropped.
  # * when false: the connection will wait until a connection can be made within the limit.
  hard_connection_limit: false

  # When this field is provided TLS is used when the client connects to Shotover.
  # Removing this field will disable TLS.
  #tls:
  #  # Path to the certificate file, typically named with a .crt extension.
  #  certificate_path: "tls/redis.crt"
  #  # Path to the private key file, typically named with a .key extension.
  #  private_key_path: "tls/redis.key"
  #  # Path to the certificate authority file typically named ca.crt.
  #  # When this field is provided client authentication will be enabled.
  #  #certificate_authority_path: "tls/ca.crt"
    
  # Timeout in seconds after which to terminate an idle connection. This field is optional, if not provided, idle connections will never be terminated.
  # timeout: 60

Transforms

Concepts

Sink

Sink transforms send data out of Shotover to some other service. This is the opposite of Shotover's sources, although sources are not transforms.

Terminating

Every transform chain must have exactly one terminating transform and it must be the final transform of the chain. This means that terminating transforms cannot pass messages onto another transform in the same chain. However some terminating transforms define their own sub-chains to allow further processing of messages.

Debug

Debug transforms can be temporarily used to test how your Shotover configuration performs. Don't forget to remove them when you are finished.

Implementation Status

  • Alpha - Should not be used in production.
  • Beta - Ready for use but is not battle tested.
  • Ready - Ready for use.

Future transforms won't be added to the public API while in alpha, but in these early days we have chosen to publish these alpha transforms to demonstrate the direction we want to take the project.

Transforms

 Transform                  | Terminating | Implementation Status
----------------------------+-------------+-----------------------
 CassandraSinkCluster       | Yes         | Beta
 CassandraSinkSingle        | Yes         | Alpha
 CassandraPeersRewrite      |             | Alpha
 Coalesce                   |             | Alpha
 TuneableConsistencyScatter | Yes         | Alpha
 DebugPrinter               |             | Alpha
 DebugReturner              | Yes         | Alpha
 NullSink                   | Yes         | Beta
 ParallelMap                | Yes         | Alpha
 Protect                    |             | Alpha
 QueryCounter               |             | Alpha
 QueryTypeFilter            |             | Alpha
 RedisCache                 |             | Alpha
 RedisClusterPortsRewrite   |             | Beta
 RedisSinkCluster           | Yes         | Beta
 RedisSinkSingle            | Yes         | Beta
 RedisTimestampTagger       |             | Alpha
 Tee                        |             | Alpha
 RequestThrottling          |             | Alpha

CassandraSinkCluster

This transform will route Cassandra messages to a node within a Cassandra cluster based on:

  • a configured data_center and rack
  • token aware routing

The fact that Shotover is routing to multiple destination nodes will be hidden from the client. Instead Shotover will pretend to be either a single Cassandra node or part of a cluster of Cassandra nodes consisting entirely of Shotover instances.

This is achieved by rewriting system.local and system.peers/system.peers_v2 query results. The system.local rewrite makes Shotover appear to be its own node, while system.peers/system.peers_v2 are rewritten to list the configured Shotover peers as the only other nodes in the cluster.

- CassandraSinkCluster:
    # contact points must be within the configured data_center and rack.
    # If this is not followed, Shotover will still function correctly but Shotover will communicate with a
    # node outside of the specified data_center and rack.
    first_contact_points: ["172.16.1.2:9042", "172.16.1.3:9042"]

    # A list of every Shotover node that will be proxying to the same Cassandra cluster.
    # This field should be identical for all Shotover nodes proxying to the same Cassandra cluster.
    shotover_nodes:
        # Address of the Shotover node.
        # This is usually the same address as the Shotover source that is connected to this sink.
        # But it may be different if you want Shotover to report a different address.
      - address: "127.0.0.1:9042"
        # The data_center this Shotover node will report as and route messages to.
        # For performance reasons, Shotover should be physically located in this data_center.
        data_center: "dc1"
        # The rack this Shotover node will report as and route messages to.
        # For performance reasons, Shotover should be physically located in this rack.
        rack: "rack1"
        # The host_id that Shotover will report as.
        # Does not affect message routing.
        # Make sure to set this to a unique value for each Shotover node, maybe copy one from: https://wasteaguid.info
        host_id: "2dd022d6-2937-4754-89d6-02d2933a8f7a"

      # If you only have a single Shotover instance then you only want a single node.
      # Otherwise if you have multiple Shotover instances then add more nodes e.g.
      #- address: "127.0.0.2:9042"
      #  data_center: "dc1"
      #  rack: "rack2"
      #  host_id: "3c3c4e2d-ba74-4f76-b52e-fb5bcee6a9f4"
      #- address: "127.0.0.3:9042"
      #  data_center: "dc2"
      #  rack: "rack1"
      #  host_id: "fa74d7ec-1223-472b-97de-04a32ccdb70b"

    # Defines which entry in shotover_nodes this Shotover instance will become.
    # This affects:
    # * the shotover_nodes data_center and rack fields are used for routing messages
    #     + Shotover will never route messages outside of the specified data_center
    #     + Shotover will always prefer to route messages to the specified rack
    #         but may route outside of the rack when nodes in the rack are unreachable
    # * which shotover_nodes entry is included in system.local and excluded from system.peers
    local_shotover_host_id: "2dd022d6-2937-4754-89d6-02d2933a8f7a"

    # Number of milliseconds to wait for a connection to be created to a destination cassandra instance.
    # If the timeout is exceeded then connection to another node is attempted
    # If all known nodes have resulted in connection timeouts an error will be returned to the client.
    connect_timeout_ms: 3000

    # When this field is provided TLS is used when connecting to the remote address.
    # Removing this field will disable TLS.
    #tls:
    #  # Path to the certificate authority file, typically named with a .crt extension.
    #  certificate_authority_path: "tls/localhost_CA.crt"
    #  # Path to the certificate file, typically named with a .crt extension.
    #  certificate_path: "tls/localhost.crt"
    #  # Path to the private key file, typically named with a .key extension.
    #  private_key_path: "tls/localhost.key"
    #  # Enable/disable verifying the hostname of the certificate provided by the destination.
    #  #verify_hostname: true

  # Timeout in seconds after which to give up waiting for a response from the destination.
  # This field is optional, if not provided, timeout will never occur.
  # When a timeout occurs the connection to the client is immediately closed.
  # read_timeout: 60

Error handling

If Shotover sends a request to a node and never gets a response (maybe the node went down), Shotover will return a Cassandra Server error to the client. This is because the message may or may not have succeeded, so only the client can attempt to retry, as the retry may involve checking if the original query did in fact complete successfully.

If no nodes are capable of receiving the query then Shotover will return a Cassandra Overloaded error indicating that the client should retry the query at some point.

All other connection errors will be handled internally by Shotover, and all Cassandra errors will be passed directly back to the client.

Metrics

This transform emits a metrics counter named failed_requests with the label transform defined as CassandraSinkCluster and the label chain defined as the name of the chain that this transform is in.

CassandraSinkSingle

This transform will send/receive Cassandra messages to a single Cassandra node. This will just pass the query directly to the remote node. No cluster discovery or routing occurs with this transform.

- CassandraSinkSingle:
    # The IP address and port of the upstream Cassandra node/service.
    remote_address: "127.0.0.1:9042"

    # Number of milliseconds to wait for a connection to be created to the destination cassandra instance.
    # If the timeout is exceeded then an error is returned to the client.
    connect_timeout_ms: 3000

    # When this field is provided TLS is used when connecting to the remote address.
    # Removing this field will disable TLS.
    #tls:
    #  # Path to the certificate authority file, typically named with a .crt extension.
    #  certificate_authority_path: "tls/localhost_CA.crt"
    #  # Path to the certificate file, typically named with a .crt extension.
    #  certificate_path: "tls/localhost.crt"
    #  # Path to the private key file, typically named with a .key extension.
    #  private_key_path: "tls/localhost.key"
    #  # Enable/disable verifying the hostname of the certificate provided by the destination.
    #  #verify_hostname: true

  # Timeout in seconds after which to give up waiting for a response from the destination.
  # This field is optional, if not provided, timeout will never occur.
  # When a timeout occurs the connection to the client is immediately closed.
  # read_timeout: 60

This transform emits a metrics counter named failed_requests with the label transform defined as CassandraSinkSingle and the label chain defined as the name of the chain that this transform is in.

CassandraPeersRewrite

This transform should be used with the CassandraSinkSingle transform. It will write over the ports of the peers returned by queries to the system.peers_v2 table in Cassandra with a user supplied value (typically the port that Shotover is listening on so Cassandra drivers will connect to Shotover instead of the Cassandra nodes themselves).

- CassandraPeersRewrite:
    # rewrite the peer ports to 9043
    port: 9043
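
For example, a chain fragment that rewrites peer ports before handing queries to a single Cassandra node might look like the following (the same pattern is used in the Cassandra Cluster guide later in this documentation; addresses are illustrative):

- CassandraPeersRewrite:
    port: 9043
- CassandraSinkSingle:
    remote_address: "127.0.0.1:9042"
    connect_timeout_ms: 3000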

Coalesce

This transform holds onto messages until some requirement is met and then sends them batched together. Validation will fail if none of the flush_when_ fields are provided, as this would otherwise result in a Coalesce transform that never flushes.

- Coalesce:
    # When this field is provided a flush will occur when the specified number of messages are currently held in the buffer.
    flush_when_buffered_message_count: 2000

    # When this field is provided a flush will occur when the following occurs in sequence:
    # 1. the specified number of milliseconds have passed since the last flush ocurred
    # 2. a new message is received
    flush_when_millis_since_last_flush: 10000

TuneableConsistencyScatter

This transform implements a distributed eventual consistency mechanism between the set of defined sub-chains. This transform will wait for a user configurable number of chains to return an OK response before returning the value up-chain. This follows a similar model as used by Cassandra for its consistency model. Strong consistency can be achieved when W + R > RF. In this case RF is always the number of chains in the route_map. For example, with the three chains defined below (RF = 3), setting write_consistency: 2 and read_consistency: 2 gives W + R = 4 > 3, satisfying the strong consistency condition.

No sharding occurs within this transform and all requests/messages are sent to all routes.

Upon receiving the configured number of responses, the transform will attempt to resolve or unify the response based on metadata about the result. Currently it will try to return the newest response based on a metadata timestamp (last write wins) or it will simply return the largest response if no timestamp information is available.

- TuneableConsistencyScatter:
    # The number of chains to wait for a "write" response on.
    write_consistency: 2
    # The number of chains to wait for a "read" response on.
    read_consistency: 2
    # A map of named chains. All chains will be used in each request.
    route_map:
      instance1:
        - CassandraSinkSingle:
            remote_address: "127.0.0.1:9043"
      instance2:
        - CassandraSinkSingle:
            remote_address: "127.1.0.2:9043"
      instance3:
        - CassandraSinkSingle:
            remote_address: "127.2.0.3:9043"

DebugPrinter

This transform will log the query/message at an info level, then call the down-chain transform.

- DebugPrinter

DebugReturner

This transform will drop any messages it receives and return the supplied response.

- DebugReturner:
    # return a Redis response
    Redis: "42"
   
    # To intentionally fail, use this variant 
    # Fail

NullSink

This transform will drop any messages it receives and return an empty response.

- NullSink

ParallelMap

This transform will send messages in a single batch in parallel across multiple instances of the chain.

If we have a parallelism of 3 then we would have 3 instances of the chain: C1, C2, C3. If the batch then contains messages M1, M2, M3, M4. Then the messages would be sent as follows:

  • M1 would be sent to C1
  • M2 would be sent to C2
  • M3 would be sent to C3
  • M4 would be sent to C1
- ParallelMap:
    # Number of duplicate chains to send messages through.
    parallelism: 1
    # If true then responses will be returned in the same order as the queries were sent out.
    # If false then responses may be returned in any order.
    ordered_results: true
    # The chain that messages are sent through
    chain:
      - QueryCounter:
          name: "DR chain"
      - RedisSinkSingle:
          remote_address: "127.0.0.1:6379"
          connect_timeout_ms: 3000

Protect

This transform will encrypt specific fields before passing them down-chain, and it will also decrypt those same fields from a response. The transform will create a data encryption key on a user defined basis (e.g. per primary key, per value, per table etc).

The data encryption key is encrypted by a key encryption key and persisted alongside the encrypted value (alongside other needed cryptographic material). This transform provides the basis for in-application cryptography with unified key management between datastores. The encrypted value is serialised using bincode and should then be written to a blob field by a down-chain transform.

Fields are protected using ChaCha20-Poly1305. Modification of the field is also detected and raised as an error. DEK protection is dependent on the key manager being used.

Local

- Protect:
    # A key_manager config that configures the protect transform with how to look up keys.
    key_manager:
      Local: 
        kek: Ht8M1nDO/7fay+cft71M2Xy7j30EnLAsA84hSUMCm1k=
        kek_id: ""

    # A mapping of keyspaces, tables and columns to encrypt.
    keyspace_table_columns:
      test_protect_keyspace:
        test_table:
          - col1

AWS

- Protect:
    # A key_manager config that configures the protect transform with how to look up keys.
    key_manager:
      AWSKms:
        endpoint: "http://localhost:5000"
        region: "us-east-1"
        cmk_id: "alias/aws/secretsmanager"
        number_of_bytes: 32
            
    # A mapping of keyspaces, tables and columns to encrypt.
    keyspace_table_columns:
      test_protect_keyspace:
        test_table:
          - col1

Note: Currently the data encryption key ID function is just defined as a static string, this will be replaced by a user defined script shortly.

QueryCounter

This transform counts the queries/messages that pass through it. The counts can be accessed via Shotover's metrics.

- QueryCounter:
    # this name will be logged with the query count
    name: "DR chain"

This transform emits a metrics counter named query_count with the label name defined as the name from the config, in the example it will be DR chain.

QueryTypeFilter

This transform will drop messages that match the specified filter.

- QueryTypeFilter:
    # drop messages that are read
    filter: Read

    # alternatively:
    #
    # drop messages that are write
    # filter: Write
    #
    # drop messages that are read write
    # filter: ReadWrite
    #
    # drop messages that are schema changes
    # filter: SchemaChange
    #
    # drop messages that are pub sub messages
    # filter: PubSubMessage

RedisCache

This transform will attempt to cache values for a given primary key in a Redis hash set. It is primarily implemented as a read-behind cache. It currently expects an SQL based AST to figure out what to cache (e.g. CQL, PGSQL) and updates to the cache and the backing datastore are performed sequentially.

- RedisCache:
    caching_schema:
      test:
        partition_key: [test]
        range_key: [test]
    chain:
      # The chain can contain anything but must end in a Redis sink
      - RedisSinkSingle:
          # The IP address and port of the upstream redis node/service.
          remote_address: "127.0.0.1:6379"
          connect_timeout_ms: 3000

RedisClusterPortsRewrite

This transform should be used with the RedisSinkCluster transform. It will write over the ports of the nodes returned by CLUSTER SLOTS or CLUSTER NODES with a user supplied value (typically the port that Shotover is listening on so cluster aware Redis drivers will direct traffic through Shotover instead of the nodes themselves).

- RedisClusterPortsRewrite:
    # rewrite the ports returned by `CLUSTER SLOTS` and `CLUSTER NODES` to use this port.
    new_port: 6380
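
For example, the Redis Clustering with cluster aware client guide later in this documentation places this transform directly in front of the sink, along these lines (addresses are illustrative):

- RedisClusterPortsRewrite:
    new_port: 6380
- RedisSinkSingle:
    remote_address: "127.0.0.1:6379"
    connect_timeout_ms: 3000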

RedisSinkCluster

This transform is a full featured Redis driver that will connect to a Redis cluster and handle all discovery, sharding and routing operations.

- RedisSinkCluster:
    # A list of IP address and ports of the upstream redis nodes/services.
    first_contact_points: ["127.0.0.1:2220", "127.0.0.1:2221", "127.0.0.1:2222", "127.0.0.1:2223", "127.0.0.1:2224", "127.0.0.1:2225"]

    # By default RedisSinkCluster will attempt to emulate a single non-clustered redis node by completely hiding the fact that redis is a cluster.
    # However, when this field is provided, this cluster hiding is disabled.
    # Instead other nodes in the cluster will only be accessed when performing a command that accesses a slot.
    # All other commands will be passed directly to the direct_connection node.
    # direct_connection: "127.0.0.1:2220"

    # The number of connections in the connection pool for each node.
    # e.g. if connection_count is 4 and there are 4 nodes there will be a total of 16 connections.
    # When this field is not provided connection_count defaults to 1.
    connection_count: 1

    # Number of milliseconds to wait for a connection to be created to a destination redis instance.
    # If the timeout is exceeded then connection to another node is attempted
    # If all known nodes have resulted in connection timeouts an error will be returned to the client.
    connect_timeout_ms: 3000

    # When this field is provided TLS is used when connecting to the remote address.
    # Removing this field will disable TLS.
    #tls:
    #  # Path to the certificate authority file, typically named ca.crt.
    #  certificate_authority_path: "tls/ca.crt"
    #  # Path to the certificate file, typically named with a .crt extension.
    #  certificate_path: "tls/redis.crt"
    #  # Path to the private key file, typically named with a .key extension.
    #  private_key_path: "tls/redis.key"
    #  # Enable/disable verifying the hostname of the certificate provided by the destination.
    #  #verify_hostname: true

Unlike other Redis cluster drivers, this transform does support pipelining. It does however turn each command from the pipeline into a group of requests split between the master Redis nodes that own them, buffering results from the different Redis nodes as needed. This is done sequentially and there is room to make this transform split requests between master nodes in a more concurrent manner.

Latency and throughput will be different from pipelining with a single Redis node, but not by much.

This transform emits a metrics counter named failed_requests with the label transform defined as RedisSinkCluster and the label chain defined as the name of the chain that this transform is in.

Differences to real Redis

On an existing authenticated connection, a failed auth attempt will not "unauthenticate" the user. This behaviour matches Redis 6 but is different to Redis 5.

Completeness

Note: Currently RedisSinkCluster does not support the following functionality:

  • Redis Transactions
  • Scan based operations e.g. SSCAN

RedisSinkSingle

This transform will take a query, serialise it into a RESP2 compatible format and send it to the Redis compatible database at the defined address.

- RedisSinkSingle:
    # The IP address and port of the upstream redis node/service.
    remote_address: "127.0.0.1:6379"

    # Number of milliseconds to wait for a connection to be created to the destination redis instance.
    # If the timeout is exceeded then an error is returned to the client.
    connect_timeout_ms: 3000

    # When this field is provided TLS is used when connecting to the remote address.
    # Removing this field will disable TLS.
    #tls:
    #  # Path to the certificate authority file, typically named ca.crt.
    #  certificate_authority_path: "tls/ca.crt"
    #  # Path to the certificate file, typically named with a .crt extension.
    #  certificate_path: "tls/redis.crt"
    #  # Path to the private key file, typically named with a .key extension.
    #  private_key_path: "tls/redis.key"
    #  # Enable/disable verifying the hostname of the certificate provided by the destination.
    #  #verify_hostname: true

Note: this will just pass the query to the remote node. No cluster discovery or routing occurs with this transform.

This transform emits a metrics counter named failed_requests with the label transform defined as RedisSinkSingle and the label chain defined as the name of the chain that this transform is in.

RedisTimestampTagger

A transform that wraps each Redis command in a Lua script that also fetches the idle time of the key being operated on. This is then used to build a last modified timestamp which is inserted into the response's timestamp metadata. The response from the Lua operation is unwrapped and returned to up-chain transforms looking like a normal Redis response.

This is mainly used in conjunction with the TuneableConsistencyScatter transform to enable a Cassandra style consistency model within Redis.

- RedisTimestampTagger
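
As a rough sketch of that combination (one possible arrangement, with illustrative addresses), the tagger can be placed in each route of a TuneableConsistencyScatter so that every route's response carries a timestamp for last-write-wins resolution:

- TuneableConsistencyScatter:
    write_consistency: 2
    read_consistency: 2
    route_map:
      instance1:
        - RedisTimestampTagger
        - RedisSinkSingle:
            remote_address: "127.0.0.1:6379"
            connect_timeout_ms: 3000
      instance2:
        - RedisTimestampTagger
        - RedisSinkSingle:
            remote_address: "127.0.0.2:6379"
            connect_timeout_ms: 3000
      instance3:
        - RedisTimestampTagger
        - RedisSinkSingle:
            remote_address: "127.0.0.3:6379"
            connect_timeout_ms: 3000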

Tee

This transform sends messages to both the defined sub chain and the remaining down-chain transforms. The response from the down-chain transform is returned back up-chain but various behaviours can be defined by the behaviour field to handle the case when the responses from the sub chain and down-chain do not match.

- Tee:
    # Ignore responses returned by the sub chain
    behavior: Ignore

    # Alternatively:
    #
    # If the responses returned by the sub chain do not equal the responses returned by down-chain then return an error.
    # behavior: FailOnMismatch
    #
    # If the responses returned by the sub chain do not equal the responses returned by down-chain,
    # then the original message is also sent down the SubchainOnMismatch sub chain.
    # This is useful for logging failed messages.
    # behavior: 
    #   SubchainOnMismatch:
    #     - QueryTypeFilter:
    #         filter: Read
    #     - NullSink

    # Timeout for sending to the sub chain in microseconds
    timeout_micros: 1000
    # The number of message batches that the tee can hold onto in its buffer of messages to send.
    # If they aren't sent quickly enough and the buffer is full then tee will drop new incoming messages.
    buffer_size: 10000
    # The sub chain to send duplicate messages through
    chain:
      - QueryTypeFilter:
          filter: Read
      - NullSink

This transform emits a metrics counter named tee_dropped_messages with the label chain defined as Tee.

RequestThrottling

This transform will backpressure requests to Shotover, ensuring that throughput does not exceed the max_requests_per_second value. max_requests_per_second has a minimum allowed value of 50 to ensure that drivers such as Cassandra are able to complete their startup procedure correctly. In Shotover, a "request" is counted as a query/statement to the upstream service. In Cassandra, the queries in a BATCH statement are each counted as individual queries. It uses a Generic Cell Rate Algorithm.

- RequestThrottling:
    max_requests_per_second: 20000

Redis Clustering

The following guide shows you how to configure Shotover Proxy to support transparently proxying Redis cluster unaware clients to a Redis cluster.

General Configuration

First you need to setup a Redis cluster and Shotover.

The easiest way to do this is with this example docker-compose.yaml. You should first inspect the docker-compose.yaml to understand what the cluster looks like and how it's exposed to the network.

Then run:

curl -L https://raw.githubusercontent.com/shotover/shotover-examples/main/redis-cluster-1-many/docker-compose.yaml --output docker-compose.yaml

Alternatively you could spin up a hosted Redis cluster on any cloud provider that offers one. This more accurately reflects a real production use but will take a bit more setup. In that case, reduce the docker-compose.yaml to just the Shotover part:

version: '3.3'
services:
  shotover-0:
    networks:
      cluster_subnet:
        ipv4_address: 172.16.1.9
    image: shotover/shotover-proxy:v0.1.10
    volumes:
      - .:/config
networks:
  cluster_subnet:
    name: cluster_subnet
    driver: bridge
    ipam:
      driver: default
      config:
        - subnet: 172.16.1.0/24
          gateway: 172.16.1.1

Shotover Configuration

---
sources:
  redis_prod:
    # define how shotover listens for incoming connections from our client application (`redis-benchmark`).
    Redis:
      listen_addr: "0.0.0.0:6379"
chain_config:
  redis_chain:
    # configure Shotover to connect to the Redis cluster via our defined contact points
    - RedisSinkCluster:
        first_contact_points:
          - "172.16.1.2:6379"
          - "172.16.1.3:6379"
          - "172.16.1.4:6379"
          - "172.16.1.5:6379"
          - "172.16.1.6:6379"
          - "172.16.1.7:6379"
        connect_timeout_ms: 3000
source_to_chain_mapping:
  # connect our Redis source to our Redis cluster sink (transform).
  redis_prod: redis_chain

Modify an existing topology.yaml or create a new one and place the above example as the file's contents.

If you didn't use the standard docker-compose.yaml setup then you will need to change first_contact_points to point to the Redis instances you used.

You will also need a config.yaml to run Shotover.

curl -L https://raw.githubusercontent.com/shotover/shotover-examples/main/redis-cluster-1-1/config.yaml --output config.yaml

Starting

We can now start the services with:

docker-compose up -d

Testing

With your Redis Cluster and Shotover now up and running, we can test out our client application. Let's start it up!

redis-benchmark -h 172.16.1.9 -t set,get

Running against local containerised Redis instances on a Ryzen 9 3900X we get the following:

user@demo ~$ redis-benchmark -t set,get
====== SET ======                                                     
  100000 requests completed in 0.69 seconds
  50 parallel clients
  3 bytes payload
  keep alive: 1
  host configuration "save": 
  host configuration "appendonly": 
  multi-thread: no

Latency by percentile distribution:
0.000% <= 0.079 milliseconds (cumulative count 2)
50.000% <= 0.215 milliseconds (cumulative count 51352)
75.000% <= 0.231 milliseconds (cumulative count 79466)
87.500% <= 0.247 milliseconds (cumulative count 91677)
93.750% <= 0.255 milliseconds (cumulative count 94319)
96.875% <= 0.271 milliseconds (cumulative count 97011)
98.438% <= 0.303 milliseconds (cumulative count 98471)
99.219% <= 0.495 milliseconds (cumulative count 99222)
99.609% <= 0.615 milliseconds (cumulative count 99613)
99.805% <= 0.719 milliseconds (cumulative count 99806)
99.902% <= 0.791 milliseconds (cumulative count 99908)
99.951% <= 0.919 milliseconds (cumulative count 99959)
99.976% <= 0.967 milliseconds (cumulative count 99976)
99.988% <= 0.991 milliseconds (cumulative count 99992)
99.994% <= 1.007 milliseconds (cumulative count 99995)
99.997% <= 1.015 milliseconds (cumulative count 99998)
99.998% <= 1.023 milliseconds (cumulative count 99999)
99.999% <= 1.031 milliseconds (cumulative count 100000)
100.000% <= 1.031 milliseconds (cumulative count 100000)

Cumulative distribution of latencies:
0.007% <= 0.103 milliseconds (cumulative count 7)
33.204% <= 0.207 milliseconds (cumulative count 33204)
98.471% <= 0.303 milliseconds (cumulative count 98471)
99.044% <= 0.407 milliseconds (cumulative count 99044)
99.236% <= 0.503 milliseconds (cumulative count 99236)
99.571% <= 0.607 milliseconds (cumulative count 99571)
99.793% <= 0.703 milliseconds (cumulative count 99793)
99.926% <= 0.807 milliseconds (cumulative count 99926)
99.949% <= 0.903 milliseconds (cumulative count 99949)
99.995% <= 1.007 milliseconds (cumulative count 99995)
100.000% <= 1.103 milliseconds (cumulative count 100000)

Summary:
  throughput summary: 144092.22 requests per second
  latency summary (msec):
          avg       min       p50       p95       p99       max
        0.222     0.072     0.215     0.263     0.391     1.031
====== GET ======                                                     
  100000 requests completed in 0.69 seconds
  50 parallel clients
  3 bytes payload
  keep alive: 1
  host configuration "save": 
  host configuration "appendonly": 
  multi-thread: no

Latency by percentile distribution:
0.000% <= 0.079 milliseconds (cumulative count 1)
50.000% <= 0.215 milliseconds (cumulative count 64586)
75.000% <= 0.223 milliseconds (cumulative count 77139)
87.500% <= 0.239 milliseconds (cumulative count 90521)
93.750% <= 0.255 milliseconds (cumulative count 94985)
96.875% <= 0.287 milliseconds (cumulative count 97262)
98.438% <= 0.311 milliseconds (cumulative count 98588)
99.219% <= 0.367 milliseconds (cumulative count 99232)
99.609% <= 0.495 milliseconds (cumulative count 99613)
99.805% <= 0.583 milliseconds (cumulative count 99808)
99.902% <= 0.631 milliseconds (cumulative count 99913)
99.951% <= 0.647 milliseconds (cumulative count 99955)
99.976% <= 0.663 milliseconds (cumulative count 99978)
99.988% <= 0.679 milliseconds (cumulative count 99990)
99.994% <= 0.703 milliseconds (cumulative count 99995)
99.997% <= 0.711 milliseconds (cumulative count 99997)
99.998% <= 0.751 milliseconds (cumulative count 99999)
99.999% <= 0.775 milliseconds (cumulative count 100000)
100.000% <= 0.775 milliseconds (cumulative count 100000)

Cumulative distribution of latencies:
0.009% <= 0.103 milliseconds (cumulative count 9)
48.520% <= 0.207 milliseconds (cumulative count 48520)
98.179% <= 0.303 milliseconds (cumulative count 98179)
99.358% <= 0.407 milliseconds (cumulative count 99358)
99.626% <= 0.503 milliseconds (cumulative count 99626)
99.867% <= 0.607 milliseconds (cumulative count 99867)
99.995% <= 0.703 milliseconds (cumulative count 99995)
100.000% <= 0.807 milliseconds (cumulative count 100000)

Summary:
  throughput summary: 143884.89 requests per second
  latency summary (msec):
          avg       min       p50       p95       p99       max
        0.214     0.072     0.215     0.263     0.335     0.775

Redis Clustering with cluster aware client

The following guide shows you how to configure Shotover to support proxying Redis cluster aware clients to a Redis cluster.

Overview

In this example, we will be connecting to a Redis cluster that has the following topology:

  • 172.16.1.2:6379
  • 172.16.1.3:6379
  • 172.16.1.4:6379
  • 172.16.1.5:6379
  • 172.16.1.6:6379
  • 172.16.1.7:6379

Shotover will be deployed as a sidecar to each node in the Redis cluster, listening on 6380. Use the following docker-compose.yaml to run the Redis cluster and Shotover sidecars.

curl -L https://raw.githubusercontent.com/shotover/shotover-examples/main/redis-cluster-1-1/docker-compose.yaml --output docker-compose.yaml

Below we can see an example of a Redis node and its Shotover sidecar. Notice they are running on the same network address (172.16.1.2) and the present directory is being mounted to allow Shotover to access the config and topology files.


redis-node-0:
  image: bitnami/redis-cluster:6.2.12-debian-11-r26
  networks:
    cluster_subnet:
      ipv4_address: 172.16.1.2
  environment:
    - 'ALLOW_EMPTY_PASSWORD=yes'
    - 'REDIS_NODES=redis-node-0 redis-node-1 redis-node-2'

shotover-0:
  restart: always
  depends_on:
    - redis-node-0
  image: shotover/shotover-proxy
  network_mode: "service:redis-node-0"
  volumes:
    - type: bind
      source: $PWD
      target: /config

In this example we will use redis-benchmark with cluster mode enabled as our Redis cluster aware client application.

Configuration

First we will modify our topology.yaml file to have a single Redis source. This will:

  • Define how Shotover listens for incoming connections from our client application (redis-benchmark).
  • Configure Shotover to connect to the Redis node via our defined remote address.
  • Configure Shotover to rewrite the Redis cluster ports to our Shotover port when a cluster aware driver talks to the cluster through Shotover.
  • Connect our Redis Source to our Redis cluster sink (transform).
---
sources:
  redis_prod:
    Redis:
      listen_addr: "0.0.0.0:6380"
chain_config:
  redis_chain:
    - RedisClusterPortsRewrite:
         new_port: 6380
    - RedisSinkSingle:
        remote_address: "0.0.0.0:6379"
        connect_timeout_ms: 3000
source_to_chain_mapping:
  redis_prod: redis_chain

Modify an existing topology.yaml or create a new one and place the above example as the file's contents.

You will also need a config.yaml to run Shotover.

curl -L https://raw.githubusercontent.com/shotover/shotover-examples/main/redis-cluster-1-1/config.yaml --output config.yaml

Starting

We can now start the services with:

docker-compose up -d

Testing

With everything now up and running, we can test out our client application. Let's start it up!

First we will run redis-benchmark directly on our cluster.

redis-benchmark -h 172.16.1.2 -p 6379 -t set,get --cluster 

If everything works correctly you should see the following, along with the benchmark results which have been omitted for brevity. Notice all traffic is going through the Redis port on 6379.

Cluster has 3 master nodes:

Master 0: d5eaf45804215f80cfb661928c1a84e1da7406a9 172.16.1.3:6379
Master 1: d774cd063e430d34a71bceaab851d7744134e22f 172.16.1.2:6379
Master 2: 04b301f1b165d81d5fb86e50312e9cc4898cbcce 172.16.1.4:6379

Now run it again but on the Shotover port this time.

redis-benchmark -h 172.16.1.2 -p 6380 -t set,get --cluster 

You should see the following, notice that all traffic is going through Shotover on 6380 instead of the Redis port of 6379:

Cluster has 3 master nodes:

Master 0: 04b301f1b165d81d5fb86e50312e9cc4898cbcce 172.16.1.4:6380
Master 1: d5eaf45804215f80cfb661928c1a84e1da7406a9 172.16.1.3:6380
Master 2: d774cd063e430d34a71bceaab851d7744134e22f 172.16.1.2:6380

Cassandra Cluster

The following guide shows you how to configure Shotover with support for proxying to a Cassandra Cluster.

Overview

In this example, we will be connecting to a Cassandra cluster that has the following topology:

  • 172.16.1.2:9042
  • 172.16.1.3:9042
  • 172.16.1.4:9042

Rewriting the peer ports

Shotover will be deployed as a sidecar to each node in the Cassandra cluster, listening on 9043. Use the following docker-compose.yaml to run the Cassandra cluster and Shotover sidecars. In this example we want to ensure that all our traffic to Cassandra goes through Shotover.

curl -L https://raw.githubusercontent.com/shotover/shotover-examples/main/cassandra-1-1/docker-compose.yaml --output docker-compose.yaml

Below we can see an example of a Cassandra node and its Shotover sidecar. Notice that they are running on the same network address (172.16.1.2) and the present directory is being mounted to allow Shotover to access the config and topology files.

  cassandra-two:
    image: bitnami/cassandra:4.0
    networks:
      cassandra_subnet:
        ipv4_address: 172.16.1.3
    healthcheck: *healthcheck
    environment: *environment
    
  shotover-one:
    restart: always
    depends_on:
      - cassandra-two
    image: shotover/shotover-proxy
    network_mode: "service:cassandra-two"
    volumes:
      - type: bind
        source: $PWD
        target: /config

In this example we will use cqlsh to connect to our cluster.

Configuration

First we will create our topology.yaml file to have a single Cassandra source. This will:

  • Define how Shotover listens for incoming connections from our client (cqlsh).
  • Configure Shotover to connect to the Cassandra node via our defined remote address.
  • Configure Shotover to rewrite all Cassandra ports with our Shotover port when the client connects.
  • Connect our Cassandra source to our Cassandra sink (transform).
---
sources:
  cassandra_prod:
    Cassandra:
      listen_addr: "0.0.0.0:9043"
chain_config:
  main_chain:
    - CassandraPeersRewrite:
        port: 9043
    - CassandraSinkSingle:
        remote_address: "127.0.0.1:9042"
        connect_timeout_ms: 3000
source_to_chain_mapping:
  cassandra_prod: main_chain

Modify an existing topology.yaml or create a new one and place the above example as the file's contents.

You will also need a config.yaml to run Shotover.

curl -L https://raw.githubusercontent.com/shotover/shotover-examples/main/cassandra-1-1/config.yaml --output config.yaml

Starting

We can now start the services with:

docker-compose up -d

Testing

With everything now up and running, we can test it out with our client. Let's start it up!

First we will run cqlsh directly on our cluster with the command:

cqlsh 172.16.1.2 9042 -u cassandra -p cassandra

and check the system.peers_v2 table with the following query:

SELECT peer, native_port FROM system.peers_v2;

You should see the following results returned:

 peer       | native_port
------------+-------------
 172.16.1.3 |        9042
 172.16.1.4 |        9042

Now run it again but on the Shotover port this time, run:

cqlsh 172.16.1.2 9043 -u cassandra -p cassandra

and use the same query again. You should see the following results returned, notice how the native_port column is now the Shotover port of 9043:

 peer       | native_port
------------+-------------
 172.16.1.3 |        9043
 172.16.1.4 |        9043

If everything has worked, you will be able to use Cassandra, with your connection going through Shotover!

Adding Rate Limiting

The next section of this tutorial will cover adding rate limiting to your Cassandra cluster with Shotover. We will add the RequestThrottling transform to our topology.yaml as shown below. This transform should go at the front of the chain to prevent any unnecessary operations from occurring if a query is going to be rate limited.

---
sources:
  cassandra_prod:
    Cassandra:
      listen_addr: "0.0.0.0:9043"
chain_config:
  main_chain:
    - RequestThrottling:
        max_requests_per_second: 40000
    - CassandraPeersRewrite:
        port: 9043
    - CassandraSinkSingle:
        remote_address: "127.0.0.1:9042"
        connect_timeout_ms: 3000
source_to_chain_mapping:
  cassandra_prod: main_chain

In this example we set max_requests_per_second to 40,000. This will allow a maximum of 40,000 queries per second to go through this Shotover instance, across all connections.

After completing this step you can restart your cluster with docker-compose restart to enable rate limiting.

Contributing to Shotover

This guide contains tips and tricks for working on Shotover itself.

Configuring the Environment

Shotover is written in Rust, so make sure you have a Rust toolchain installed. See the rustup site for a quick way to set up your Rust development environment.

Once you've installed Rust via rustup (the latest stable toolchain is fine), you will need to install a few other tools needed to compile some of Shotover's dependencies.

Shotover requires the following in order to build:

  • cmake
  • gcc
  • g++
  • libssl-dev
  • pkg-config (Linux)

On Ubuntu you can install them via sudo apt-get install cmake gcc g++ libssl-dev pkg-config.

Installing Optional Tools and Libraries

Docker

While not required for building Shotover, installing docker and docker-compose will allow you to run Shotover's integration tests and also build the static libc version of Shotover. This setup script might work for you: curl -sSL https://get.docker.com/ | sudo sh. Do not use the rootless install, as many of our tests rely on the user created bridge networks having their interface exposed to the host, which the rootless install does not support. Instead, add your user to the docker group: usermod -aG docker $USER

Building Shotover

Now you can build Shotover by running cargo build. The executable will then be found in target/debug/shotover-proxy.

Building Shotover (release)

The way you build Shotover will dramatically impact performance. To build Shotover for deployment in production environments, for maximum performance or for any benchmarking use cargo build --release. The resulting executable will be found in target/release/shotover-proxy.

Running the Tests

The Cassandra tests require the Cassandra CPP driver.

Installing Cassandra CPP Driver

Upstream installation information and dependencies for the Cassandra CPP driver can be found here.

However, that is likely unusable because DataStax do not ship packages for modern Ubuntu releases, so we have our own script which will compile, package and install the driver on a modern Ubuntu. To install the driver on Ubuntu, just run the script at shotover-proxy/build/install_ubuntu_packages.sh.

Run Shotover tests

Shotover's test suite must be run via nextest as we rely on its configuration to avoid running incompatible integration tests concurrently. To use nextest:

  1. Install nextest: cargo install cargo-nextest
  2. Then run the tests: cargo nextest run

The tests rely on configuration in tests/test-configs/, so if, for example, you wanted to manually set up the services for the redis-passthrough test, you could run these commands in the shotover-proxy directory:

  • docker-compose -f shotover-proxy/tests/test-configs/redis-passthrough/docker-compose.yaml up
  • cargo run -- --topology-file tests/test-configs/redis-passthrough/topology.yaml

Submitting a PR

Before submitting a PR you can run the following to ensure it will pass CI:

  • Run cargo fmt
  • Run cargo clippy - Ensure you haven't introduced any warnings.
  • Run cargo build --all-targets - Ensure everything still builds and you haven't introduced any warnings.
  • Run cargo nextest run --all-features - All tests pass.