Using Redis: Software Architecture Overview


Redis Overview

Redis is primarily known as a go-to caching solution. It is blazingly fast, since it supports only a limited set of I/O operations and keeps its primary data in RAM. Despite its reputation as “the cache”, Redis is also widely used as a quick state store. For example, if an organization has multiple services (or pods of the same service) that need to convey stateful data to one another, Redis fits well (e.g. a job is currently being processed, a specific data entry is locked and shouldn’t be updated or read, etc.). Unlike most databases, Redis keeps its state almost exclusively in RAM, with certain exceptions. On top of that, Redis offers persistence features, but they aren’t fully reliable (especially RDB), since durability isn’t Redis’ core responsibility. Therefore, any software architecture that uses Redis should rely on durable storage (an RDBMS or a NoSQL database) for critical data.
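As a rough illustration of the “quick state” role, here is a minimal sketch using the redis-py client; the key names (job:123:status, lock:order:42), the worker identifier, and the timeouts are assumptions made up for the example, not part of the original text.

```python
import redis

# Assumed local Redis instance; host/port are illustrative.
r = redis.Redis(host="localhost", port=6379, decode_responses=True)

# Share processing state between services/pods:
# mark a job as "in progress" so other workers can see it.
r.set("job:123:status", "processing", ex=300)  # expires after 5 minutes

# A simple advisory lock: SET with NX succeeds only if the key
# does not exist yet, so only one worker acquires the lock.
acquired = r.set("lock:order:42", "worker-7", nx=True, ex=30)
if acquired:
    try:
        pass  # safely update the locked entry here
    finally:
        r.delete("lock:order:42")  # release the lock
else:
    print("Entry is locked by another worker; skipping.")
```

The expiration on the lock key is what keeps this state “quick”: if a worker crashes, the lock disappears on its own instead of blocking everyone else.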

You have probably heard that Redis is single-threaded. This is only partially true: Redis can use background threads and processes for tasks outside its core responsibility (the user-facing API, all those GET, SET, and similar commands). For example, the persistence mentioned above is handled in the background (RDB snapshots are written by a forked child process). This gives a very convenient level of simplicity: commands are executed one at a time, so you worry far less about race conditions.
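For instance, because the command loop executes one command at a time, an individual command such as INCR is atomic, and concurrent clients can share a counter without any extra locking. A small illustrative sketch (the key name is an assumption for the example):

```python
import redis

r = redis.Redis(host="localhost", port=6379)

# INCR is executed atomically by the single command loop,
# so many clients can call this concurrently without losing updates.
page_views = r.incr("page:home:views")
print(f"Home page viewed {page_views} times")
```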

Redis Software Architecture Role

Conventionally, there are several roles in which Redis is applicable. All of them take advantage of its simple design and fast operations.

Redis as a cache solution

A cache lets an application return the eventual result of an operation without redoing the processing. A common implementation relies on the idea that identical inputs should produce (almost) identical outputs. For example, if a user requests a list of products in a specific category, another user who requests the same category_id will likely receive exactly the same output. Even though this is the simplest and most common application of Redis, a software architect still has to decide where exactly the cache should sit. It can be placed between the outside network and the application, or, for example, right in front of the database if a particular query is known to be slow. Cache entries can have a TTL, or they can be deleted manually or automatically according to a cache invalidation strategy.
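A minimal cache-aside sketch of this idea, again with redis-py; the query function, key format, and TTL are hypothetical placeholders rather than anything prescribed by the text.

```python
import json
import redis

r = redis.Redis(host="localhost", port=6379, decode_responses=True)

def fetch_products_from_db(category_id):
    # Placeholder for the real (and presumably slow) database query.
    return [{"id": 1, "name": "example product"}]

def get_products(category_id, ttl=60):
    key = f"products:category:{category_id}"   # hypothetical key format
    cached = r.get(key)
    if cached is not None:
        return json.loads(cached)               # cache hit: skip the database
    products = fetch_products_from_db(category_id)
    r.set(key, json.dumps(products), ex=ttl)    # cache miss: store with a TTL
    return products
```

Here the TTL doubles as a crude invalidation strategy: stale entries simply expire, while an explicit DELETE on the key can be used when the underlying data is known to have changed.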