Redis Stack for Application Modernization

By: Luigi Fugaro, Mirko Ortensi
Overview of this book

In modern applications, efficiency in both operational and analytical aspects is paramount, demanding predictable performance across varied workloads. This book introduces you to Redis Stack, an extension of Redis, and guides you through its broad data modeling capabilities. With practical examples of real-time queries and searches, you’ll explore Redis Stack’s new approach to providing a rich data modeling experience, all within the same database server. You’ll learn how to model and search your data in the JSON and hash data types and work with features such as vector similarity search, which adds semantic search capabilities to your applications so you can search for similar texts, images, or audio files. The book also shows you how to use probabilistic Bloom filters to efficiently resolve recurrent big data problems. As you uncover the strengths of Redis Stack as a data platform, you’ll explore use cases for managing database events and leveraging its stream processing features. Finally, you’ll see how Redis Stack seamlessly integrates into microservices architectures, completing the picture. By the end of this book, you’ll be equipped with best practices for administering and managing the server, ensuring scalability, high availability, data integrity, stored functions, and more.
Table of Contents (18 chapters)
Part 1: Introduction to Redis Stack
Part 2: Data Modeling
Part 3: From Development to Production

Compaction rules for Time Series

In Redis Stack for Time Series, a compaction rule is a mechanism used to downsample data points and reduce data storage requirements over time. As time-series data grows and accumulates, it often becomes less important to store high-resolution data for older timestamps. Compaction rules help to maintain a balance between data storage and resolution requirements.

A compaction rule is a user-defined policy that dictates how data points should be aggregated over a given time period (e.g., every minute, hour, or day) and retained in a downsampled series. The rule can specify the aggregation method, such as average, minimum, maximum, sum, or count, among others described in the Aggregation framework section of this chapter.
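
As a sketch, such a rule is created with the TS.CREATERULE command of the RedisTimeSeries module; the key names below are placeholders rather than examples taken from the book:

    TS.CREATERULE sourceKey destKey AGGREGATION <avg|sum|min|max|count|...> <bucketDurationInMs>

The destination key must already exist as a time series. Once the rule is in place, each bucket of bucketDurationInMs milliseconds from the source series is aggregated with the chosen function and written to the destination as a single sample.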

For example, you can set up a compaction rule to downsample data every 5 minutes using the average aggregation function. This rule populates a downsampled time series key in which each data point represents the average value of the source samples that fall within the corresponding 5-minute window.
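
For illustration, a minimal command sequence implementing this policy might look as follows; temperature:raw and temperature:avg:5m are hypothetical key names, and the bucket duration is expressed in milliseconds (300000 = 5 minutes):

    TS.CREATE temperature:raw
    TS.CREATE temperature:avg:5m
    TS.CREATERULE temperature:raw temperature:avg:5m AGGREGATION avg 300000

From then on, every sample added to temperature:raw with TS.ADD is folded into the current 5-minute bucket, and when a bucket closes, its average is written to temperature:avg:5m as one data point.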
