Implementing Splunk 7, Third Edition

Overview of this book

Splunk is the leading platform that fosters an efficient methodology and delivers ways to search, monitor, and analyze growing amounts of big data. This book will allow you to implement new services and utilize them to quickly and efficiently process machine-generated big data. We introduce you to all the new features, improvements, and offerings of Splunk 7. We cover the new modules of Splunk: Splunk Cloud and the Machine Learning Toolkit, which ease data usage. Furthermore, you will learn to use search terms effectively with Boolean and grouping operators. You will learn not only how to modify your searches to make them fast but also how to use wildcards efficiently. Later, you will learn how to use stats to aggregate values, chart to turn data, and timechart to display values over time; you will also work with fields and chart enhancements and learn how to create a data model with faster data model acceleration. Once this is done, you will learn about XML dashboards, working with apps, building advanced dashboards, configuring and extending Splunk, advanced deployments, and more. Finally, we teach you how to use the Machine Learning Toolkit, along with best practices and tips to help you implement Splunk services effectively and efficiently. By the end of this book, you will have learned about the Splunk software as a whole and implemented Splunk services in your own projects.

Sizing indexers

There are a number of factors that affect how many Splunk indexers you will need, but starting with a model system with typical usage levels, the short answer is 100 gigabytes of raw logs per day per indexer. In the vast majority of cases, the disk is the performance bottleneck, except in the case of very slow processors.
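
As a rough worked example (the daily volume here is hypothetical), an environment indexing 800 gigabytes of raw logs per day would, by this rule of thumb, start at around 800 / 100 = 8 indexers, before adding headroom for search load, peak ingestion, and growth.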

The measurements mentioned next assume that you will spread events across your indexers evenly, using the autoLB feature of the Splunk forwarder. We will talk more about this in the Indexer load balancing section.
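
As a minimal sketch of the forwarder side of this (the output group name, host names, and port below are placeholders rather than values from this chapter), load balancing is driven by the server list in the forwarder's outputs.conf:

    # outputs.conf on the forwarding host (all names and ports here are illustrative)
    [tcpout]
    defaultGroup = my_indexers

    [tcpout:my_indexers]
    # The forwarder automatically balances events across the servers in this list
    server = indexer1.example.com:9997, indexer2.example.com:9997
    autoLB = true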

The model system looks like this:

  • 8 gigabytes of RAM. If more memory is available, the operating system will use whatever Splunk does not use for the disk cache.
  • Eight fast physical processors. On a busy indexer, two cores will probably be busy most of the time, handling indexing tasks. It is worth noting the following:
    • More processors won...
