Codeless Time Series Analysis with KNIME

By: KNIME AG, Corey Weisinger, Maarit Widmann, Daniele Tonini
4.8 (10)
Overview of this book

This book takes you on a practical journey, teaching you how to implement solutions for many use cases involving time series analysis techniques. The journey is organized in a crescendo of difficulty: it starts with simple yet effective techniques applied to weather forecasting, then introduces ARIMA and its variations, moves on to machine learning for audio signal classification, trains deep learning architectures to predict glucose levels and electrical energy demand, and ends with an approach to anomaly detection in IoT. No time series analysis book is complete without a solution for stock price prediction, and you'll find this use case at the end of the book, together with a few more demand prediction use cases that rely on the integration of KNIME Analytics Platform with external tools. By the end of this book, you'll have learned about popular time series analysis techniques and algorithms, KNIME Analytics Platform and its time series extension, and how to apply both to common use cases.
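As a taste of what the ARIMA chapters cover, here is a minimal Python sketch of fitting and forecasting with an ARIMA model using statsmodels. The series values and the (1, 1, 1) order are illustrative assumptions; the book itself builds such models with KNIME's codeless nodes rather than with code.

    import pandas as pd
    from statsmodels.tsa.arima.model import ARIMA

    # Hypothetical daily temperature readings, echoing the book's
    # weather-forecasting example.
    data = pd.Series(
        [21.3, 22.1, 21.8, 23.0, 22.6, 23.4, 24.0, 23.7, 24.2, 24.8],
        index=pd.date_range("2023-06-01", periods=10, freq="D"),
    )

    # Fit an ARIMA(p=1, d=1, q=1) model and forecast three days ahead.
    fitted = ARIMA(data, order=(1, 1, 1)).fit()
    print(fitted.forecast(steps=3))

In KNIME, the equivalent workflow wires the time series extension's ARIMA Learner component to an ARIMA Predictor, with the same (p, d, q) choices exposed as settings.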
Table of Contents (20 chapters)

  • Part 1: Time Series Basics and KNIME Analytics Platform (Chapters 1–6)
  • Part 2: Building and Deploying a Forecasting Model (Chapters 7–13)
  • Part 3: Forecasting on Mixed Platforms (Chapters 14–20)

Questions

  1. Which of the following does not contribute to more efficient data processing in a cluster environment? (See the Spark sketch after these questions.)
    1. Connecting to a cluster of several machines
    2. Using the Parquet file format
    3. Executing via Spark
    4. Retrieving data locally
  2. Performing Spark tasks in your workflows requires…
    1. Data as a Spark DataFrame
    2. A dedicated Spark node for the task
    3. A remote cluster
    4. Data in Parquet format
  3. Which of the following often determines the appropriate granularity of the historical data?
    1. The available resources for the computation
    2. The forecast horizon
    3. The forecasting algorithm
    4. The number of predictor columns
  4. Could you apply the same demand prediction model to forecast the trip count tomorrow?
    1. Yes, if the seasonality pattern is the same and there is no trend through the years.
    2. No, the model needs to be retrained as soon as enough historical data becomes available.
    3. Yes, if you increase the size of the seed data.
    4. No, because the model was trained on data with many outliers.
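Questions 1 and 2 test the pattern sketched below in minimal PySpark (the cluster URL and file path are hypothetical): data is read from the columnar Parquet format and aggregated on the cluster via Spark, instead of being retrieved locally.

    from pyspark.sql import SparkSession

    # Connect to a remote cluster (hypothetical URL); executing on the
    # cluster avoids pulling the data down to the local machine.
    spark = (
        SparkSession.builder
        .master("spark://cluster-master:7077")
        .appName("demand-prediction")
        .getOrCreate()
    )

    # Parquet is columnar and compressed, so Spark reads only the
    # columns a query touches. The path is illustrative.
    trips = spark.read.parquet("hdfs:///data/taxi_trips.parquet")

    # The aggregation runs distributed on the cluster as a Spark
    # DataFrame operation.
    daily_counts = trips.groupBy("pickup_date").count()
    daily_counts.show(5)

In KNIME, the same steps are codeless: a Create Spark Context node connects to the cluster, a Parquet to Spark node loads the data, and dedicated Spark nodes perform the transformations.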