Data Processing with Optimus

By: Dr. Argenis Leon, Luis Aguirre Contreras
Overview of this book

Optimus is a Python library that works as a unified API for cleaning, processing, and merging data. It can handle both small and big data, on your local laptop or on remote clusters, using CPUs or GPUs. The book begins by covering the internals of Optimus and how it works in tandem with existing technologies to serve your data processing needs. You'll then learn how to use Optimus to load and save data from text formats such as CSV and JSON files, to explore binary files such as Excel, and to process columnar data with Parquet, Avro, and ORC. Next, you'll get to grips with the profiler and its data types - a unique feature of the Optimus DataFrame that assists with data quality. You'll see how to use the plots available in Optimus, such as histograms, frequency charts, and scatter and box plots, and understand how Optimus lets you connect to libraries such as Plotly and Altair. You'll also delve into advanced applications such as feature engineering, machine learning, cross-validation, and natural language processing functions, and explore the advancements in Optimus. Finally, you'll learn how to create data cleaning and transformation functions and add a hypothetical new data processing engine with Optimus. By the end of this book, you'll be able to improve your data science workflow with Optimus easily.
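The typical workflow looks roughly like the following Python sketch. This is only an illustrative example, not the book's own code; it assumes the pyoptimus package is installed and uses hypothetical file and column names:

    from optimus import Optimus

    # Start Optimus on the pandas engine; other engines such as "dask",
    # "cudf", "dask_cudf", "spark", or "vaex" can be selected the same way.
    op = Optimus("pandas")

    # Load a CSV file into an Optimus dataframe (hypothetical path).
    df = op.load.csv("data/sales.csv")

    # Run the profiler to inspect inferred data types and data quality.
    print(df.profile(cols="*"))

    # Apply a simple column transformation and save the result.
    df = df.cols.lower("customer_name")
    df.save.csv("data/sales_clean.csv")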
Table of Contents (16 chapters)
  • Section 1: Getting Started with Optimus
  • Section 2: Optimus – Transform and Rollout
  • Section 3: Advanced Features of Optimus

Limitations

We are working hard to create a unified API on top of the most popular dataframe libraries. However, these technologies are at different stages of development. Many issues have been flying under the radar, but here we want to highlight some of the most important ones.

Right now, the main limitations are as follows:

  • Creating a UDF for string processing is not possible in cuDF and Dask-cuDF, since string UDFs are not supported yet: https://github.com/rapidsai/cudf/issues/7301.
  • cuDF, Dask-cuDF, and Vaex database connections are handled using Dask, which needs to load the data as pandas dataframes first and then convert it into the format required by the selected engine.
  • Regex support in cuDF is limited. For example, it still can't handle lowercase and uppercase characters at the same time: https://github.com/rapidsai/cudf/issues/5217. A possible workaround is sketched after this list.
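As a rough illustration of the last point, one workaround is to normalize case before matching, since case-insensitive regex matching is not yet handled by cuDF's regex engine. This is only a sketch and assumes a CUDA-capable GPU with the RAPIDS cudf package installed:

    import cudf

    s = cudf.Series(["Optimus", "OPTIMUS", "optimus", "bumblebee"])

    # A case-insensitive pattern such as "(?i)optimus" is not supported by
    # cuDF's regex engine (see the issue linked above), so lowercase the
    # strings first and match against a lowercase pattern instead.
    matches = s.str.lower().str.contains("optimus")
    print(matches.to_pandas())  # True, True, True, False

The same idea applies when running Optimus on the cuDF or Dask-cuDF engines: lowercase the column first, then match, at the cost of an extra pass over the string data.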
