Database Design and Modeling with Google Cloud

By Sukumaran
4.9 (7)
Overview of this book

In the age of lightning-speed delivery, customers want everything developed, built, and delivered at high speed and at scale. Knowledge, design, and choice of database are critical in that journey, but there is no one-size-fits-all solution. This book serves as a comprehensive and practical guide for data professionals who want to design and model their databases efficiently. The book begins by taking you through business, technical, and design considerations for databases. Next, it takes you on an immersive deep dive into structured databases for both transactional and analytical real-world use cases, using Cloud SQL, Spanner, and BigQuery. As you progress, you'll explore semi-structured and unstructured database considerations, with practical applications using Firestore, Cloud Storage, and more. You'll also find insights into operational considerations for databases and the database design journey of taking your data to AI with Vertex AI APIs and generative AI examples. By the end of this book, you will be well-versed in designing and modeling data and databases for your applications using Google Cloud.
Table of Contents (18 chapters)

Part 1: Database Model: Business and Technical Design Considerations
Part 2: Structured Data
Part 3: Semi-Structured, Unstructured Data, and NoSQL Design
Part 4: DevOps and Databases
Part 5: Data to AI

Taking your data to AI

Now that we have taken our data on a journey through a sample ETL pipeline, let's take it through one last step: performing ML on the output of the previous step, that is, the tokenized words and their counts.
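For concreteness, you can picture that output as word-count pairs, one per line. The words and counts below are purely illustrative stand-ins, not the actual pipeline output:

    data      42
    cloud     37
    pipeline  17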

In this section, we will create a model that identifies the context of a given list of words using word2vec and cosine similarity. We will use the top 1,000 most frequently occurring words (from the output of the previous step) to predict the context of the tokenized words generated by the pipeline we created in the previous section.
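As a quick refresher, cosine similarity scores two word vectors u and v by the angle between them rather than by their magnitudes, so frequent and rare words can be compared on an equal footing:

    cosine_similarity(u, v) = (u · v) / (‖u‖ ‖v‖)

The result ranges from -1 (opposite directions) to 1 (same direction), and word2vec embeddings place semantically related words in similar directions, which is what makes this score useful for context prediction.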

In this exercise, we will take the data we generated through the pipeline as the input to a context prediction application that we will build in Python. Don't worry: I have kept the code minimal and simple to understand, so we don't spend hours explaining the steps. Open a new Colab notebook from https://colab.research.google.com/. Enter the code snippets in...
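The book's own snippets continue beyond this excerpt, but the general shape of the approach can be sketched as follows. This is a minimal illustration only: the corpus, word counts, and candidate context words are hypothetical stand-ins for the pipeline output, and it uses the gensim library's Word2Vec implementation, which may differ from the notebook's exact code.

    # Minimal sketch of context prediction with word2vec + cosine similarity.
    # All inputs here are hypothetical stand-ins for the pipeline output.
    from gensim.models import Word2Vec

    # Stand-in for the tokenized sentences produced by the ETL pipeline.
    tokenized_sentences = [
        ["database", "design", "cloud", "storage", "pipeline"],
        ["train", "model", "data", "pipeline", "cloud"],
        ["database", "query", "data", "storage"],
    ]

    # Stand-in for the word counts from the previous step.
    word_counts = {"data": 42, "cloud": 37, "database": 20, "pipeline": 17}

    # Keep the most frequent words (the book uses the top 1,000).
    top_words = sorted(word_counts, key=word_counts.get, reverse=True)[:1000]

    # Train a small word2vec model on the tokenized text.
    model = Word2Vec(tokenized_sentences, vector_size=50, window=3,
                     min_count=1, workers=2)

    # Score each candidate context word by its average cosine similarity
    # to the top words; the candidates are drawn from the toy vocabulary.
    candidates = ["database", "pipeline"]

    def context_score(label):
        known = [w for w in top_words if w in model.wv and w != label]
        if label not in model.wv or not known:
            return 0.0
        return sum(model.wv.similarity(label, w) for w in known) / len(known)

    print("Predicted context:", max(candidates, key=context_score))

Here, model.wv.similarity computes exactly the cosine similarity shown earlier between the learned embeddings of two in-vocabulary words; the candidate with the highest average similarity to the frequent words is taken as the predicted context.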
