Responsible AI in the Enterprise

By Adnan Masood, Heather Dawe

Overview of this book

Responsible AI in the Enterprise is a comprehensive guide to implementing ethical, transparent, and compliant AI systems in an organization. With a focus on understanding key concepts of machine learning models, this book equips you with techniques and algorithms to tackle complex issues such as bias, fairness, and model governance. Throughout the book, you’ll gain an understanding of FairLearn and InterpretML, along with the Google What-If Tool, ML Fairness Gym, IBM AI Fairness 360, and Aequitas. You’ll uncover various aspects of responsible AI, including model interpretability, monitoring and management of model drift, and compliance recommendations. You’ll gain practical insights into using AI governance tools to ensure fairness, bias mitigation, explainability, and privacy compliance in an enterprise setting. Additionally, you’ll explore interpretability toolkits and fairness measures offered by major cloud AI providers such as IBM, Amazon, Google, and Microsoft, while discovering how to use FairLearn for fairness assessment and bias mitigation. You’ll also learn to build explainable models using global and local feature summaries, local surrogate models, Shapley values, anchors, and counterfactual explanations. By the end of this book, you’ll be well-equipped with tools and techniques to create transparent and accountable machine learning models.
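The overview above mentions Fairlearn for fairness assessment. As a minimal sketch only (the labels, predictions, and sensitive attribute below are made-up toy data, not an example taken from the book), a disaggregated fairness report built with Fairlearn's MetricFrame could look like this:

```python
# Minimal sketch of a Fairlearn fairness assessment.
# The data below is hypothetical; in practice y_true, y_pred, and the
# sensitive feature would come from your own model and dataset.
import numpy as np
from sklearn.metrics import accuracy_score
from fairlearn.metrics import MetricFrame, selection_rate

# Toy labels, predictions, and a binary sensitive attribute (group A vs. B).
y_true = np.array([1, 0, 1, 1, 0, 1, 0, 0])
y_pred = np.array([1, 0, 0, 1, 0, 1, 1, 0])
group  = np.array(["A", "A", "A", "B", "B", "B", "B", "A"])

# MetricFrame disaggregates each metric by the sensitive feature.
mf = MetricFrame(
    metrics={"accuracy": accuracy_score, "selection_rate": selection_rate},
    y_true=y_true,
    y_pred=y_pred,
    sensitive_features=group,
)

print(mf.by_group)      # per-group accuracy and selection rate
print(mf.difference())  # largest between-group gap for each metric
```

Large between-group gaps in metrics such as selection rate are the kind of signal that would then feed into the bias-mitigation techniques the book goes on to cover.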
Table of Contents (16 chapters)
Part 1: Bigot in the Machine – A Primer
Part 2: Enterprise Risk Observability Model Governance
Part 3: Explainable AI in Action

Summary

This chapter provided an overview of the importance of developing appropriate governance frameworks for AI. The automation of bias in AI is a critical concern that requires urgent attention; without appropriate governance frameworks, we risk exacerbating this problem and perpetuating societal inequalities. We outlined key terms such as explainability, interpretability, fairness, explicability, safety, trustworthiness, and ethics, each of which plays an important role in AI governance. Developing effective governance frameworks requires a comprehensive understanding of these concepts and their interplay.

We also explored the issue of automating bias and how network effects can exacerbate it. The chapter highlighted the need for explainability and offered a critique of “black-box apologetics,” the argument that AI models need not be interpretable. Ultimately, the chapter made a strong case for the importance of AI governance and for ensuring that AI is developed and deployed in an ethical and responsible manner. This is crucial for building trust in AI and ensuring that its impacts align with our societal goals and values.

The next chapter is upon us, like a towel in the hands of a galactic hitchhiker, always ready for the next adventure.
