Practical Explainable AI Using Python
Artificial Intelligence Model Explanations Using Python-Based Libraries, Extensions, and Frameworks

Format: Paperback, 344 pages
Published: United States, 1 December 2021

Learn the ins and outs of decisions, biases, and reliability of AI algorithms and how to make sense of their predictions. This book explores so-called black-box models to boost the adaptability, interpretability, and explainability of the decisions made by AI algorithms, using Python XAI libraries, TensorFlow 2.0+, Keras, and custom frameworks built with Python wrappers.



You'll begin with an introduction to model explainability and interpretability basics, ethical considerations, and biases in predictions generated by AI models. Next, you'll look at methods and systems for interpreting linear, non-linear, and time-series models used in AI. The book also covers topics ranging from interpreting models to understanding how an AI algorithm makes its decisions.

Further, you will learn about explainability and interpretability for the most complex ensemble models using frameworks such as LIME, SHAP, Skater, and ELI5. Moving forward, you will be introduced to model explainability for unstructured data, classification problems, and natural language processing-related tasks. Additionally, the book looks at counterfactual explanations for AI models. Practical Explainable AI Using Python shines a light on deep learning models, rule-based expert systems, and computer vision tasks using various XAI frameworks.
What You'll Learn
Review the different ways of making an AI model interpretable and explainable
Examine bias and good ethical practices in AI models
Quantify, visualize, and estimate the reliability of AI models
Design frameworks to unbox black-box models
Assess the fairness of AI models
Understand the building blocks of trust in AI models
Increase the level of AI adoption
Who This Book Is For
AI engineers, data scientists, and software developers involved in driving AI projects and AI products.

Chapter 1: Introduction to Model Explainability and Interpretability


Chapter Goal: This chapter explains what model explainability and interpretability are and how to work with them using Python.

No. of pages: 30-40


Chapter 2: AI Ethics, Biasness and Reliability

Chapter Goal: This chapter covers different frameworks and XAI Python libraries used to control bias, apply the principles of reliability, and maintain ethics while generating predictions.

No of pages: 30-40


Chapter 3: Model Explainability for Linear Models Using XAI Components

Chapter Goal: This chapter explains the use of LIME, Skater, SHAP, and other libraries to explain the decisions made by linear models in supervised learning tasks on structured data.

No. of pages: 30-40
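
As a toy sketch of the idea this chapter builds on (the model and data below are invented for illustration and are not the book's code): for a linear model, SHAP values have an exact closed form, so they can be computed directly without any approximation.

```python
import numpy as np

# For a linear model f(x) = w.x + b, the SHAP value of feature i is
# phi_i = w_i * (x_i - mean_i), where mean_i is the feature's average
# over the background data. The attributions sum to f(x) - f(mean).

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 3))          # background (training) data
w = np.array([2.0, -1.0, 0.5])         # linear model coefficients
b = 0.3

def predict(x):
    return x @ w + b

x = np.array([1.0, 2.0, -1.0])         # instance to explain
phi = w * (x - X.mean(axis=0))         # exact SHAP values for a linear model

# Completeness check: attributions account for the gap between this
# prediction and the average prediction.
assert np.isclose(phi.sum(), predict(x) - predict(X.mean(axis=0)))
print(phi)
```

Libraries such as SHAP generalize this closed-form idea to models where no such formula exists.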

Chapter 4: Model Explainability for Non-Linear Models using XAI Components

Chapter Goal: This chapter explains the use of LIME, Skater, SHAP, and other libraries to explain the decisions made by non-linear models, such as tree-based models, in supervised learning tasks on structured data.

No of pages: 30-40
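
A minimal sketch of what SHAP-style explainers approximate for non-linear models such as trees (the tiny hand-written "tree" below is my own illustration, not the book's): exact Shapley values computed by brute-force enumeration of feature coalitions, with absent features replaced by a baseline value.

```python
from itertools import combinations
from math import factorial

def shapley_values(f, x, baseline):
    """Exact Shapley values for black-box f by enumerating coalitions."""
    n = len(x)
    phi = [0.0] * n
    for i in range(n):
        others = [j for j in range(n) if j != i]
        for k in range(n):
            for S in combinations(others, k):
                # Shapley kernel weight for a coalition of size |S|.
                weight = factorial(len(S)) * factorial(n - len(S) - 1) / factorial(n)
                with_i = [x[j] if j in S or j == i else baseline[j] for j in range(n)]
                without_i = [x[j] if j in S else baseline[j] for j in range(n)]
                phi[i] += weight * (f(with_i) - f(without_i))
    return phi

# A tiny tree-like model: a hand-written decision rule.
def tree_model(v):
    return 10.0 if v[0] > 0 and v[1] > 0 else 0.0

phi = shapley_values(tree_model, x=[1.0, 1.0, 5.0], baseline=[-1.0, -1.0, 0.0])
# Efficiency property: attributions sum to f(x) - f(baseline);
# the ignored third feature gets zero credit.
assert abs(sum(phi) - 10.0) < 1e-9
print(phi)
```

This enumeration is exponential in the number of features; TreeSHAP and kernel-based explainers exist precisely to avoid that cost.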


Chapter 5: Model Explainability for Ensemble Models Using XAI Components

Chapter Goal: This chapter explains the use of LIME, Skater, SHAP, and other libraries to explain the decisions made by ensemble models, such as tree-based ensemble models, in supervised learning tasks on structured data.

No of pages: 30-40


Chapter 6: Model Explainability for Time Series Models using XAI Components

Chapter Goal: This chapter explains the use of LIME, Skater, SHAP, and other libraries to explain the decisions made by time-series models on structured data, covering both univariate and multivariate time-series models.

No of pages: 30-40


Chapter 7: Model Explainability for Natural Language Processing using XAI Components

Chapter Goal: This chapter explains the use of LIME, Skater, SHAP, and other libraries to explain the decisions made by models for text classification, summarization, and sentiment classification.

No of pages: 30-40
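
A hedged sketch of LIME's core idea for text (the toy sentiment scorer below is invented for illustration): drop words at random, score each perturbed sentence with the black-box model, and fit a linear surrogate whose weights rank each word's contribution.

```python
import numpy as np

rng = np.random.default_rng(1)
words = ["service", "was", "really", "terrible", "today"]

def black_box_score(present):
    # Toy sentiment model standing in for a real classifier:
    # the word "terrible" drives the score down.
    return 0.9 - 0.8 * present[3]

n = len(words)
Z = rng.integers(0, 2, size=(200, n)).astype(float)   # word on/off masks
y = np.array([black_box_score(z) for z in Z])         # black-box scores

# Ordinary least-squares surrogate: which words move the score?
coef, *_ = np.linalg.lstsq(np.c_[Z, np.ones(len(Z))], y, rcond=None)
for word, weight in zip(words, coef[:n]):
    print(f"{word:10s} {weight:+.3f}")
```

The real LIME library adds locality weighting and sparsity on top of this perturb-and-fit loop.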

Chapter 8: AI Model Fairness Using What-If Scenario

Chapter Goal: This chapter explains the use of Google's What-If Tool (WIT) and custom libraries to explain the fairness of an AI model.

No of pages: 30-40
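
One fairness check the What-If Tool lets you explore interactively is demographic parity, which compares positive-outcome rates across groups. A minimal hand-rolled version (with made-up predictions and group labels) looks like this:

```python
# Invented example data: binary predictions and a protected attribute.
predictions = [1, 0, 1, 1, 0, 1, 0, 0, 1, 0]
groups      = ["a", "a", "a", "a", "a", "b", "b", "b", "b", "b"]

def positive_rate(group):
    # Share of positive predictions within one group.
    picked = [p for p, g in zip(predictions, groups) if g == group]
    return sum(picked) / len(picked)

# Demographic parity gap: 0 means both groups receive positive
# outcomes at the same rate.
parity_gap = abs(positive_rate("a") - positive_rate("b"))
print(f"group a: {positive_rate('a'):.2f}, group b: {positive_rate('b'):.2f}, gap: {parity_gap:.2f}")
```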


Chapter 9: Model Explainability for Deep Neural Network Models

Chapter Goal: This chapter explains the use of Python libraries to interpret neural network and deep learning models, such as LSTM and CNN models, using techniques such as SmoothGrad and DeepLIFT.

No of pages: 30-40
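
A rough sketch of SmoothGrad, assuming a toy differentiable function in place of a real network: average the input gradient over several noisy copies of the input to obtain a less noisy saliency estimate.

```python
import numpy as np

def f(x):
    # Stand-in for a network's scalar score on input x.
    return np.sin(x[0]) + x[1] ** 2

def grad(x, eps=1e-5):
    # Central-difference numeric gradient (a real framework would use autodiff).
    g = np.zeros_like(x)
    for i in range(len(x)):
        d = np.zeros_like(x)
        d[i] = eps
        g[i] = (f(x + d) - f(x - d)) / (2 * eps)
    return g

def smoothgrad(x, sigma=0.1, n=50, seed=0):
    # SmoothGrad: mean gradient over n Gaussian-perturbed copies of x.
    rng = np.random.default_rng(seed)
    return np.mean([grad(x + rng.normal(0, sigma, size=x.shape)) for _ in range(n)], axis=0)

x = np.array([0.5, 1.0])
print("plain gradient:", grad(x))
print("smoothed      :", smoothgrad(x))
```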


Chapter 10: Counterfactual Explanations for XAI models

Chapter Goal: This chapter aims at providing counterfactual explanations for the predictions of individual instances. The "event" is the predicted outcome for an instance; the "causes" are the particular feature values of that instance that were input to the model and "caused" a certain prediction.

No of pages: 30-40
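
A hedged sketch of a counterfactual search (the loan-style scoring rule and step size below are invented): starting from a rejected instance, greedily nudge one feature at a time until the model's decision flips; the changed features answer "what would need to be different?".

```python
def approve(income, debt):
    # Toy loan model: approve when income minus twice the debt is high enough.
    return income - 2 * debt >= 50

def counterfactual(income, debt, step=5, max_iters=100):
    """Greedily change one feature per step until the decision flips."""
    for _ in range(max_iters):
        if approve(income, debt):
            return income, debt
        # Pick the single-feature change that gets closest to approval.
        if (income + step) - 2 * debt >= income - 2 * (debt - step):
            income += step
        else:
            debt -= step
    return None

# The original instance (income=60, debt=20) is rejected; the search
# finds the nearest approved variant along these steps.
print(counterfactual(income=60, debt=20))
```

Dedicated counterfactual libraries add constraints such as plausibility and sparsity on top of this basic search.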


Chapter 11: Contrastive Explanation for Machine Learning

Chapter Goal: In this chapter we will use foil trees: a model-agnostic approach that extracts explanations by finding the set of rules that causes an instance to be predicted as the actual outcome (the fact) rather than the alternative (the foil).

No of pages: 20-30

Chapter 12: Model-Agnostic Explanations By Identifying Prediction Invariance

Chapter Goal: In this chapter we will use anchor-LIME (a-LIME), a model-agnostic technique that produces high-precision rule-based explanations with very clear coverage boundaries.

No of pages: 20-30
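
A toy sketch of the intuition behind such rule-based explanations (the model and rule below are mine, not the book's): a rule is a good anchor when, for perturbed inputs that still satisfy it, the model's prediction almost never changes; the rate at which it holds is the rule's precision.

```python
import random

random.seed(0)

def model(age, income):
    # Toy black-box classifier.
    return "approve" if age > 30 and income > 40 else "reject"

def anchor_precision(n=1000):
    """Estimate the precision of the candidate rule 'age > 30'.

    Hold the rule fixed (sample only ages that satisfy it), perturb the
    unconstrained feature freely, and count how often the prediction
    stays 'approve'.
    """
    same = 0
    for _ in range(n):
        age = random.uniform(31, 80)      # samples satisfying the rule
        income = random.uniform(0, 100)   # unconstrained feature
        same += model(age, income) == "approve"
    return same / n

print(f"precision of rule 'age > 30' alone: {anchor_precision():.2f}")
```

A low precision like this (income still matters) is the signal that the rule needs more predicates before it qualifies as an anchor.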


Chapter 13: Model Explainability for Rule-Based Expert Systems

Chapter Goal: In this chapter we will use model-agnostic, rule-based explanation techniques such as a-LIME to explain the decisions made by a rule-based expert system.

No of pages: 20-30


Chapter 14: Model Explainability for Computer Vision

Chapter Goal: In this chapter we will use Python libraries to explain computer vision models for tasks such as object detection and image classification.

No of pages: 20-30



Product Details
EAN: 9781484271575
ISBN: 1484271572
Publisher
Other Information: Illustrated
Dimensions: 25.4 x 17.8 x 1.9 centimeters (0.63 kg)

Table of Contents

Chapter 1: Introduction to Model Explainability and Interpretability
Chapter 2: AI Ethics, Biasness and Reliability
Chapter 3: Model Explainability for Linear Models Using XAI Components
Chapter 4: Model Explainability for Non-Linear Models using XAI Components
Chapter 5: Model Explainability for Ensemble Models Using XAI Components
Chapter 6: Model Explainability for Time Series Models using XAI Components
Chapter 7: Model Explainability for Natural Language Processing using XAI Components
Chapter 8: AI Model Fairness Using What-If Scenario
Chapter 9: Model Explainability for Deep Neural Network Models
Chapter 10: Counterfactual Explanations for XAI models
Chapter 11: Contrastive Explanation for Machine Learning
Chapter 12: Model-Agnostic Explanations By Identifying Prediction Invariance
Chapter 13: Model Explainability for Rule based Expert System
Chapter 14: Model Explainability for Computer Vision

About the Author

Pradeepta Mishra is the Head of AI (Leni) at L&T Infotech (LTI), leading a large group of data scientists, computational linguistics experts, and machine learning and deep learning experts in building a next-generation product, "Leni", the world's first virtual data scientist. He was named one of "India's Top 40 Under 40 Data Scientists" by Analytics India Magazine. He is the author of four books; his first book has been recommended by the HSLS Center at the University of Pittsburgh, PA, USA, and his latest book, PyTorch Recipes, was published by Apress. He delivered a keynote session at the Global Data Science Conference 2018 in the USA, and his TEDx talk "Can Machines Think?" is available on the official TEDx YouTube channel. He has delivered 200+ tech talks on data science, ML, DL, NLP, and AI at various universities, meetups, technical institutions, and community forums.

Reviews

“Practical explainable AI using Python combines textbook and cookbook elements. It provides explanations of concepts along with practical examples and exercises. … this book offers a comprehensive foundation that will remain relevant for some time. However, readers should supplement their knowledge with the latest research in order to stay up to date in this dynamic field.” (Gulustan Dogan, Computing Reviews, August 21, 2023)

“While the book presents just fundamental aspects, I find this to be a great advantage. Indeed, even the layperson to AI/ML can use this work: the author starts with the most basic definitions and models, and then provides software examples … . This way a very broad readership is possible, since more advanced parts of the chapters will be interesting even for specialists in AI/ML who would like to increase their expertise in the title topic.” (Piotr Cholda, Computing Reviews, April 17, 2023)
