EXPLAINABLE AI


ABSTRACT:

Deep learning has contributed substantially to artificial intelligence's recent progress. In a variety of prediction tasks, deep learning approaches have significantly outperformed classic machine learning methods such as decision trees and support vector machines. Deep neural networks (DNNs), however, are comparatively poor at explaining their inference processes and final results, and both developers and consumers treat them as a black box. At this point, DNNs are sometimes described as "alchemy" rather than "science". In many real-world applications, such as business decisions, process optimization, medical diagnosis, and investment recommendation, the explainability and transparency of AI systems are especially important for their users, for the people affected by AI decisions, and for the researchers and developers who build AI solutions. Both the research community and industry have therefore been paying increasing attention to explainability and explainable AI in recent years. This post covers the primary research fields and state-of-the-art approaches, starting with expert systems and traditional machine learning and progressing to the most recent developments in the context of modern deep learning.

HISTORY:

Symbolic reasoning systems such as MYCIN, GUIDON, SOPHIE, and PROTOS, which could represent, reason about, and explain their reasoning for diagnostic, educational, or machine-learning (explanation-based learning) purposes, were researched from the 1970s to the 1990s. MYCIN, a research prototype developed in the early 1970s for identifying bloodstream bacteremia infections, could explain which of its hand-coded rules contributed to a diagnosis in a specific case. Intelligent tutoring system research led to systems like SOPHIE, which could act as an "articulate expert", presenting problem-solving strategies in a way that students could understand, so they would know what to do next. Even though it ultimately relied on the SPICE circuit simulator, SOPHIE could explain the qualitative reasoning behind its electronics troubleshooting. Similarly, GUIDON supplemented MYCIN's domain-level rules with tutorial rules to explain medical diagnosis strategy. Symbolic approaches to machine learning, particularly explanation-based learning systems such as PROTOS, explicitly relied on representations of explanations both to explain their conclusions and to acquire new knowledge.

INTRODUCTION:

Explainable AI is a set of tools and frameworks, natively integrated with a number of Google products and services, that helps you understand and interpret predictions made by your machine learning models. You can use it to debug and improve model performance, and to help others understand how your models behave. You can also use the What-If Tool to visually investigate model behaviour and generate feature attributions for model predictions in AutoML Tables, BigQuery ML, and Vertex AI.
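To make "feature attributions" concrete, here is a minimal sketch using the open-source SHAP library as a stand-in for the managed cloud tooling described above; the dataset, model, and feature names are illustrative assumptions, not part of the Google products mentioned:

```python
# A minimal sketch of per-prediction feature attribution with SHAP.
# The dataset and model here are illustrative assumptions.
import shap
from sklearn.datasets import load_diabetes
from sklearn.ensemble import RandomForestRegressor

data = load_diabetes()
model = RandomForestRegressor(n_estimators=100, random_state=0)
model.fit(data.data, data.target)

# TreeExplainer computes SHAP values: each value estimates how much a
# feature pushed this one prediction away from the model's average output.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(data.data[:1])  # attributions for the first sample

for name, value in zip(data.feature_names, shap_values[0]):
    print(f"{name:>10s}: {value:+.3f}")
```

Each printed value estimates how much one feature pushed this particular prediction above or below the model's average output, which is the kind of per-prediction explanation such visual tools present.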

Explainable Artificial Intelligence (XAI) is an emerging research area within the field of AI. XAI can explain how an AI system arrived at a specific solution (for example, a classification or an object detection) and can also answer other "wh" questions. Traditional AI does not provide this level of explainability. Explainability is vital in critical applications, including the military, health care, law and order, and autonomous vehicles, where confidence and transparency are necessary. A variety of XAI approaches have been developed for this purpose, and they can be surveyed from the viewpoint of different media (text, image, audio, and video). The benefits and drawbacks of these strategies have been examined, along with suggestions for further research.
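As one example of such a technique, the sketch below uses LIME, a popular model-agnostic XAI method, to explain a single classification; the dataset and model choices are illustrative assumptions:

```python
# A minimal sketch of one XAI technique: LIME explains one prediction of an
# otherwise opaque model. Dataset and model are illustrative assumptions.
from lime.lime_tabular import LimeTabularExplainer
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier

data = load_breast_cancer()
model = RandomForestClassifier(random_state=0).fit(data.data, data.target)

explainer = LimeTabularExplainer(
    data.data,
    feature_names=list(data.feature_names),
    class_names=list(data.target_names),
    mode="classification",
)

# LIME perturbs the sample, watches how the model's output changes, and fits
# a simple local surrogate model whose weights serve as the explanation.
explanation = explainer.explain_instance(
    data.data[0], model.predict_proba, num_features=5
)
print(explanation.as_list())
```

Because the surrogate is fitted only around the one sample, the printed feature weights describe the model's local behaviour for that prediction rather than a global rule.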


What is AI?

Artificial Intelligence (AI) is a branch of computer science focused on building machines intelligent enough to function in ways similar to humans. Speech recognition, problem solving, learning, and planning are just a few examples.

What is explainable AI?

Explainable artificial intelligence (XAI) is a set of processes and methods that allows human users to understand and trust the results and output of machine learning algorithms. The term covers describing a model's expected impact and its potential biases.
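To see what "understanding a model's output" can look like in practice, here is a minimal, illustrative sketch (the dataset choice is an assumption) that trains a shallow decision tree, a model that is interpretable by construction, and prints its learned rules as readable text:

```python
# A minimal sketch of an intrinsically interpretable model: a shallow
# decision tree whose learned rules can be read directly.
# The dataset choice is an illustrative assumption.
from sklearn.datasets import load_iris
from sklearn.tree import DecisionTreeClassifier, export_text

data = load_iris()
tree = DecisionTreeClassifier(max_depth=2, random_state=0).fit(data.data, data.target)

# export_text renders the tree's decision rules as human-readable if/else text.
print(export_text(tree, feature_names=list(data.feature_names)))
```

A deep neural network offers no analogous readout of its reasoning, which is why post-hoc XAI techniques are needed for black-box models.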

AI vs. Explainable AI:

A traditional AI model produces a result without revealing how it got there, so even its designers may not be able to say why a particular decision was made. An explainable AI model pairs the result with information about the reasoning behind it, which lets users verify, audit, and trust the decision.


The Basics of Explainable AI:

Despite the growing body of explainability research, a crisp definition of explainable AI has yet to emerge. For the purposes of this blog post, the working definition above captures the broad range of explanation types and consumers, and accepts that explainability techniques can be added to a system rather than necessarily baked in. Academics, business leaders, and government officials have been studying the benefits of explainability and designing algorithms for many scenarios. In the healthcare domain, for example, explainability has been identified as a requirement for AI clinical decision support systems, because the ability to interpret system outputs facilitates shared decision-making between medical professionals and patients and provides much-needed system transparency. In finance, explanations of AI systems are used to comply with required procedures and to give analysts the information they need to audit high-risk decisions.

APPLICATIONS OF EXPLAINABLE AI:

           Explainable AI can be used in the healthcare field.

           It can be used in manufacturing, autonomous vehicles, and loan approvals.

           It can also be used in defense, fraud detection, and resume screening.

ADVANTAGES OF EXPLAINABLE AI:

           It reduces the cost of mistakes.

           It reduces model bias.

           It provides confidence in the code and supports compliance.

           It improves model performance.

           It supports well-informed decision-making.

DISADVANTAGES OF EXPLAINABLE AI:

           The cost is high.

           It takes considerable skill to build a machine that can simulate human intelligence.

           No creativity: a big disadvantage of AI is that it cannot learn to think outside the box.

           It contributes to rising unemployment.

           It can make humans lazy.

           It has no sense of ethics.

 

CONCLUSION:

           We have seen what explainable AI is and why it is so important, as well as some possible approaches for getting closer to that goal.


REFERENCES

Explainable AI: A Brief Survey on History, Research Areas, Approaches and Challenges. https://www.researchgate.net/publication/336131051_Explainable_AI_A_Brief_Survey_on_History_Research_Areas_Approaches_and_Challenges

Vishwakarma Institute of Technology, Pune

Batch_3 Group_1

Under the guidance of Prof. Dipali Joshi.


Group Members:

NAME                     ROLL NO.
SHRUTI DHADI             72
SAKSHI DHATRAK           74
SAURABH DOLHARKAR        77
RITESH GEDAM             79
PRATHAMESH KATKADE       92
DHARTI PATIL             99





