
EXPLAINABLE AI

ABSTRACT: Deep learning has contributed substantially to the recent progress of artificial intelligence. In a variety of prediction tasks, deep learning approaches have significantly outperformed classic machine learning methods such as decision trees and support vector machines. Deep neural networks (DNNs), however, are comparatively poor at explaining their inference processes and final results, and both developers and users regard them as black boxes; at this point, DNNs are sometimes described as "alchemy" rather than "science". In many real-world applications such as business decision-making, process optimization, medical diagnosis, and investment recommendation, the explainability and transparency of AI systems are especially important for their users, for the people affected by AI decisions, and for the researchers and developers who build AI solutions. Both the research community and industry have therefore been paying increasing attention to explainable AI (XAI).
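
To make the black-box point concrete, here is a minimal sketch (not from this post) of one common post-hoc explanation technique: input-gradient saliency for a small PyTorch classifier. The model, the input values, and the feature names are illustrative assumptions, not a real application.

```python
# A minimal, illustrative sketch of gradient-based saliency for a DNN.
# Everything here (model size, input, feature names) is assumed for demonstration.

import torch
import torch.nn as nn

torch.manual_seed(0)

# Hypothetical tabular classifier with 4 input features and 2 output classes.
model = nn.Sequential(
    nn.Linear(4, 16),
    nn.ReLU(),
    nn.Linear(16, 2),
)

# One illustrative input record; requires_grad lets us ask "which features matter?"
x = torch.tensor([[0.8, -1.2, 0.3, 2.0]], requires_grad=True)

# Forward pass, then backpropagate the score of the predicted class.
logits = model(x)
pred_class = logits.argmax(dim=1).item()
logits[0, pred_class].backward()

# The gradient magnitude per feature is a crude saliency "explanation":
# features whose small changes would most affect the model's prediction.
saliency = x.grad.abs().squeeze()
for i, s in enumerate(saliency.tolist()):
    print(f"feature_{i}: saliency = {s:.4f}")
```

Such saliency scores are only a first step: they indicate which inputs the network is sensitive to, but they do not by themselves make the inference process transparent, which is why XAI remains an active research area.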