Explainable AI refers to methods and techniques in the application of artificial intelligence such that the results of the solution can be understood by human experts. It contrasts with the concept of the "black box" in machine learning and enables transparency.

via Alejandro Barredo Arrieta et al.

The need for transparency can be seen in the growing interest of researchers. In the chart above, one can see how the number of publications with XAI as a keyword has risen over the past five years.

Before we go further, there are two things we should get out of the way: why XAI is different, and what its biggest outcome is.

Interpretability vs Explainability

A model is said to be interpretable when its outcomes make sense to the user, whereas explainability deals with the model's inner workings. If the former answers the 'what', the latter answers the 'why'.

Fairness And Transparency

Another important objective of explainable AI is exposing the biases in the data, and XAI methods are forecast to be the road that leads to fairer machine learning practice.

So, how can one implement XAI?

There are a few fundamental ways in which this can be done:

- Text explanation
- Visual explanation
- Explanations by example

Of these three, visual explanation is the most popular. However, there is still a dearth of XAI implementations because not every algorithm is designed with explainability and transparency of results in mind. The objective is often to decrease loss, improve accuracy and stay consistent at scale.

What Makes It Difficult

Though having explainability as a criterion sounds good, there are a few hurdles that developers and practitioners have to deal with.

Performance tradeoff:

The first step towards making things more explainable is to make the models simpler.
This makes each stage of the process easier to trace. However, complex models are more flexible and can scale in real time; recommendation engines, for example, cannot be expected to operate under fixed constraints. So, to maintain consistent model performance at higher dimensionality, the complexity has to be embraced.

Establishing the metrics:

Now let's say we have somehow figured out the tools to make models more explainable. But who gets to say that a model's actions are explainable? What metrics does one need to stick to for overall acceptance? A metric that works for a developer might not work for a GDPR compliance manager. Though domain knowledge would clear the initial hiccups, the question of who gets to pick still looms large.

Why Data Fusion Is A Big Deal

Data fusion techniques were initially developed to exploit the overlap in data from various sources for faster learning of a task. They merge heterogeneous information to improve the performance of ML models. Japanese researchers insist that there is still a lack of active research at the intersection of explainable AI and data fusion techniques.

However, they speculate on a few approaches that might eventually lead to beneficial outcomes. For instance, in big data fusion, local models submit their split of the data sources to worker nodes. This information is then processed via the popular Map and Reduce steps. In other words, the complexity in the information is split, mapped and later reduced; a fine example of information fusion.

Though data fusion and explainability have been kept apart, the advent of deep learning methods bridged the void between these two concepts.
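The split-map-reduce flow of information fusion described above can be sketched in plain Python. This is a minimal toy illustration, not any specific big-data framework: the three heterogeneous "sources" and the averaging rule are assumptions made for the example.

```python
from functools import reduce

# Hypothetical readings from three heterogeneous sources (assumption:
# each source reports a subset of features about the same entity).
sources = [
    {"temperature": 21.0, "humidity": 40.0},
    {"temperature": 23.0},
    {"humidity": 44.0, "pressure": 1013.0},
]

# Map step: split every source into flat (feature, value) pairs.
mapped = [pair for source in sources for pair in source.items()]

# Reduce step: accumulate (sum, count) per feature across all sources.
def merge(acc, pair):
    key, value = pair
    total, count = acc.get(key, (0.0, 0))
    acc[key] = (total + value, count + 1)
    return acc

totals = reduce(merge, mapped, {})

# Fused view: one averaged value per feature, merging all sources.
fused = {key: total / count for key, (total, count) in totals.items()}
print(fused)  # {'temperature': 22.0, 'humidity': 42.0, 'pressure': 1013.0}
```

Each stage here is individually inspectable, which is the property that makes this style of pipeline interesting from an explainability standpoint.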
The way features are learned in the initial layers of a deep neural network is analogous to many data fusion techniques, and since explainability deals with decoding high-dimensional data, this becomes an interesting pursuit.

Bringing data fusion to the fore as a solution also sparks discussion about privacy. To this end, federated learning has shown promising results: models are trained locally on each node and only the resulting updates are shared among the nodes, ensuring leak-proof modelling.

What Can Be Done?

An immediate solution is the one discussed above: explainability through visualisation. TensorFlow has the What-If Tool, and there are Google's activation atlases as well. These are the easy routes for outsiders and practitioners alike. Along with these, there are a bunch of other metrics that enable explainability.

These are solutions for those who already have an ML pipeline in place. For firms that are starting fresh, however, the intent has to be there from the initial stages. According to the study, companies should balance the cultural and organisational changes needed to enforce such responsibilities over AI-driven processes with the feasibility and compliance of implementing these principles given the IT assets, policies and resources already available at the company.

They believe it is in the gradual process of rising corporate awareness around the principles and values of Responsible AI that XAI will find its place and create a huge impact.
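The federated learning idea mentioned above can be sketched in a few lines. This is a toy illustration of federated averaging under stated assumptions, not a production protocol: the per-node datasets and the one-parameter model are invented for the example, and the key point is that only fitted weights, never raw data, leave a node.

```python
# Hypothetical per-node datasets of (x, y) pairs; the raw data stays local.
local_datasets = [
    [(1.0, 2.1), (2.0, 3.9)],   # node A, roughly y = 2x
    [(1.0, 1.9), (3.0, 6.1)],   # node B, roughly y = 2x
]

def local_fit(data):
    # Least-squares slope through the origin: w = sum(x*y) / sum(x*x).
    return sum(x * y for x, y in data) / sum(x * x for x, _ in data)

# Each node shares only its fitted weight, not its observations.
local_weights = [local_fit(data) for data in local_datasets]

# The coordinator fuses the local models by simple averaging.
global_weight = sum(local_weights) / len(local_weights)
print(round(global_weight, 2))  # 2.0
```

The fused model recovers the shared trend without any node exposing its data, which is the leak-proof property the article points to.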