While machine/deep learning approaches have undeniable advantages, the more complex the model, the more explanations become mandatory to build trust in the outcomes and to interpret the results, especially in medicine, healthcare, and neuroscience. This complexity raises questions of trust, bias, and interpretability, since machine/deep learning methods often behave as a "black box". XAI was born to make model behaviour comprehensible to humans, aiming to explain how the model reached a specific outcome, how the features contributed to it, and to what extent the model is confident in its decision (uncertainty).
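To make the notion of feature contribution concrete, consider a minimal, purely illustrative sketch using permutation importance, one common model-agnostic XAI technique; the dataset, model, and library choices here are assumptions for illustration and are not taken from this work:

```python
# Hypothetical sketch: quantifying feature contributions with
# permutation importance. Dataset and model are illustrative
# assumptions, not the method used in this paper.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance

data = load_breast_cancer()
X, y = data.data, data.target
model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

# Shuffle each feature in turn and measure the drop in accuracy:
# larger drops indicate features the model relies on more heavily.
result = permutation_importance(model, X, y, n_repeats=10, random_state=0)
ranked = sorted(zip(data.feature_names, result.importances_mean),
                key=lambda t: -t[1])
for name, score in ranked[:5]:
    print(f"{name}: {score:.3f}")
```

Such per-feature scores answer the "how did the features contribute" question at a global level; complementary methods (e.g., local attribution or uncertainty estimation) address the per-sample and confidence aspects mentioned above.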