Authors: Navya Tyagi & Esha Srivastava, Tyagi Lab, RMIT University, Melbourne, Australia

Introduction

Machine learning models are built on vast amounts of pre-existing data and are constantly learning from new inputs. Machine learning is meant to assist humans in making complex decisions. However, there are still many misconceptions and uncertainties about how much trust one should place in machine learning for decision-making, especially when those decisions could impact an individual's life. But what if there were a way to understand, or at least backtrack, how a machine learning model arrives at its decisions? In this blog, we discuss interpretability and explainability. We often come across these terms and may use them interchangeably; however, it is important to understand the distinction between them. Interpretability refers to converting the implicit information within a neural network into human-interpretable insights. Th...