Posts

Showing posts from July, 2025

When is Automated Decision Making Legitimate?

A book chapter summary by Esha Srivastava

Four Approaches to AI

Four Approaches to Defining Artificial Intelligence
Assoc Prof Sonika Tyagi, Data Science & AI | RMIT School of Computational Technology | Australia

When we ask “What is AI?”, it turns out there isn’t just one answer. In fact, there are four broad ways to define and understand AI, based on whether we want machines to think or act, and whether we want them to do so like humans or like ideal rational agents. Let’s explore each of these four perspectives.

1. Acting Humanly: The Turing Test Approach

One of the earliest and most well-known definitions of AI comes from Alan Turing. In 1950, he proposed what we now call the Turing Test, an operational test of a machine’s intelligence. Turing suggested that if a machine could carry out a conversation well enough to convince a human that it too was human, then we could say the machine was intelligent. In this view, intelligent behavior is about replicating human-like perfo...
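The “acting humanly” view is operational: all that matters is whether a judge, seeing only the conversation, can tell the machine from a person. The short sketch below illustrates that setup; `machine_reply` is a hypothetical placeholder for whatever conversational system is under test, not anything from the original post.

```python
# Minimal, illustrative sketch of the imitation-game setup: a judge poses
# questions and sees only text answers, then must guess whether the hidden
# respondent is a human or a machine. `machine_reply` is a hypothetical stand-in.

def machine_reply(question: str) -> str:
    # Placeholder for a real conversational system being tested.
    return "That is a good question. What makes you ask?"

def collect_transcript(questions, respondent):
    """Pair each judge question with the hidden respondent's answer."""
    return [(q, respondent(q)) for q in questions]

if __name__ == "__main__":
    judge_questions = [
        "Where did you grow up?",
        "Describe the smell of rain.",
        "What is 17 multiplied by 23?",
    ]
    for q, a in collect_transcript(judge_questions, machine_reply):
        print(f"Judge: {q}")
        print(f"Respondent: {a}\n")
    # In Turing's framing, the machine counts as intelligent if, over many such
    # exchanges, the judge cannot reliably tell its answers from a human's.
```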

AI in Everything: First Post (reposted)

What is AI?
Assoc Prof Sonika Tyagi, Data Science & AI | RMIT School of Computational Technology | Australia

Machine Learning: Interpretability vs Explainability

Authors: Navya Tyagi & Esha Srivastava, Tyagi Lab, RMIT University, Melbourne, Australia

Introduction

We know that machine learning models are built on vast amounts of pre-existing data and are constantly learning from it. Machine learning is meant to assist humans in making complex decisions. However, there are still many misconceptions and uncertainties about how much trust one should place in machine learning when it comes to decision-making, especially when those decisions could affect an individual’s life. But what if there were a way to at least understand, or trace back, how a machine learning model reaches its decisions? Hence, in this blog, we are going to discuss interpretability and explainability. We often come across these terms and may use them interchangeably; however, it is important to understand the distinction between them. Interpretability refers to converting the implicit information within a neural network into human-interpretable insights. Th...
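To make the distinction concrete, here is a minimal sketch (not from the original post) that contrasts a model that is interpretable by design with a post-hoc explanation of a black-box model, using scikit-learn. The dataset, models, and permutation-importance probe are illustrative assumptions only.

```python
# Minimal sketch: intrinsic interpretability vs. post-hoc explainability.
# Dataset and model choices are illustrative assumptions, not the authors' setup.
import pandas as pd
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = load_breast_cancer(return_X_y=True, as_frame=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Interpretable by design: a logistic regression's coefficients can be read
# directly as the direction and strength of each feature's influence.
linear = LogisticRegression(max_iter=5000).fit(X_train, y_train)
coefs = pd.Series(linear.coef_[0], index=X.columns)
print("Most influential features (interpretable model):")
print(coefs.abs().sort_values(ascending=False).head(3))

# Explainable after the fact: a random forest is hard to read directly, so we
# probe it with permutation importance (how much test accuracy drops when a
# feature is shuffled) to produce a human-readable account of its behaviour.
forest = RandomForestClassifier(random_state=0).fit(X_train, y_train)
result = permutation_importance(forest, X_test, y_test, n_repeats=10, random_state=0)
importances = pd.Series(result.importances_mean, index=X.columns)
print("\nMost influential features (post-hoc explanation of black-box model):")
print(importances.sort_values(ascending=False).head(3))
```

The first model is transparent on its own; the second needs an external explanation step, which is the gap the post goes on to explore.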