In previous articles, we highlighted the hidden risk of black box AI in enterprise, as well as the need to restore our ability to ask “why?” and “how?”. These discussions lead to a question that seems simple, but whose technical complexity can intimidate many people interested in AI: How do we make a machine explain its internal logic?

Most of the AI systems we use today are “black box” systems. We ask them to perform a task, and we receive an answer. The process in the middle is totally opaque. Explainable AI (XAI) is one way to shed light on these grey areas and keep meaningful control over AI and its results. In practical terms, when you ask the AI to perform a task, it still produces an answer, but you can also understand the logic and reasoning that led to that result.

To date, two main approaches stand out for making AI explainable to everyone. The first consists of creating a transparent model from the start, while the second involves using a sort of “digital translator” to explain the answer or decision made by a complex model.

The “Glass Box” Models (Explainable by Design)

Some AI models are designed from the outset to be understood by their users. In contrast to Black Box models, we call these Glass Box or White Box models. They are like cooking recipes: anyone can read the ingredients and see how they are combined to produce the final result. Here are two common examples of how Glass Box models are used:

  • Decision Trees: Think of this as a sophisticated “Choose Your Own Adventure” flow chart.
    • In Healthcare: To diagnose a potential stroke, a decision tree might follow a path: Patient age > 70? (Yes) → Sudden arm weakness? (Yes) → Impaired speech? (Yes) → Urgent intervention required. Doctors can follow every branch of this logic to validate the diagnosis.
  • Linear Regression: This acts like a weighted scale. Each piece of data has a specific weight that tips the scale toward a final outcome.
    • In Retail: An AI predicting customer churn (which customers might stop using a service) might assign a heavy weight to “Number of support tickets filed” and a lower weight to “Time since last login.” You can see exactly which factor had the most influence (a minimal sketch of both models follows this list).
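
To make this concrete, here is a minimal sketch of both Glass Box patterns using scikit-learn. The data is tiny and invented purely for illustration; the point is that the learned rules and weights can be printed and read directly.

```python
# Minimal Glass Box sketch with scikit-learn. The data is invented; the goal
# is only to show that the model's internal logic can be printed and read.
from sklearn.tree import DecisionTreeClassifier, export_text
from sklearn.linear_model import LinearRegression

# Decision tree: a readable flow chart.
# Features: [age, sudden_arm_weakness, impaired_speech]
X_stroke = [[72, 1, 1], [45, 0, 0], [80, 1, 1], [68, 0, 1], [75, 1, 0]]
y_stroke = [1, 0, 1, 0, 1]  # 1 = urgent intervention required
tree = DecisionTreeClassifier(max_depth=3).fit(X_stroke, y_stroke)
print(export_text(tree, feature_names=["age", "arm_weakness", "impaired_speech"]))

# Linear regression: a weighted scale.
# Features: [support_tickets_filed, days_since_last_login]
X_churn = [[5, 2], [0, 30], [8, 1], [1, 14], [6, 3]]
y_churn = [0.9, 0.2, 0.95, 0.3, 0.8]  # churn risk
scale = LinearRegression().fit(X_churn, y_churn)
print(dict(zip(["support_tickets", "days_since_login"], scale.coef_)))
```

The output of export_text is the exact set of if/then branches the tree learned, and the printed coefficients are the weights the scale attaches to each factor.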

As you can see, these examples are relatively straightforward, and it is easy to trust such models. There is a flip side, however: Glass Box models tend to lose accuracy on more complex tasks, particularly when media content (images, video, audio) is involved. This is why Explainable AI must also provide ways to understand the most complex models.

The “Interpreters” (Explaining the Unexplainable)

When you use powerful “Black Box” models built on Deep Learning, the internal computations are far too complex for a human to follow. It is a bit like trying to track millions of moving parts simultaneously. In this situation, we use post-hoc tools to “interview” the AI after it makes a prediction.

LIME: The “Zoom-In” Method

LIME (Local Interpretable Model-agnostic Explanations) assumes that while a model might be too complex to explain globally, it can be approximated locally for one specific decision.
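
In code, LIME is available as the open-source lime Python package. Below is a minimal sketch assuming that package plus scikit-learn, with an invented hiring dataset and a random forest standing in for the black box (all names and numbers are illustrative):

```python
# A minimal LIME sketch, assuming the `lime` and `scikit-learn` packages.
# The hiring data and the random forest are invented stand-ins for a real
# black-box model.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from lime.lime_tabular import LimeTabularExplainer

feature_names = ["years_experience", "has_phd", "years_leadership"]
X_train = np.array([[10, 1, 0], [3, 0, 2], [8, 1, 3], [1, 0, 0], [12, 0, 5], [6, 1, 1]])
y_train = np.array([0, 0, 1, 0, 1, 1])  # 1 = hire

black_box = RandomForestClassifier(random_state=0).fit(X_train, y_train)

explainer = LimeTabularExplainer(
    X_train,
    feature_names=feature_names,
    class_names=["don't hire", "hire"],
    mode="classification",
)

candidate = np.array([10, 1, 0])  # 10 years of experience, a PhD, no leadership
explanation = explainer.explain_instance(candidate, black_box.predict_proba, num_features=3)
print(explanation.as_list())  # each feature's signed pull on this one decision
```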

Imagine you have a Super-Consultant (the Black Box AI) who is almost always right, but incredibly quiet. They look at thousands of data points and simply say: “Don’t hire this person” or “This patient has a 90% risk of heart disease.” When you ask them “Why?”, they don’t answer. To figure out their logic, you hire a Private Investigator (LIME).

How the Investigator (LIME) works: The investigator doesn’t try to understand the Super-Consultant’s entire brain. Instead, they “poke” the consultant with slightly different versions of the same case to see what changes their mind:

  • Original Case: A candidate with 10 years of experience, a PhD, but no leadership experience. Consultant says: “Don’t hire.”
  • Test Case A: What if they had 10 years of experience, no PhD, and no leadership? Consultant still says: “Don’t hire.”
  • Test Case B: What if they had 10 years of experience, a PhD, and one year of leadership? Consultant suddenly says: “Hire!”

LIME observes these changes and reports back: “For this specific candidate, the ‘PhD’ didn’t matter as much as the ‘Lack of Leadership’ did.”
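
The investigator’s “poking” can be written down in a few lines. This is a hand-rolled illustration of the idea rather than the real LIME algorithm (which fits a small weighted linear model over many random perturbations), and the opaque hiring rule is invented for the example:

```python
# Hand-rolled illustration of LIME's core idea: perturb the input and watch
# the black box's answer change. The hiring rule is an invented stand-in.
def black_box(candidate):
    """An opaque 'Super-Consultant': we only see its answers, not its logic."""
    hire = candidate["years_leadership"] >= 1 and candidate["years_experience"] >= 5
    return "Hire" if hire else "Don't hire"

original = {"years_experience": 10, "has_phd": True, "years_leadership": 0}
print("Original case:", black_box(original))   # Don't hire

test_a = dict(original, has_phd=False)         # remove the PhD
print("Test case A:  ", black_box(test_a))     # still "Don't hire" -> the PhD barely mattered

test_b = dict(original, years_leadership=1)    # add one year of leadership
print("Test case B:  ", black_box(test_b))     # flips to "Hire" -> leadership drove the decision
```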

SHAP: The Forensic Accountant of AI

If LIME is a detective, SHAP (SHapley Additive exPlanations) is an accountant. Rooted in cooperative game theory, it treats every piece of data (your age, your location, your energy usage) as a player in a game that produces the final prediction.

The Accountant doesn’t just look at the final “Hire” or “Don’t Hire” decision. They break the decision down into a mathematical “receipt,” showing exactly how many points each qualification added or subtracted from the candidate’s score.

The Case: A candidate with a PhD, 10 years of experience, but no leadership experience. The “passing score” for a hire is 70, but the candidate scored a 65.

The SHAP “Receipt”:

  • The Baseline (The Starting Point): Every candidate starts with an average score of 50.
  • The “Experience” Bonus: Having 10 years of experience added +15 points.
  • The “Education” Bonus: Having a PhD added +10 points.
  • The “Leadership” Penalty: Having zero leadership experience subtracted 10 points.

The Final Score: 50 + 15 + 10 - 10 = 65 (Result: Don’t Hire).
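
The receipt is just addition, and that additivity is what makes SHAP auditable: the per-feature contributions always sum from the baseline to the model’s actual output. Using the illustrative numbers above:

```python
# The SHAP "receipt" as plain arithmetic, using the illustrative numbers above.
baseline = 50                             # the average starting score
contributions = {
    "experience (10 years)": +15,
    "education (PhD)":       +10,
    "leadership (none)":     -10,
}
final_score = baseline + sum(contributions.values())
print(final_score)                                    # 65
print("Hire" if final_score >= 70 else "Don't hire")  # Don't hire: 5 points short
```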

The Accountant (SHAP) reveals that the candidate wasn’t rejected because of their PhD or their years of work; they were rejected specifically because the “Leadership” penalty dragged their total score below the passing line.
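
Computing a real receipt looks much the same in code. Here is a minimal sketch using the open-source shap package, again with an invented hiring model (names, scores, and data are illustrative):

```python
# A minimal SHAP sketch, assuming the `shap` and `scikit-learn` packages.
# The hiring scores and the model are invented stand-ins.
import numpy as np
import shap
from sklearn.ensemble import RandomForestRegressor

feature_names = ["years_experience", "has_phd", "years_leadership"]
X = np.array([[10, 1, 0], [3, 0, 2], [8, 1, 3], [1, 0, 0], [12, 0, 5]])
y = np.array([65, 40, 80, 30, 85])  # hiring scores

model = RandomForestRegressor(random_state=0).fit(X, y)

explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X[:1])        # explain the first candidate
print("baseline:", explainer.expected_value)      # the average starting point
print(dict(zip(feature_names, shap_values[0])))   # signed per-feature contributions
# baseline + sum of contributions equals the model's prediction for this candidate
```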

Because SHAP provides a mathematically complete breakdown, it is becoming a preferred tool for corporate accountability. It allows companies to show regulators exactly how their AI weighs each factor, making it far easier to spot discrimination or “wild guesses” instead of taking the model on faith.

From Knowing “How” to Ensuring “Fairness”

Understanding how XAI works is not an end in itself. Whether through the creation of Glass Box models or the use of Digital Interpreters like LIME and SHAP, the goal is to move toward reclaiming human agency in an increasingly automated world.

By shedding light on the Black Box, we move from blind faith in the models we use to making informed and confident decisions. We are no longer passive consumers of AI; we become its active auditors. For a business leader, this means greater confidence that a strategy is based on sound data. For a doctor, it means a diagnosis whose reasoning can be checked and trusted. For a candidate, it means knowing their application wasn’t rejected because of a glitch or a hidden bias.

Did this guide help you see through the “Black Box”? Share this article with a colleague who works with AI, and let’s start a conversation about why transparency is the new standard for ethical business.