With advances in technology, Artificial Intelligence (AI) plays an increasingly important role in both academia and business. Machine Learning (ML), in particular, has been widely used to maximise profits, reduce costs, improve decision-making, and identify anomalies, and both industry and the scientific literature have focused on applying AI techniques to address these challenges more efficiently and accurately. Developments in Information Systems (IS) have enabled increasingly powerful approaches and technologies for data processing and storage (e.g., Big Data, Machine Learning, Predictive Analytics), positively impacting sectors of the economy such as healthcare, finance, digital marketing, and industry. With the growing integration of AI into these systems, a new concern arises: how can we ensure that automated decisions are understandable, justifiable, and ethical? This is where "explainable artificial intelligence" (XAI) comes in.
- What is eXplainable Artificial Intelligence (XAI)?
The concept of "explainability" originated in intelligent systems in the 1980s with Expert Systems, which sought to justify their decisions through "rule-based explanations." At the time, the main concern was making symbolic systems understandable to human experts, primarily in medical and industrial contexts. From 2010 onwards, "explainability" began to spread and consolidate in decision support systems, as it became necessary to clarify the "why" behind automated processes, which are often regarded as black boxes. This lack of transparency gave rise to the field of eXplainable Artificial Intelligence (XAI), which seeks to make AI systems more understandable and interpretable. XAI aims to explain how and why a model arrives at a particular decision, promoting trust, ethics, and responsibility in the use of technology.
- XAI Techniques
There are several XAI techniques, which can be divided into two types:
• Intrinsic models: naturally explainable models, such as decision trees and linear regressions.
• Post-hoc methods: applied after training complex models (such as neural networks) to generate explanations of their behaviour.
Among the most widely used methods, the following stand out:
• Local Interpretable Model-agnostic Explanations (LIME) – creates simplified local models to explain individual predictions.
• SHapley Additive exPlanations (SHAP) – distributes the impact of each variable on a prediction, based on game theory.
• Feature Importance and Partial Dependence Plots (PDP) – show the influence and behaviour of variables on the model's outcome.
• What-if scenarios – what-if analysis explores how small changes in the input data affect a model's predictions, revealing the model's internal behaviour and the sensitivity of individual variables in the final decision.
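To make the LIME idea above concrete, here is a minimal sketch of its core mechanism, using only NumPy and a toy black-box function (the model, the kernel width, and the sampling scale are illustrative assumptions, not the `lime` library's actual implementation): perturb the instance, weight samples by proximity, and fit a weighted linear surrogate.

```python
import numpy as np

# Toy "black-box" model: a nonlinear function of two features (illustrative only).
def black_box(X):
    return X[:, 0] ** 2 + np.sin(X[:, 1])

rng = np.random.default_rng(0)

# Instance whose prediction we want to explain.
x0 = np.array([1.0, 0.5])

# 1. Perturb the instance with Gaussian noise around x0.
Z = x0 + rng.normal(scale=0.3, size=(500, 2))
y = black_box(Z)

# 2. Weight each sample by its proximity to x0 (exponential kernel).
d2 = ((Z - x0) ** 2).sum(axis=1)
w = np.exp(-d2 / 0.25)

# 3. Fit a weighted linear surrogate via weighted least squares.
A = np.hstack([np.ones((len(Z), 1)), Z])  # intercept column + features
W = np.diag(w)
coef = np.linalg.solve(A.T @ W @ A, A.T @ W @ y)

intercept, w1, w2 = coef
print(f"local surrogate: f(x) ~ {intercept:.2f} + {w1:.2f}*x1 + {w2:.2f}*x2")
```

The fitted slopes approximate the model's local gradient at x0 (here roughly 2 for x1 and cos(0.5) for x2), which is exactly the kind of "local explanation" LIME produces for an individual prediction.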
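SHAP's game-theoretic attribution can likewise be sketched from first principles. The snippet below computes exact Shapley values for a tiny linear model by enumerating feature coalitions (the feature names and weights are hypothetical; real SHAP implementations use far more efficient approximations):

```python
from itertools import combinations
from math import factorial

# Hypothetical linear scoring model with three features (illustrative only).
WEIGHTS = {"income": 0.5, "debt": -0.3, "age": 0.1}

def model(features):
    return sum(WEIGHTS[k] * v for k, v in features.items())

def shapley_values(x, baseline):
    """Exact Shapley values: weighted average marginal contribution of each
    feature over all coalitions, with absent features set to the baseline."""
    names = list(x)
    n = len(names)
    phi = {}
    for i in names:
        others = [j for j in names if j != i]
        total = 0.0
        for size in range(n):
            for S in combinations(others, size):
                weight = factorial(size) * factorial(n - size - 1) / factorial(n)
                with_i = {k: (x[k] if k in S or k == i else baseline[k]) for k in names}
                without_i = {k: (x[k] if k in S else baseline[k]) for k in names}
                total += weight * (model(with_i) - model(without_i))
        phi[i] = total
    return phi

x = {"income": 4.0, "debt": 2.0, "age": 30.0}
base = {"income": 3.0, "debt": 1.0, "age": 40.0}
phi = shapley_values(x, base)

# Efficiency property: the attributions sum to f(x) - f(baseline).
assert abs(sum(phi.values()) - (model(x) - model(base))) < 1e-9
print(phi)
```

For a linear model each Shapley value reduces to weight × (value − baseline), which makes this toy case easy to verify by hand; the same additivity is what lets SHAP "distribute the impact of each variable" over a prediction.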
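Finally, what-if analysis needs no special library at all: vary one input while holding the others fixed and record how the prediction moves. A minimal sketch, using a hypothetical hand-made pricing function:

```python
# Hypothetical pricing model, for illustration only.
def predict_price(area, rooms, distance_to_centre):
    return 1000 * area + 5000 * rooms - 2000 * distance_to_centre

baseline = {"area": 80, "rooms": 3, "distance_to_centre": 5}
base_pred = predict_price(**baseline)

# What if the number of rooms changes, everything else held fixed?
for rooms in [2, 3, 4, 5]:
    scenario = {**baseline, "rooms": rooms}
    delta = predict_price(**scenario) - base_pred
    print(f"rooms={rooms}: prediction changes by {delta:+d}")
```

Sweeping one variable like this exposes its sensitivity in the final decision; sweeping combinations of variables extends the same idea to richer counterfactual scenarios.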
- Impact generated
The application of eXplainable Artificial Intelligence (XAI) techniques and explainable analytics in Intelligent Decision Support Systems (IDSS) produces significant impacts in four key dimensions:
1. Impact on user trust and acceptance: explainability allows decision-makers to understand how and why a recommendation was generated, increasing trust in the systems and decreasing resistance to their adoption, especially in critical domains such as health (e.g., clinical triage), finance (e.g., credit assessment), and justice (e.g., recidivism risk assessment).
2. Impact on decision quality: local and global explanations assist in the validation, correction, and contextualization of recommendations. By identifying the determining factors of a decision (e.g., influential variables or outliers), decision-makers can adjust human interventions or business rules, promoting more robust and consistent decisions.
3. Ethical and regulatory impact: XAI facilitates transparency and auditability, supporting legal requirements (e.g., right to explanation). Inspection of explanations helps detect and mitigate biases and discrimination, making it possible to take corrective actions on data or models.
4. Strategic and organisational impact: Explanations make AI insights more actionable, improving communication between analysts, managers, and stakeholders and enabling the identification of optimisation opportunities (e.g., cost reduction, process reconfiguration, prioritisation of sensors in predictive maintenance).
Beyond these direct effects, XAI is increasingly used as a means to improve the models themselves: explanations guide model debugging, feature selection and engineering, probability calibration, and data-drift detection, and they enable human-in-the-loop workflows in which human feedback, supported by explanations, is used to retrain and continuously improve the model.
eXplainable Artificial Intelligence (XAI) represents an essential step in the evolution of Intelligent Decision Support Systems (IDSS), promoting transparency, trust, and accountability in automated decisions. In a context where AI is increasingly crucial for organisational competitiveness and innovation, the ability to understand and justify its predictions has become a fundamental condition for its sustainable and ethical adoption.
Explainability is not just a technical requirement but also a strategic element that strengthens the link between data analysis and human decision-making. Through techniques such as Feature Importance, SHAP, LIME, or what-if analyses, it is possible to understand the impact of variables, test alternative scenarios, and adjust models in an informed way, contributing to fairer, more effective decisions aligned with business objectives. This principle of explainability has become standard practice in projects developed at the CCG/ZGDV Institute, where the integration of XAI mechanisms not only increases the robustness and transparency of models but also turns analytical results into actionable knowledge for organisations. This approach reinforces CCG's role in promoting responsible innovation and developing intelligent solutions.



