Artificial Intelligence beyond “ChatGPTs”
26 November, 2024

Nowadays, Artificial Intelligence (AI) is a hot topic, covered almost daily by media outlets such as newspapers, radio, television and social networks. In particular, the European Parliament has underlined the importance of the subject, as demonstrated by the publication of the first-ever regulation on the use of AI[1].

This widespread interest was driven by the launch and free availability of ChatGPT (and similar tools, such as Llama and Gemini), which undoubtedly revolutionized the area. However, without discrediting their relevance, AI is more than generative tools (i.e., tools focused on creating content, such as text or images) and chatbots (i.e., computer programs that simulate conversations). The vast majority of organizations do not have the information (data) and computing power necessary to develop these solutions, nor do they need to. That said, this article focuses on AI approaches and tools that can adapt to the reality of most companies and organizations, with a particular focus on supporting decision-making.

But what is decision-making? Decision-making is a cognitive process that consists of evaluating alternatives and culminates in the selection of an action, based on the analysis of available information, the estimation of probabilities and the (attempted) anticipation of outcomes. This process involves making choices, which can be influenced by several factors depending on the problem, and it can be assisted and improved using data-based decision support systems[2].

The decision process applies to problems as simple as “Should I carry an umbrella today?” or as complex as “How should I allocate available resources to mitigate the impact of fires?” Regardless of the complexity of the problem, the AI tools highlighted here have the potential to help with that choice.

But first, a little history.

The term “Artificial Intelligence” is not new. Its origins date back to the 1950s and a workshop at Dartmouth College, building on foundational work by Alan Turing and a research proposal led by John McCarthy, among other researchers. The term covers any computational approach capable of demonstrating forms of intelligence, such as understanding, reasoning or learning. At the end of the same decade, Arthur Samuel proposed the term “Machine Learning”, a subfield of AI that studies the development of programs (or algorithms) capable of performing tasks without being explicitly programmed.

Over the years, several machine learning algorithms have been proposed. Among the most popular are Neural Networks, Decision Trees and Random Forests. When these algorithms are applied to data sets (information collected over time), they can extract complex patterns and valuable knowledge that, in many cases, are not identifiable by humans. This process is usually known as Data Mining.

The patterns identified by these algorithms make it possible to anticipate events or predict future values. With this information, the decision-making process is facilitated and, in some cases, even automated. Examples of their application include weather forecasting systems and fraudulent transaction detection mechanisms in banks' computer systems. It should be noted that, in this last example, the system is not explicitly programmed to know what fraud is. Instead, it is programmed to learn to distinguish between fraudulent and legitimate transactions based on historical transaction data.
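
To make the fraud example concrete, below is a minimal sketch of how such a mechanism could be built with a standard machine learning library. The transaction columns, values and model choice are purely illustrative assumptions, not a description of any real banking system.

```python
# Minimal sketch: learning to flag fraud from historical transactions.
# Column names and values are hypothetical, purely for illustration.
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

# Historical transactions, each labelled as fraudulent (1) or legitimate (0).
transactions = pd.DataFrame({
    "amount":           [12.5, 980.0, 45.3, 2500.0, 7.9, 1320.0],
    "hour_of_day":      [14,   3,     10,   2,      18,  4],
    "foreign_merchant": [0,    1,     0,    1,      0,   1],
    "is_fraud":         [0,    1,     0,    1,      0,   1],
})

X = transactions.drop(columns="is_fraud")
y = transactions["is_fraud"]
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.33, random_state=0)

# The model is never told what "fraud" means; it learns patterns from the labelled history.
model = RandomForestClassifier(random_state=0)
model.fit(X_train, y_train)
print("Test accuracy:", model.score(X_test, y_test))

# Score a new, unseen transaction.
new_transaction = pd.DataFrame({"amount": [1800.0], "hour_of_day": [3], "foreign_merchant": [1]})
print(model.predict(new_transaction))  # 1 = flagged as likely fraudulent
```

In practice, a bank would train such a model on millions of labelled transactions rather than a handful of rows, but the principle of learning the pattern from history remains the same.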

More recently, Deep Learning and the famous Large Language Models (LLMs) – the technology behind ChatGPT and similar tools – were proposed, sophisticated and extremely powerful evolutions of neural networks. Their applications are diverse, from recognizing objects in images to summarizing and generating high-quality texts. Despite their popularity, LLMs are not the first algorithms capable of generating content such as images and text – a capability that belongs to the field of Generative AI. However, they are undoubtedly a milestone and a revolution in this area.

And now, what is the reality of companies?

Collecting and storing information digitally is becoming increasingly simple and cheap, and many organizations already maintain a history of their operations and processes. This information (data) can include the number of sales, readings from machine sensors on the factory floor, product quality tests, and customer reviews.

Keeping this record is essential to understanding what happens within organizations and what can be improved to enhance their digitalization and increase competitiveness. To this end, AI tools can play a crucial and differentiating role. However, are most companies prepared for this step?

Well, collecting data without a well-defined purpose can result in wasted resources. Since data is the basis of AI algorithms, a lack of quality can result in inaccurate or biased information and an inadequate, useless decision support system. You can read more about the importance of data quantity and quality here. Still, when it comes to data analysis, many organizations can already use AI tools, albeit with several limitations. The specialist's role often involves managing expectations and defining tangible, achievable goals and objectives.

If the data collected about an operation (e.g., product quality testing) and the factors that influence it (e.g., the production process) are consistent, it may be possible to extract valuable insights about it, anticipate occurrences in future operations, and even automate the operation.

However, obtaining the same insights about something that is not represented in this data (e.g., another product) or about a random event (e.g., the winning EuroMillions numbers) is impossible. The performance of AI algorithms strongly depends on the quality and quantity of the data used to build them.

When the algorithms in question are LLMs or other chatbot mechanisms, these problems become even more relevant. In addition to the volume and quality of data required being considerably higher, so are the computational resources. In these cases, most companies lack both the data and the financial capacity to acquire the supercomputers that LLMs require. Still, this does not mean that Artificial Intelligence-based solutions are beyond their reach.

What are the applications of AI in organizations?

There are four main types of data analysis, with different degrees of complexity and consequent levels of added value for organizations (see Figure 1). A previous project in the manufacturing area described here will be used as an explanatory example.

[Figure 1: The four main types of data analysis and their added value for organizations]
  • Descriptive analysis: Uses historical data to describe the activities or events it refers to. This type of analysis allows the identification of problems and inconsistencies in the data and in its collection process. In the project above, it allowed us to understand which materials were produced, the production rate, the conditions on the factory floor, and the number of defects or anomalies during production.
  • Diagnostic analysis: Uses historical data to identify the factors influencing a given event. It also makes it possible to detect the absence of a relationship between the existing data and the event under analysis. In the previous example, it allowed us to understand which factors influenced production and how they related to the event under analysis: the occurrence of defects.
  • Predictive analysis: Uses AI tools, particularly machine learning algorithms, to extract complex patterns and predict future occurrences, mapping a set of factors to an expected result. In the project, for example, it was used to predict future defects and anomalies in production based on certain factors on the factory floor.
  • Prescriptive analysis: This is the most complex type of data analysis, but also the one with the most significant added value for the organization. It uses the results of the previous analyses and optimization mechanisms to recommend actions to be applied in the present, allowing the organization to optimize one (or more) of its objectives in the future. Returning to the previous example, this analysis was used to obtain production “recipes” that minimized the number of defects and anomalies. These recipes contained ideal values for the factors influencing the event under analysis (e.g., machine temperature). A simplified sketch of the predictive and prescriptive steps follows this list.
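
The sketch below illustrates, under strong simplifying assumptions, how the predictive and prescriptive steps could look in code: a model is trained on hypothetical factory-floor records to predict defects, and a plain grid search (standing in for the Modern Optimization methods mentioned below) looks for the “recipe” with the fewest predicted defects. All factor names and values are invented for illustration.

```python
# Minimal sketch of predictive and prescriptive analysis on hypothetical factory data.
# Factor names, values and the simple grid search are illustrative assumptions only.
import numpy as np
import pandas as pd
from sklearn.ensemble import RandomForestRegressor

# Historical production records: factory-floor factors and the defects observed.
history = pd.DataFrame({
    "machine_temperature": [62, 75, 70, 68, 80, 65, 72, 78],
    "line_speed":          [40, 55, 50, 45, 60, 42, 52, 58],
    "defects":             [ 2,  9,  6,  4, 14,  3,  7, 12],
})

# Predictive analysis: learn to map the factors to the expected number of defects.
X, y = history[["machine_temperature", "line_speed"]], history["defects"]
model = RandomForestRegressor(random_state=0).fit(X, y)

# Prescriptive analysis (a plain grid search stands in for a Modern Optimization method):
# evaluate candidate "recipes" and keep the one with the fewest predicted defects.
temperatures = np.arange(60, 81, 1)
speeds = np.arange(40, 61, 1)
candidates = pd.DataFrame(
    [(t, s) for t in temperatures for s in speeds],
    columns=["machine_temperature", "line_speed"],
)
candidates["predicted_defects"] = model.predict(candidates)
best = candidates.sort_values("predicted_defects").iloc[0]
print("Recommended recipe:", best.to_dict())
```

In a real project, the historical records would come from the organization's own systems, and the search would typically be handled by a proper optimization algorithm rather than an exhaustive grid.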

Using the recent problem of fires as a complementary example, the first two types of analysis would make it possible to identify their occurrence in the past (or even in real time) and the potential factors that gave rise to them, enabling a reactive response to such events.

On the other hand, the last two types of data analysis enable the early detection of such occurrences and can even recommend actions that minimize their occurrence or mitigate their impact, promoting proactive attitudes. AI tools, particularly predictive machine learning algorithms and Modern Optimization algorithms, are crucial in supporting these proactive attitudes. Thanks to their predictive capacity, the former would enable the early detection of fires, allowing, for example, the resources needed for a given day to be estimated. The latter, through their ability to optimize metrics, could help reduce the area burned by a fire by recommending actions to be carried out before it happens (e.g., increasing the width of forest-clearing strips).

Obviously, fires are complex, meaning that their anticipation and/or mitigation can be much more difficult than described here. Note, however, that the purpose of the example is to present the advantages of AI in supporting decision-making.

It is not up to me, nor is this the purpose of this article, to decide whether or not companies should invest in “this” or “that” type of AI approach to integrate into their solutions. Instead, I intend to draw attention to the existence of alternative and, in some cases, more appropriate solutions.

 

[1] https://www.europarl.europa.eu/topics/en/article/20230601STO93804/eu-ai-act-first-regulation-on-artificial-intelligence

[2] https://www.sciencedirect.com/topics/social-sciences/decision-making

 

By: Pedro Pereira

Senior Researcher in Machine Learning at the EPMQ Department of the CCG/ZGDV Institute