Thu Aug 24 2023

Debunking common misconceptions about AI in business

By Sara Yuste Fernandez Alonso

AI has been rapidly growing in popularity over the past decade. Recently, with the release of ChatGPT, the already popular buzzword entered the daily life of anyone with access to the internet.

However, with great popularity comes great misunderstanding, and “all publicity is good publicity” might not apply to a technology that can (and probably will) completely change the way we work. Many conversations are being held about the use, misuse, and future of AI, and many concerns are raised over its risks and downsides.

Different sectors typically hold different misconceptions about AI. We have summarized the most common ones below and asked our team of analytics translators to debunk each of them:

AI means losing control

AI can help identify and exploit patterns in data, uncover outliers, and process large amounts of data efficiently. AI should not replace data environments or the processes surrounding them. In all cases, AI should be used as a tool that supports decision-making, and using it does not have to imply any loss of control.

AI is unreliable because it is not always accurate

There is simply no model that will always be correct. Think of it like a human learning a task or skill: you might become quite good at a given task (for example, driving), yet you will sometimes encounter unexpected and new situations where you need to improvise. Your improvised course of action might be correct, but sometimes you will be wrong.

A human cannot know every possible scenario beforehand, and therefore accepts that there will be a certain margin of human error. In a similar way, AI models can learn and adapt quite well to many real-life problems, but they will occasionally give wrong predictions. This doesn’t make a model any less reliable than a human, and it’s one of the reasons why you ideally want to keep at least one domain expert in the loop to interpret the output of the AI model, or to make the actual decisions.
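To make the “expert in the loop” idea concrete, here is a minimal sketch of one common pattern: act automatically on high-confidence predictions and escalate the rest to a human. The model, synthetic data, and the 0.8 threshold are all illustrative assumptions, not a prescription:

```python
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

# Synthetic data standing in for a real decision-making problem.
X, y = make_classification(n_samples=500, random_state=0)
model = LogisticRegression(max_iter=1000).fit(X, y)

THRESHOLD = 0.8  # assumed confidence cut-off; tune for your use case
probabilities = model.predict_proba(X)

for i, p in enumerate(probabilities[:10]):
    confidence = p.max()
    if confidence >= THRESHOLD:
        print(f"sample {i}: decide class {p.argmax()} automatically ({confidence:.2f})")
    else:
        print(f"sample {i}: low confidence ({confidence:.2f}), escalate to a domain expert")
```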

To stay on the safe side, we need to limit our model's ability to generalize

Another common misconception is that a model should fit its training data as closely as possible. If the model adapts itself too much to the training data, to the point where it loses its ability to generalize, we call this “overfitting”.

Going back to the human analogy, imagine that you are learning how to drive. You want to be exposed to as many different scenarios as possible so that you are able to react when you encounter a new situation. Learning to drive on only one street, to the point where you know it by heart, won’t make you a good driver, as you probably wouldn’t know what to do once you drive out of that one street.

Similarly, when training an AI model, you want it to be able to generalize (or “improvise”) when it is exposed to data it has never seen before. Suspiciously high accuracy during training is probably an indicator of overfitting; with the exception of extremely straightforward and simple tasks, 100% accuracy in an experimental environment is not a desirable result.
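A quick way to spot overfitting in practice is to compare accuracy on the training data with accuracy on data held out from training. The sketch below uses scikit-learn with a synthetic dataset and an unconstrained decision tree purely as an illustration; the exact numbers will vary:

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

# Synthetic classification data, standing in for a real business dataset.
X, y = make_classification(n_samples=1000, n_features=20, random_state=42)
X_train, X_val, y_train, y_val = train_test_split(X, y, test_size=0.3, random_state=42)

# An unconstrained decision tree is free to memorize the training data.
model = DecisionTreeClassifier(random_state=42)
model.fit(X_train, y_train)

train_acc = model.score(X_train, y_train)  # typically ~1.00: suspiciously high
val_acc = model.score(X_val, y_val)        # noticeably lower on unseen data

print(f"train accuracy: {train_acc:.2f}, validation accuracy: {val_acc:.2f}")
# A large gap between the two numbers is the classic symptom of overfitting.
```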

AI will lead to unethical and biased decisions

This statement is not entirely false. AI models are very good at uncovering underlying patterns in data; therefore, if there is a bias in the data, the models will probably learn it too. This issue has triggered a whole research field within AI, known as ethical AI. There are many ways to tackle this issue, which we will not dive into in this blog.
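As a hedged illustration of what a basic bias check can look like, the sketch below compares a model’s positive prediction rate across two groups (a simple “demographic parity” check). The data, group names, and predictions are entirely hypothetical:

```python
import pandas as pd

# Hypothetical model outputs for applicants from two groups.
df = pd.DataFrame({
    "group":      ["A", "A", "A", "A", "B", "B", "B", "B"],
    "prediction": [1,   1,   1,   0,   1,   0,   0,   0],
})

# Demographic parity check: how often does the model say "yes" per group?
approval_rates = df.groupby("group")["prediction"].mean()
print(approval_rates)
# Group A: 0.75, group B: 0.25. A gap this large is a signal to inspect
# the training data and the model before trusting its decisions.
```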

In essence, the best way to go about this is to work with the right AI professionals, who understand the technology well enough to identify and understand these biases. Working closely with subject matter experts (SMEs) can provide context and help steer the model in the right direction.

Achieve your organization's full potential with Xomnia, the leading AI consultancy in the Netherlands

AI operates in a vacuum and is inaccessible once implemented

AI is not an isolated tool. It is designed, trained, and deployed by human experts, and it is used in tandem with the expertise of professionals in the field. Even after a model has been deployed and integrated into an organization’s processes, it can be re-trained, modified, and adapted.

Moreover, the results offered by an AI model are meant to serve as input for decision-making and to help humans perform their work more effectively and efficiently. The models will remain accessible and adaptable as long as competent AI professionals maintain the implemented solutions.
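As a small sketch of what “re-train after deployment” can look like, the example below serializes a scikit-learn model, reloads it later, and updates it incrementally with newly collected data. The synthetic data, file name, and choice of SGDClassifier are illustrative assumptions; the point is only that a deployed model need not be frozen:

```python
import joblib
import numpy as np
from sklearn.linear_model import SGDClassifier

rng = np.random.default_rng(0)

# Initial training and "deployment" (serialization) of the model.
X_old, y_old = rng.normal(size=(200, 5)), rng.integers(0, 2, size=200)
model = SGDClassifier(random_state=0)
model.partial_fit(X_old, y_old, classes=np.array([0, 1]))
joblib.dump(model, "deployed_model.joblib")

# Later: reload the deployed model and adapt it to newly collected data,
# instead of treating it as an inaccessible black box.
model = joblib.load("deployed_model.joblib")
X_new, y_new = rng.normal(size=(50, 5)), rng.integers(0, 2, size=50)
model.partial_fit(X_new, y_new)
joblib.dump(model, "deployed_model.joblib")  # redeploy the updated model
```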

AI will replace human expertise

While AI is able to carry out routine and repetitive tasks, eliminating the need to do these manually, it cannot entirely replace human expertise. Far from it, in fact, as the expertise of subject matter experts will always be needed to provide the right context and make the final decisions. AI can do the “dirty work”, making the subject matter experts’ daily work more efficient, enjoyable, and interesting by taking over some repetitive or complex tasks. In a nutshell, AI will revolutionize the role of the human (the same way that technology has done over the centuries), but it will not replace it.

AI is a magic wand that can solve all our problems on its own

AI models can perform a wide variety of very impressive tasks, so it’s not surprising that AI is sometimes perceived as almost “magic”. In reality, however, it is a mix of science and engineering, and it therefore needs work and resources before it can deliver results.

There are several types of challenges that AI can solve, but all of them require providing input at different stages of the implementation. In some cases, input is needed at the beginning, such as labelling the data samples that the algorithm will learn from (supervised learning). In other cases, like customer segmentation, the algorithm can operate without labelled samples (unsupervised learning), but expert input will be needed to validate the results offered by the model.
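To illustrate the second case, here is a minimal customer segmentation sketch using k-means clustering from scikit-learn. No labels are provided up front, but the resulting segments still need a human expert to interpret and validate them; the customer features are made up for the example:

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(42)

# Synthetic customers described by [annual spend, purchases per month],
# drawn around three hidden group centers.
centers = np.array([[500, 2], [2000, 10], [5000, 30]])
customers = np.vstack([rng.normal(c, [300, 2], size=(100, 2)) for c in centers])

# Unsupervised: the algorithm finds segments without any labelled samples...
segments = KMeans(n_clusters=3, n_init=10, random_state=42).fit_predict(customers)

# ...but a domain expert still has to inspect, name, and validate them.
for s in range(3):
    spend, freq = customers[segments == s].mean(axis=0)
    print(f"segment {s}: mean spend={spend:.0f}, mean frequency={freq:.1f}")
```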

In all cases, the effort required to create the needed input pales in comparison to the time and effort that will be saved over and over again once these models are integrated into the business operations of the company. So, if you are thinking about incorporating AI into your operations, keep in mind that the necessary investment of human time and resources will be worth it!

We need to have AI in our company even if we don’t have a use case for it

AI is a great tool for solving many challenges, but it’s not the only solution out there. Oftentimes, the challenge at hand can easily be solved with other methods, such as simple rule-based logic, which are faster, less costly, and guarantee an exact answer in all cases. Simply put, AI is a means to an end, not an end in itself.

It is crucial to have a clear use case defined before looking for a solution, rather than trying to force a specific technology onto an existing use case. Demanding the use of AI just out of AI FOMO can lead to a big waste of time and money, inefficient solutions, or endless pet projects that will never see the light of day.

To properly explore the possibilities that AI can bring to your company, the best course of action is to contact skilled AI professionals who can help you assess your current situation and identify possible use cases and opportunities for AI or any other applicable data solutions. Taking the time to create an overview before diving in head first will save you many headaches in the long run.

Xomnia can help you all the way - from setting data strategies to executing them. Get in touch for a consultation.