
Can you trust AI with your money? Here's what the experts say

Experts share their views on the use of artificial intelligence in investing.

Published on February 11, 2026


Master's student in journalism at the University of Groningen, intern at IO+, enjoys writing about the integration of AI into everyday life

Artificial intelligence is no longer a futuristic concept on Wall Street: it executes millions of transactions every day. From hedge funds to retail platforms, algorithms analyze vast amounts of market data and trade in fractions of a second. Proponents claim that AI brings efficiency and objectivity, while critics warn that it strips away essential human judgment. Can you trust AI with your money? We interviewed AI expert Andreea Sburlea, experienced trader Dave Brilman, and newcomer Marco van Kempen. “The final decision remains based on human judgment and context,” says Brilman.

Training an AI model

Machine learning is at the heart of modern trading, detecting patterns in market data and news. According to Andreea Sburlea, assistant professor at the University of Groningen and an expert in brain-computer interfaces, machine learning, and neuroprostheses, it works much as it does with humans: “the more data [machine learning and deep learning models] have about a specific characteristic or a specific task, the better they become at finding it”.

To train such a model, a programmer feeds it training data: historical stock market information, for example, on which the model must then decide on the next transaction. Once it consistently makes the right decisions, it is ready to work with live data.
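The train-then-validate workflow described above can be sketched in a few lines. This is purely illustrative: the synthetic price series and the naive momentum rule are assumptions, not a real trading model, and real systems use far richer data and models.

```python
# Minimal sketch: "train" a rule on old market data, then validate it
# on held-out data before letting it near live trades. All numbers and
# the momentum rule itself are illustrative assumptions.
import random

random.seed(42)

# Synthetic "historical" daily prices (stand-in for old stock market data).
prices = [100.0]
for _ in range(499):
    prices.append(prices[-1] * (1 + random.gauss(0.0005, 0.01)))

returns = [(b - a) / a for a, b in zip(prices, prices[1:])]

# Split: earlier data to train on, later (unseen) data to validate.
split = int(len(returns) * 0.8)
train, test = returns[:split], returns[split:]

def momentum_hits(rets):
    # Naive rule: predict the next return has the same sign as the last one.
    return sum((a > 0) == (b > 0) for a, b in zip(rets, rets[1:]))

train_acc = momentum_hits(train) / (len(train) - 1)
test_acc = momentum_hits(test) / (len(test) - 1)

# Only if accuracy holds up on unseen data would the rule go live.
print(f"train accuracy: {train_acc:.2f}, test accuracy: {test_acc:.2f}")
```

The key point of the sketch is the split: a rule that scores well only on the data it was fitted to has learned the past, not the market.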

According to self-taught investor Dave Brilman, historical data and patterns are valuable, but not always decisive. “Selecting investments is also based on experience, assessing future developments and the ability to recognise new trends. What was valuable in the past may not be valuable in the future. An excessive focus on data and history therefore carries the risk of missing new developments or structural changes.”

“Whenever the world changes, the model needs to be informed about it,” says Sburlea. If this does not happen, it may provide incorrect information. New data takes time to enter, while missing information can lead to costly AI decisions. “It's a trade-off. In the long term, you have a model that can make you more money or save you more money, or you have a fast model that learns as it goes and can make more mistakes and lead to disaster if it is not updated.”

Account for bias 

AI excels at processing enormous data streams without emotional influence. However, because models such as artificial neural networks (ANNs) are trained on specific data, they can still contain biases, and those biases can in turn influence certain investments. An example of bias in an AI model comes from the University of Chicago.

Research showed that the language model studied assessed Standard American English and African American English differently. Although the AI was not trained on material containing overtly racist comments, dialect and other forms of covert racism in the training material were enough for it to assign African American speakers to less prestigious jobs and convict them more often in hypothetical criminal cases.

Deliberately applied bias can help a model exploit recurring market patterns. But hidden bias can cause a model to score well in tests and then fail in practice, resulting in unexpected losses for traders.

A failsafe can be added to the AI model, indicating how confident it is in its predictions. “So even if your model indicates that it is 80% confident, it matters how confident you yourself are in its performance. And sometimes you are dealing with unbalanced data,” says Sburlea.
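Such a failsafe can be as simple as a confidence threshold: the model's signal is only passed through when its reported confidence is high enough, and everything else is deferred to a human. The function name, signals, and the 80% threshold below are illustrative assumptions, not part of any real trading system.

```python
# Illustrative confidence failsafe: act on a model's signal only when
# its reported confidence clears a threshold; otherwise hand the
# decision back to a human. Names and numbers are assumptions.
def decide(signal: str, model_confidence: float,
           threshold: float = 0.8) -> str:
    """Return the trade action, or defer to a human below the threshold."""
    if model_confidence >= threshold:
        return signal          # e.g. "buy" or "sell"
    return "defer_to_human"    # low confidence: human judgment takes over

print(decide("buy", 0.92))   # clears the threshold, signal passes through
print(decide("sell", 0.55))  # below the threshold, falls back to review
```

As Sburlea notes, the threshold itself is a human judgment: the model's self-reported confidence is only trustworthy insofar as its past performance, and the balance of its training data, justify it.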

According to Sburlea, AI models should therefore primarily serve as a starting point or intermediate stage. “It should never be the end product in itself. It must be filtered through our own thinking rather than simply replacing our thought process.”

Human judgement remains indispensable in investing

Even investors with several years of experience, such as Marco van Kempen, agree with the experts: “I wouldn't use AI as absolute truth, but more as an aid,” says Van Kempen.

One point of criticism from Van Kempen concerns AI's answers to questions such as: “Which companies pay high dividends (profit distributions to shareholders)?” Van Kempen uses ChatGPT to support his trading decisions, but checks the list of companies it provides against his own research.

For Brilman, the biggest concern is that AI gives a false sense of security, while in reality there is no such thing as security in the financial market. “Markets remain uncertain by definition.”

According to Brilman, the following still applies: “Human judgement is particularly superior when it comes to sensing emotional market dynamics, for example when it is necessary to go against the market. Such moments are often based on sentiment, timing and experience, while AI mainly works on the basis of data and patterns.”

Experts therefore agree that AI should not be trusted blindly: “You always need a human overview,” says Sburlea.

Understanding how AI works

AI models are often black boxes: how they arrive at their output is not transparent to users. This opacity makes trust, debugging, and governance more difficult.

However, interpretable and explainable models do exist, and research into them within machine learning and neural networks is ongoing. Over the next five years, Saxion University of Applied Sciences will conduct research aimed at understanding how AI works.