There was a sense of inevitability as Hillary Clinton left her home in Chappaqua, New York on Tuesday, November 8 to travel to midtown Manhattan to deliver what almost every pollster, commentator, and expert believed would be her victory speech. The announcement of Donald Trump’s victory the next morning caught them all off-guard, leaving professional election analysts wondering how the polling data could have been so wrong.

I myself cautioned against this over-confident projection, and if you looked closely, some polls did too. Moreover, one notable model succeeded in predicting the election result where others failed: EagleAi, an artificial intelligence tool developed by Havas Cognitive for Britain’s ITV News and programmed to analyze and interpret data on an unprecedented scale.

Havas fed billions of data points to the artificial intelligence, dating back to the moment Clinton and Trump accepted their parties’ nominations at their respective conventions. Like their colleagues at other outlets, Havas’ experts did not believe EagleAi’s continued predictions of a Trump victory. In fact, they went back to double-check not only the data points they had entered, but also the algorithms they had designed to study the disparate information.

Their programming turned out to be sound. EagleAi learned to understand sentiment, tone, emotion, and intention in order to predict voter behavior. As it developed, it grew sophisticated enough to spot patterns and connections just like the human brain, except with a remarkable ability to process at speeds and volumes far beyond what even the most gifted mind can handle. According to Havas, the AI’s capacity is the equivalent of 2,000 human brains (in this case, 2,000 data researchers).
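Havas has not published EagleAi’s internals, but a rough sense of the kind of signal such a system extracts can be had from a minimal, lexicon-based sentiment scorer like the Python sketch below. The word lists and the scoring rule are illustrative assumptions, not EagleAi’s actual vocabulary or method.

```python
# A minimal lexicon-based sentiment scorer. Systems like EagleAi use far
# richer models of tone and intention; the word lists here are invented
# purely for illustration.
POSITIVE = {"great", "love", "win", "strong", "proud", "safe"}
NEGATIVE = {"crooked", "disaster", "weak", "failing", "corrupt"}

def sentiment_score(text: str) -> float:
    """Score in [-1, 1]: (positive hits - negative hits) / total words."""
    words = [w.strip(".,!?") for w in text.lower().split()]
    if not words:
        return 0.0
    pos = sum(w in POSITIVE for w in words)
    neg = sum(w in NEGATIVE for w in words)
    return (pos - neg) / len(words)

print(sentiment_score("We will make America great again!"))    # ~0.17
print(sentiment_score("She called it a weak, failing plan."))  # ~-0.29
```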

EagleAi analyzed all three presidential debates, 15 million articles, countless speeches, hundreds of political feeds of tweets and other social media posts, and millions of additional sources. Based on these data, it predicted that a Trump victory was not only possible but the most likely outcome.

Drawing on these varied sources, the AI built personality indexes for the candidates to analyze their character and the strength of their messages. It concluded that Trump’s personality index was more “agreeable” than Clinton’s. According to the index, a candidate is “agreeable” if he or she is perceived as considerate and warm.
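Havas has not described how the personality index is calculated. A toy version, sketched below under that caveat, might score how often a candidate’s media mentions contain “considerate and warm” language; the cue words and sample mentions are invented for illustration.

```python
# A toy "agreeableness" index: the share of a candidate's media mentions
# that contain considerate or warm language. The cue words are assumed,
# not EagleAi's actual lexicon.
AGREEABLE_CUES = {"love", "loves", "care", "cares", "together", "help",
                  "thank", "great", "warm"}

def agreeableness_index(mentions: list[str]) -> float:
    """Fraction of mentions containing at least one agreeable cue word."""
    if not mentions:
        return 0.0
    hits = sum(
        any(w.strip(".,!?'\"") in AGREEABLE_CUES for w in m.lower().split())
        for m in mentions
    )
    return hits / len(mentions)

# Hypothetical mention samples, not real campaign data.
trump_mentions = ["He loves this country", "Make America great again"]
clinton_mentions = ["Stronger together", "She was well prepared for the debate"]
print(agreeableness_index(trump_mentions))    # 1.0
print(agreeableness_index(clinton_mentions))  # 0.5
```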

From this definition, we can draw several conclusions about the candidates and the electorate. First, Clinton may have been highly prepared, but she did not project enough warmth and likeability to sway voters. Second, it suggests that voters connected with Trump as someone who understands them and wants to make their lives better. Even though some perceived Trump’s campaign language as negative, EagleAi concluded that messages like “make America great again” and assertions that Trump “loves this country” resonated with voters because of how frequently they circulated through traditional and social media (notably Trump’s Twitter feed). In other words, the more people heard or read them, the better their perception of and connection to Trump became.
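That frequency effect is straightforward to quantify in principle. The sketch below tallies occurrences of the two phrases quoted above across a stand-in corpus; a system like EagleAi would run the same kind of count over the millions of real articles and posts it ingested.

```python
from collections import Counter

# Phrases quoted in the article; the tiny corpus below stands in for the
# millions of articles and posts a system like EagleAi actually ingested.
PHRASES = ["make america great again", "loves this country"]

corpus = [
    "Trump promised to make America great again at the rally.",
    "Supporters say he loves this country.",
    "Another column on why 'Make America Great Again' keeps trending.",
]

counts = Counter()
for doc in corpus:
    low = doc.lower()
    for phrase in PHRASES:
        counts[phrase] += low.count(phrase)

# Higher counts mean more exposure, which the mere-exposure effect links
# to more favorable perception.
for phrase, n in counts.most_common():
    print(f"{phrase!r}: {n} occurrences")
```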

EagleAi was able to track human preferences in a way that actual people could not, yet the program’s most noteworthy feature is that, artificial intelligence though it is, it has no preferences of its own. This, to me, is the most salient point. It supports the likelihood that it was not the polls that got things wrong, but the pundits who let their confirmation bias get in the way of an accurate prediction.

EagleAi drew its conclusions from the ones and zeros of facts and data points. Perhaps pundits should take a leaf out of its book (or a wire out of its circuitry) to keep their projections clear and unbiased.