In 2017 the consulting firm Deloitte asked 1,500 senior business leaders from across the U.S. if they were familiar with artificial intelligence (AI).

Only 17 percent said they were.

Granted, these were leaders from various industries, not just the financial space. But any discussion of the limits of AI in finance should begin with the limits of leaders' imaginations, because for AI, the sky would appear to be the limit. The sooner those in the C-suite realize it, the better.

Many are. AI is already widely used in national security, healthcare, criminal justice, transportation and urban planning, and in finance it has taken on a prominent role in trade processing, predictive analytics and automated data collection. No surprise, given AI's ability to sift through enormous amounts of data far more rapidly than any human could.

That is just a preview of coming attractions, though. Brookings quoted a PricewaterhouseCoopers estimate that global GDP could increase some 14 percent, or $15.7 trillion, by 2030 as a result of AI technologies. That includes a $7 trillion boost in China, where AI has become a national initiative; the government's stated goal is to sink $150 billion into the technology.

The expected growth in North America ($3.7 trillion), while significant, pales in comparison.

It should come as no surprise, then, that the AI race has been likened to the space race of the 1960s, or the race to build the first atomic bomb some 20 years before. But in the current competition, those in the financial realm are just getting out of the starting blocks.

While U.S. investment in financial AI is on the rise (it tripled between 2013 and 2014, to $12.2 billion), there are still refinements to be made. Analysts at Goldman Sachs and Morgan Stanley, for instance, say AI applications are best limited to fraud detection, error reduction and virtual-assistant roles.

Beyond that, there are bugs to be worked out. AI algorithms, for example, cannot yet field questions that require deeper understanding, leaving the experts at Goldman Sachs and Morgan Stanley to wonder whether the models are merely memorizing data rather than actually learning from it.

Further, they concluded that artificial general intelligence (AGI), the point at which machines reach human-level understanding across tasks, is far from imminent.

Charles Elkan, a managing director at Goldman Sachs and its global head of machine learning, told the Wall Street Journal that while super-intelligence is possible, AI's current algorithms "are not going to scale to human intelligence, let alone super-intelligence."

It’s important to note, however, that the technology continues to evolve. Such things as chatbots (for customer service) and biometrics (for security) are increasingly prevalent, as are robo-advisors. Experts also foresee a day when AI is capable of understanding social media and news trends.

The New Yorker reported earlier this year that there are those who believe machines will achieve AGI by 2047, putting humans in a position where they will at the very least have to reinvent themselves. There is ample evidence, in fact, that this process has already begun. In 2000, Goldman's U.S. cash equities trading desk employed some 600 human traders. Today there are two, supported by roughly 200 computer engineers who manage the automated trading platforms.

In other words, AI has limits, but they are not fixed. The technology keeps evolving, and those in the C-suite must continue to do the same.