Engineer.ai, an Indian startup claiming to have built an artificial intelligence-assisted app development platform, is not in fact using AI to build apps, according to a report from The Wall Street Journal. Instead, the company, which has attracted nearly $30 million in funding from a SoftBank-owned firm and others, reportedly relies mostly on human engineers, while using hype around AI to attract the customers and investment it needs to sustain itself until it can actually get its automation platform off the ground.

The company describes its AI tools as “human-assisted,” and claims its service can help a customer build more than 80 percent of a mobile app from scratch in about an hour, according to statements founder Sachin Dev Duggal, who also goes by the title “Chief Wizard,” made onstage at a conference last year. However, the WSJ reports that Engineer.ai does not use AI to assemble the code, instead relying on human engineers in India and elsewhere to put the apps together.

The company was sued earlier this year by its chief business officer, Robert Holdheim, who claims the company is exaggerating its AI abilities to get the funding it needed to actually work on the technology. According to Holdheim, Duggal “was telling investors that Engineer.ai was 80% done with developing a product that, in truth, he had barely even begun to develop.”

Engineer.ai uses mostly conventional software and human engineers to make apps

When pressed on how it actually employs machine learning and other AI techniques, the company told the WSJ that it uses natural language processing to estimate the pricing and timelines of requested features, and that it relies on a “decision tree” to assign tasks to engineers. Neither of those really qualifies as the type of modern AI that powers cutting-edge machine translation or image recognition, and it does not appear that any AI software is actually assembling code. Engineer.ai did not immediately respond to a request for comment.
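To illustrate why a “decision tree” for assigning tasks is a far cry from modern machine learning: in practice it can be nothing more than a fixed chain of keyword checks. The sketch below is hypothetical (the categories, keywords, and team names are invented for illustration, not taken from Engineer.ai), but it shows the kind of rules-based routing such a claim can describe.

```python
# A hand-written "decision tree" that routes feature requests to human
# engineering teams. Every keyword and team name here is hypothetical.

def route_feature_request(description: str) -> str:
    """Assign a requested app feature to a team by walking a fixed
    series of keyword checks -- a decision tree in only the loosest
    sense, with no statistical learning involved."""
    text = description.lower()
    if "payment" in text or "checkout" in text:
        return "payments-team"
    if "login" in text or "signup" in text:
        return "auth-team"
    if "push notification" in text or "email" in text:
        return "messaging-team"
    return "general-team"  # fallback branch for everything else

print(route_feature_request("Add a checkout screen with Apple Pay"))
# -> payments-team
```

Logic like this is deterministic and hand-maintained; nothing about it learns from data, which is the distinction the WSJ's sources are drawing.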

Engineer.ai is not alone in allegedly talking up its AI capabilities. Funding for AI startups is growing fast, reaching $31 billion last year, according to PitchBook, and companies like Japanese conglomerate SoftBank have pledged to invest hundreds of billions in AI in the coming years. The number of companies using the .ai top-level domain, administered by the British territory of Anguilla, has doubled in the last few years, the WSJ reports. In other words, saying your company is building a traditional technology, like an app development platform, and tossing in AI is an easy way to get funding and attention in a saturated startup landscape increasingly squeezed by the efforts of giants like Facebook, Google, Uber, and others.

According to the UK investment firm MMC Ventures, startups with some type of AI component can attract as much as 50 percent more funding than other software companies, and the firm tells the WSJ that it suspects 40 percent or more of those companies don’t use any real AI at all. Part of the issue is that AI can seem easy to get off the ground as a test or prototype, but it is much harder to actually deploy at scale. Additionally, gathering the training data needed to build capable AI agents can be extremely costly and time-consuming; companies like Facebook and Google have gigantic research organizations paying engineers top salaries to develop better AI training techniques that may one day be used to build commercial products.

The revelations around Engineer.ai also reveal an uncomfortable truth about a lot of modern AI: it barely exists. Much like the moderation efforts of large-scale tech platforms like Facebook and YouTube, which use some AI but rely mostly on armies of contractors, both overseas and domestic, to review harmful and violent content for removal, a lot of AI technologies require people to guide them.

Numerous startups use AI to build hype without actually making use of the tech

The software must be trained to improve and corrected when it gets things wrong, and that requires humans to review and annotate data and feed it back into the system, where engineers use it to fine-tune algorithms. This was especially true of the short-lived chatbot boom of a few years ago, when big names like Facebook and startups like Magic began employing scores of contractors hidden behind AI agents, like Facebook’s discontinued M, that would take the reins (or were the ones talking the whole time) when conversations became too complicated.
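The review-annotate-retrain cycle described above can be sketched in miniature. This is a toy illustration, not any company’s actual pipeline: the “model” is a trivial keyword classifier standing in for a real learned system, and every name in it is hypothetical. The point is the shape of the loop: the model predicts, human reviewers supply the correct labels, and the corrections are seeded back in.

```python
# A minimal human-in-the-loop sketch. The "model" is a dict mapping
# keywords to labels -- a stand-in for a real trained classifier.

def predict(model: dict, text: str) -> str:
    # Return the label of the first keyword found in the text.
    for keyword, label in model.items():
        if keyword in text.lower():
            return label
    return "unknown"

def review_and_retrain(model: dict, annotations: list) -> dict:
    """Fold human-provided corrections back into the model.

    `annotations` is a list of (text, true_label) pairs produced by
    human reviewers; each misclassified example becomes a new rule,
    standing in for the fine-tuning step a real pipeline would run.
    """
    corrected = dict(model)
    for text, true_label in annotations:
        if predict(corrected, text) != true_label:
            # Seed the correction back into the system.
            keyword = text.lower().split()[0]
            corrected[keyword] = true_label
    return corrected

model = {"refund": "billing"}
annotations = [("Cancel my subscription", "billing")]  # human labels
model = review_and_retrain(model, annotations)
```

In a real deployment the retraining step would update model weights rather than add keyword rules, but the dependency is the same: without the human annotations, the system has nothing to improve on.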

But the mystification of AI, and the ability to dupe both the public and even investors into believing a technology is more sophisticated than it really is, has since extended outward to entire companies and sectors.

Just look at the recent controversies over digital assistants and the human contractors hired to review the audio exchanges those assistants collect. Every one of the Big Five has admitted it uses human employees to review these audio samples to help correct the assistants’ performance over time. That includes Apple, which has halted the practice and plans to offer an opt-out option after realizing it could undermine its pledge to user privacy.

Google has halted the practice in the EU for its Assistant, and Facebook halted its own program that used human-assisted AI to perform voice-to-text transcription for Messenger. But Google continues to employ the practice in the US and elsewhere, as does Amazon for Alexa and Microsoft for Cortana and Skype.

But the point remains: humans are required to help AI improve, even when companies are loath to admit it and aren’t always transparent with customers when another person is in fact involved in the process. In this case, a whole class of new startups appears to be using AI hype to sell technologies they may be neither capable of nor intent on actually delivering, both because building them is genuinely hard and because it’s easy to pretend otherwise. And these companies are raising more money as a result.