The company quickly backtracked, updating its terms to say it would use imagery only for app improvement or for purposes users have agreed to in advance. It also vowed to delete material from its servers if a user deleted it through the app. The Zao team will "fix the [privacy] issues that we didn't take into consideration," according to a post from the creators on Weibo.

The course correction might have come too late for some people, however. Users have trashed Zao's ratings on the App Store over the privacy issue, while the China E-Commerce Research Center called on the government to investigate after the software allegedly violated "certain laws and standards."

It's not clear the app did enough wrong for officials to take action, especially after the policy change. At the very least, this serves as a reminder to check privacy policies before racing to use apps that involve your likeness and other sensitive info. It also illustrates the thorny legal issues surrounding deepfakes. How much control (if any) can a company have over these images, for example? The concept is still novel enough that many people haven't considered the implications, and that could have unforeseen consequences.