We fared a little better with QLens's Pinterest integration, though: taking a picture of a shirt prompted Pinterest suggestions that, in some cases at least, matched the shirt surprisingly well. Some suggestions were red and some had a similar texture to the real thing, so LG's algorithm was getting pretty close.

AI Cam was more immediately useful. There are eight scene mode presets -- portrait, animal, city/building, flower, sunrise, sunset, food and landscape -- and the camera is meant to fire up whichever is appropriate for what it's looking at. While that identification process is happening, little ethereal keywords bubble up onto the screen to illustrate how the phone is "thinking" about the object. It's completely unnecessary, but just about everyone I've shown it to has enjoyed it -- it's a neat way to illustrate the algorithm in action, and watching those keywords slowly become more relevant is actually kind of fascinating.

More importantly, when objects in front of the camera -- like some flowers or donuts on a table -- matched one of the presets, the correct shooting mode kicked in just about every time. Mismatches and false positives are certainly possible, but in the few hours we've been testing the V30S ThinQ, we didn't encounter any. If anything, it just took a while for the phone to decipher certain images. You don't even need to be using the dual camera for this trick to work; identifications made through the front-facing camera worked just as quickly. If LG could tune the algorithm's performance to the point where this feature could run by default, the company might really be onto something.