In this tutorial, you’ll learn how to make an app like Pokemon Go. You’ll learn how to use augmented reality and location services to get gamers outdoors!

In this tutorial on how to make an app like Pokemon Go, you will create your own augmented reality monster-hunting game. The game has a map to show both your location and your enemies’ locations, a 3D SceneKit view to show a live preview of the back camera, and 3D models of enemies.

If you’re new to working with augmented reality, take the time to read through our introductory location-based augmented reality tutorial before you start. It’s not a strict prerequisite for this tutorial, but it contains lots of valuable information about the math behind augmented reality that won’t be covered here.

Getting Started

Download the starter project for this tutorial on how to make an app like Pokemon Go. The project contains two view controllers along with the folder art.scnassets, which contains all the 3D models and textures you’ll need.

ViewController.swift contains a UIViewController subclass you’ll use to show the AR part of the app. MapViewController will be used to show a map with your current location and some enemies around you. Basic things like constraints and outlets are already done for you, so you can concentrate on the important parts of this tutorial on how to make an app like Pokemon Go.

Adding Enemies To The Map

Before you can go out and fight enemies, you’ll need to know where they are. Create a new Swift file and name it ARItem.swift.

Add the following code after the import Foundation line in ARItem.swift:

```swift
import CoreLocation

struct ARItem {
  let itemDescription: String
  let location: CLLocation
}
```

An ARItem has a description and a location so you know the kind of enemy — and where it’s lying in wait for you.

Open MapViewController.swift and add an import for CoreLocation along with a property to store your targets:

var targets = [ARItem]()

Now add the following method:

```swift
func setupLocations() {
  let firstTarget = ARItem(itemDescription: "wolf", location: CLLocation(latitude: 0, longitude: 0))
  targets.append(firstTarget)

  let secondTarget = ARItem(itemDescription: "wolf", location: CLLocation(latitude: 0, longitude: 0))
  targets.append(secondTarget)

  let thirdTarget = ARItem(itemDescription: "dragon", location: CLLocation(latitude: 0, longitude: 0))
  targets.append(thirdTarget)
}
```

Here you create three enemies with hard-coded locations and descriptions. You’ll have to replace the (0, 0) coordinates with something closer to your physical location.

There are many ways to find some locations. For example, you could create some random locations around your current position, use the PlacesLoader from our original Augmented Reality tutorial, or even use Xcode to fake your current position. However, you don’t want your random locations to be in your neighbor’s living room. Awkward.
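If you'd rather generate test locations in code than look them up by hand, here's one possible approach. This is just a sketch, not part of the starter project, and the helper names are invented; it offsets a coordinate by a distance in meters, using the rough approximation that one degree of latitude is about 111,111 meters:

```swift
import Foundation

// Hypothetical helper, not part of the starter project: offsets a
// coordinate by the given distances in meters, using the approximation
// that 1 degree of latitude ≈ 111,111 m and 1 degree of longitude
// ≈ 111,111 m * cos(latitude).
func offsetCoordinate(latitude: Double, longitude: Double,
                      metersNorth: Double, metersEast: Double)
    -> (latitude: Double, longitude: Double) {
  let metersPerDegree = 111_111.0
  let newLatitude = latitude + metersNorth / metersPerDegree
  let newLongitude = longitude +
    metersEast / (metersPerDegree * cos(latitude * .pi / 180))
  return (newLatitude, newLongitude)
}

// Scatter a target 20 to 40 meters away in a random direction.
func randomNearbyCoordinate(latitude: Double, longitude: Double)
    -> (latitude: Double, longitude: Double) {
  let distance = Double.random(in: 20...40)
  let angle = Double.random(in: 0..<360) * .pi / 180
  return offsetCoordinate(latitude: latitude, longitude: longitude,
                          metersNorth: distance * cos(angle),
                          metersEast: distance * sin(angle))
}
```

Keep in mind the caveat above: purely random offsets may still land in someone's living room, so hand-picked street locations remain the safer choice.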

To make things simple, you can use Google Maps. Open https://www.google.com/maps/ and search for your current location. If you click on the map, a marker appears along with a small popup at the bottom.

Inside this popup you’ll see values for both latitude and longitude. I suggest creating some hard-coded locations near you or on your street, so you don’t have to call your neighbor to explain that you want to fight a dragon in his bedroom.

Choose three locations and replace the zeros from the code above with the values you found.

Pin Enemies On The Map

Now that you have locations for your enemies, it’s time to show them on a MapView. Add a new Swift file and save it as MapAnnotation.swift. Inside the file, add the following code:

```swift
import MapKit

class MapAnnotation: NSObject, MKAnnotation {
  //1
  let coordinate: CLLocationCoordinate2D
  let title: String?

  //2
  let item: ARItem

  //3
  init(location: CLLocationCoordinate2D, item: ARItem) {
    self.coordinate = location
    self.item = item
    self.title = item.itemDescription

    super.init()
  }
}
```

This creates a class MapAnnotation that implements the MKAnnotation protocol. In more detail:

1. The protocol requires a coordinate variable and an optional title.
2. Here you store the ARItem that belongs to the annotation.
3. With the init method you can populate all the properties.

Now head back to MapViewController.swift. Add the following to the bottom of setupLocations():

```swift
for item in targets {
  let annotation = MapAnnotation(location: item.location.coordinate, item: item)
  self.mapView.addAnnotation(annotation)
}
```

In this loop you iterate through all items inside the targets array and add an annotation for each target.

Now, at the end of viewDidLoad(), call setupLocations():

```swift
override func viewDidLoad() {
  super.viewDidLoad()

  mapView.userTrackingMode = MKUserTrackingMode.followWithHeading
  setupLocations()
}
```

Before you can use the location, you’ll have to ask for permission. Add the following new property to MapViewController:

```swift
let locationManager = CLLocationManager()
```

At the end of viewDidLoad(), add the code to ask for permission if needed:

```swift
if CLLocationManager.authorizationStatus() == .notDetermined {
  locationManager.requestWhenInUseAuthorization()
}
```

Note: If you forget to add this permission request, the map view will fail to locate the user. Unfortunately, there’s no error message to tell you this, so whenever you work with location services and can’t get the location, a missing permission request is a good place to start looking. Also make sure the app’s Info.plist contains the NSLocationWhenInUseUsageDescription key; without it, the permission dialog won’t appear at all.

Build and run your project; after a short time the map will zoom to your current position and show some red markers at your enemies’ locations.

Adding Augmented Reality

Right now you have a nice app, but you still need to add the augmented reality bits. In the next few sections, you’ll add a live preview of the camera and add a simple cube as a placeholder for an enemy.

First you need to track the user location. Add the following property to MapViewController:

```swift
var userLocation: CLLocation?
```

Then add the following extension at the bottom:

```swift
extension MapViewController: MKMapViewDelegate {
  func mapView(_ mapView: MKMapView, didUpdate userLocation: MKUserLocation) {
    self.userLocation = userLocation.location
  }
}
```

The map view calls this delegate method each time it updates the location of the device; you simply store the location for use in another method.

Add the following delegate method to the extension:

```swift
func mapView(_ mapView: MKMapView, didSelect view: MKAnnotationView) {
  //1
  let coordinate = view.annotation!.coordinate
  //2
  if let userCoordinate = userLocation {
    //3
    if userCoordinate.distance(from: CLLocation(latitude: coordinate.latitude, longitude: coordinate.longitude)) < 50 {
      //4
      let storyboard = UIStoryboard(name: "Main", bundle: nil)

      if let viewController = storyboard.instantiateViewController(withIdentifier: "ARViewController") as? ViewController {
        // more code later
        //5
        if let mapAnnotation = view.annotation as? MapAnnotation {
          //6
          self.present(viewController, animated: true, completion: nil)
        }
      }
    }
  }
}
```

If a user taps an enemy that’s less than 50 meters away, you show the camera preview as follows:

1. Here you get the coordinate of the selected annotation.
2. Make sure the optional userLocation is populated.
3. Make sure the tapped item is within range of the user’s location.
4. Instantiate an instance of ARViewController from the storyboard.
5. This line checks if the tapped annotation is a MapAnnotation.
6. Finally, you present viewController.
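CLLocation’s distance(from:) handles the distance math in step 3 for you. If you’re curious what sits underneath the 50-meter check, here’s a standalone sketch of a great-circle (haversine) distance calculation in plain Swift; it approximates what distance(from:) returns:

```swift
import Foundation

// Great-circle distance in meters between two coordinates, via the
// haversine formula. CLLocation.distance(from:) performs a similar,
// more precise geodesic calculation for you.
func distanceInMeters(lat1: Double, lon1: Double,
                      lat2: Double, lon2: Double) -> Double {
  let earthRadius = 6_371_000.0 // mean Earth radius in meters
  let toRadians = Double.pi / 180
  let dLat = (lat2 - lat1) * toRadians
  let dLon = (lon2 - lon1) * toRadians
  let a = sin(dLat / 2) * sin(dLat / 2)
        + cos(lat1 * toRadians) * cos(lat2 * toRadians)
        * sin(dLon / 2) * sin(dLon / 2)
  return earthRadius * 2 * atan2(sqrt(a), sqrt(1 - a))
}

// The same gate the delegate method applies: only engage targets
// within 50 meters.
func isInRange(lat1: Double, lon1: Double,
               lat2: Double, lon2: Double) -> Bool {
  return distanceInMeters(lat1: lat1, lon1: lon1, lat2: lat2, lon2: lon2) < 50
}
```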

Build and run the project and tap an annotation near your current location. You'll see a white view appear:

Adding the Camera Preview

Open ViewController.swift, and import AVFoundation after the import of SceneKit:

```swift
import UIKit
import SceneKit
import AVFoundation

class ViewController: UIViewController {
  ...
```

and add the following properties to store an AVCaptureSession and an AVCaptureVideoPreviewLayer:

```swift
var cameraSession: AVCaptureSession?
var cameraLayer: AVCaptureVideoPreviewLayer?
```

You use a capture session to connect a video input, such as the camera, and an output, such as the preview layer.

Now add the following method:

```swift
func createCaptureSession() -> (session: AVCaptureSession?, error: NSError?) {
  //1
  var error: NSError?
  var captureSession: AVCaptureSession?

  //2
  let backVideoDevice = AVCaptureDevice.defaultDevice(withDeviceType: .builtInWideAngleCamera, mediaType: AVMediaTypeVideo, position: .back)

  //3
  if backVideoDevice != nil {
    var videoInput: AVCaptureDeviceInput!
    do {
      videoInput = try AVCaptureDeviceInput(device: backVideoDevice)
    } catch let error1 as NSError {
      error = error1
      videoInput = nil
    }

    //4
    if error == nil {
      captureSession = AVCaptureSession()

      //5
      if captureSession!.canAddInput(videoInput) {
        captureSession!.addInput(videoInput)
      } else {
        error = NSError(domain: "", code: 0, userInfo: ["description": "Error adding video input."])
      }
    } else {
      error = NSError(domain: "", code: 1, userInfo: ["description": "Error creating capture device input."])
    }
  } else {
    error = NSError(domain: "", code: 2, userInfo: ["description": "Back video device not found."])
  }

  //6
  return (session: captureSession, error: error)
}
```

Here’s what the method above does:

1. Create some variables for the return value of the method.
2. Get the rear camera of the device.
3. If the camera exists, get its input.
4. Create an instance of AVCaptureSession.
5. Add the video device as an input.
6. Return a tuple that contains either the captureSession or an error.

Now that you have the input from the camera, you can load it into your view:

```swift
func loadCamera() {
  //1
  let captureSessionResult = createCaptureSession()

  //2
  guard captureSessionResult.error == nil, let session = captureSessionResult.session else {
    print("Error creating capture session.")
    return
  }

  //3
  self.cameraSession = session

  //4
  if let cameraLayer = AVCaptureVideoPreviewLayer(session: self.cameraSession) {
    cameraLayer.videoGravity = AVLayerVideoGravityResizeAspectFill
    cameraLayer.frame = self.view.bounds
    //5
    self.view.layer.insertSublayer(cameraLayer, at: 0)
    self.cameraLayer = cameraLayer
  }
}
```

Taking the above method step-by-step:

1. First, you call the method you created above to get a capture session.
2. If there was an error, or captureSession is nil, you return. Bye-bye augmented reality.
3. If everything was fine, you store the capture session in cameraSession.
4. This line tries to create a video preview layer; if successful, it sets videoGravity and sets the frame of the layer to the view’s bounds. This gives you a fullscreen preview.
5. Finally, you add the layer as a sublayer and store it in cameraLayer.

Now add the following to viewDidLoad() :

```swift
loadCamera()
self.cameraSession?.startRunning()
```

Really just two things going on here: first you call all the glorious code you just wrote, then start grabbing frames from the camera. The frames are displayed automatically on the preview layer.

Build and run your project, then tap a location near you and enjoy the new camera preview:

Adding a Cube

A preview is nice, but it’s not really augmented reality — yet. In this section, you’ll add a simple cube for an enemy and move it depending on the user’s location and heading.

This small game has two kinds of enemies: wolves and dragons. Therefore, you need to know what kind of enemy you’re facing and where to display it.

Add the following property to ViewController (this will help you store information about the enemies in a bit):

```swift
var target: ARItem!
```

Now open MapViewController.swift, find mapView(_:didSelect:) and change the last if statement to look like the following:

```swift
if let mapAnnotation = view.annotation as? MapAnnotation {
  //1
  viewController.target = mapAnnotation.item
  self.present(viewController, animated: true, completion: nil)
}
```

Before you present viewController, you store a reference to the ARItem of the tapped annotation, so viewController knows what kind of enemy you’re facing.

Now ViewController has everything it needs to know about the target.

Open ARItem.swift and import SceneKit:

```swift
import Foundation
import SceneKit

struct ARItem {
  ...
}
```

Next, add the following property to store a SCNNode for an item:

```swift
var itemNode: SCNNode?
```

Be sure to define this property after the ARItem structure’s existing properties, since you will be relying on the implicit initializer to define arguments in the same order.

Now Xcode displays an error in MapViewController.swift. To fix that, open the file and scroll to setupLocations().

Change the lines Xcode marked with a red dot on the left of the editor pane.



In each line, you’ll add the missing itemNode argument as a nil value.

As an example, change the line below:

```swift
let firstTarget = ARItem(itemDescription: "wolf", location: CLLocation(latitude: 50.5184, longitude: 8.3902))
```

...to the following:

```swift
let firstTarget = ARItem(itemDescription: "wolf", location: CLLocation(latitude: 50.5184, longitude: 8.3902), itemNode: nil)
```

You know the type of enemy to display and what its position is, but you don’t yet know the direction the device is facing.

Open ViewController.swift and import CoreLocation; your imports should now look like this:

```swift
import UIKit
import SceneKit
import AVFoundation
import CoreLocation
```

Next, add the following properties:

```swift
//1
var locationManager = CLLocationManager()
var heading: Double = 0
var userLocation = CLLocation()
//2
let scene = SCNScene()
let cameraNode = SCNNode()
let targetNode = SCNNode(geometry: SCNBox(width: 1, height: 1, length: 1, chamferRadius: 0))
```

Here’s the play-by-play:

1. You use a CLLocationManager to receive the heading the device is facing. Heading is measured in degrees from either true north or the magnetic north pole.
2. This creates an empty SCNScene and SCNNode. targetNode is an SCNNode containing a cube.

Add the following to the bottom of viewDidLoad() :

```swift
//1
self.locationManager.delegate = self
//2
self.locationManager.startUpdatingHeading()
//3
sceneView.scene = scene
cameraNode.camera = SCNCamera()
cameraNode.position = SCNVector3(x: 0, y: 0, z: 10)
scene.rootNode.addChildNode(cameraNode)
```

This is fairly straightforward code:

1. This sets ViewController as the delegate for the CLLocationManager.
2. After this call, you’ll receive heading information. By default, the delegate is informed when the heading changes by more than 1 degree.
3. This is some setup code for the SCNView. It creates an empty scene and adds a camera.

To adopt the CLLocationManagerDelegate protocol, add the following extension to ViewController:

```swift
extension ViewController: CLLocationManagerDelegate {
  func locationManager(_ manager: CLLocationManager, didUpdateHeading newHeading: CLHeading) {
    //1
    self.heading = fmod(newHeading.trueHeading, 360.0)
    repositionTarget()
  }
}
```

CLLocationManager calls this delegate method each time new heading information is available. fmod is the modulo function for Double values, and ensures that heading stays in the range of 0 to 359.
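If fmod is new to you, here is a quick sketch of its behavior with heading values:

```swift
import Foundation

// fmod returns the floating-point remainder, so a heading that wraps
// past 360 comes back into range...
let wrapped = fmod(370.0, 360.0)   // 10.0

// ...but for a negative input, fmod keeps the sign of the dividend,
// which is why the bearing calculation later in this tutorial adds
// 360 to negative results.
let negative = fmod(-90.0, 360.0)  // -90.0
let normalized = negative < 0 ? negative + 360 : negative  // 270.0
```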

Now add repositionTarget() to ViewController.swift, inside the main class implementation and not inside the CLLocationManagerDelegate extension:

```swift
func repositionTarget() {
  //1
  let heading = getHeadingForDirectionFromCoordinate(from: userLocation, to: target.location)

  //2
  let delta = heading - self.heading

  if delta < -15.0 {
    leftIndicator.isHidden = false
    rightIndicator.isHidden = true
  } else if delta > 15 {
    leftIndicator.isHidden = true
    rightIndicator.isHidden = false
  } else {
    leftIndicator.isHidden = true
    rightIndicator.isHidden = true
  }

  //3
  let distance = userLocation.distance(from: target.location)

  //4
  if let node = target.itemNode {
    //5
    if node.parent == nil {
      node.position = SCNVector3(x: Float(delta), y: 0, z: Float(-distance))
      scene.rootNode.addChildNode(node)
    } else {
      //6
      node.removeAllActions()
      node.runAction(SCNAction.move(to: SCNVector3(x: Float(delta), y: 0, z: Float(-distance)), duration: 0.2))
    }
  }
}
```

Here’s what each commented section does:

1. You will implement this method in the next step; it calculates the heading from the current location to the target.
2. Then you calculate the delta between the device’s current heading and the target’s heading. If the delta is less than -15, display the left indicator label. If it is greater than 15, display the right indicator label. If it’s between -15 and 15, hide both indicators, as the enemy should be onscreen.
3. Here you get the distance from the device’s position to the enemy.
4. If the item has a node assigned...
5. ...and the node has no parent, you set the position using the distance and add the node to the scene.
6. Otherwise, you remove all actions and create a new move action.
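The indicator logic around the //2 comment is easy to verify in isolation. Here is the same decision extracted into a pure function; the Indicator type and function name are invented for illustration:

```swift
// Pure version of the indicator decision from repositionTarget().
// Given an angular delta in degrees: deltas below -15 mean the target
// is off to the left, deltas above 15 mean it's off to the right, and
// anything in between means the target should be onscreen.
enum Indicator {
  case left, right, none
}

func indicator(forDelta delta: Double) -> Indicator {
  if delta < -15.0 {
    return .left
  } else if delta > 15.0 {
    return .right
  } else {
    return .none
  }
}
```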

If you’re familiar with SceneKit or SpriteKit, the last line should be no problem. If not, here’s a more detailed explanation.

SCNAction.move(to:duration:) creates an action that moves a node to the given position in the given duration. runAction(_:) is a method of SCNNode and executes an action. You can also create groups and/or sequences of actions. Our book 3D Apple Games by Tutorials is a good resource for learning more.

Now to implement the missing method. Add the following methods to ViewController.swift:

```swift
func radiansToDegrees(_ radians: Double) -> Double {
  return radians * (180.0 / M_PI)
}

func degreesToRadians(_ degrees: Double) -> Double {
  return degrees * (M_PI / 180.0)
}

func getHeadingForDirectionFromCoordinate(from: CLLocation, to: CLLocation) -> Double {
  //1
  let fLat = degreesToRadians(from.coordinate.latitude)
  let fLng = degreesToRadians(from.coordinate.longitude)
  let tLat = degreesToRadians(to.coordinate.latitude)
  let tLng = degreesToRadians(to.coordinate.longitude)

  //2
  let degree = radiansToDegrees(atan2(sin(tLng - fLng) * cos(tLat),
    cos(fLat) * sin(tLat) - sin(fLat) * cos(tLat) * cos(tLng - fLng)))

  //3
  if degree >= 0 {
    return degree
  } else {
    return degree + 360
  }
}
```

radiansToDegrees(_:) and degreesToRadians(_:) are simply two helper methods to convert values between radians and degrees.

Here’s what’s going on in getHeadingForDirectionFromCoordinate(from:to:) :

1. First, you convert all latitude and longitude values to radians.
2. With these values, you calculate the heading and convert it back to degrees.
3. If the value is negative, normalize it by adding 360 degrees. This is no problem, since -90 degrees is the same as 270 degrees.
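Since getHeadingForDirectionFromCoordinate(from:to:) depends on CLLocation, here is an equivalent standalone sketch of the same bearing formula operating on plain doubles. As a sanity check, a target due east of you should give 90 degrees, and one due north should give 0:

```swift
import Foundation

// Standalone version of the bearing math: the initial great-circle
// bearing in degrees (0 = north, 90 = east) from one coordinate to
// another.
func bearing(fromLat: Double, fromLon: Double,
             toLat: Double, toLon: Double) -> Double {
  let toRadians = Double.pi / 180
  let fLat = fromLat * toRadians
  let fLng = fromLon * toRadians
  let tLat = toLat * toRadians
  let tLng = toLon * toRadians

  let degrees = atan2(sin(tLng - fLng) * cos(tLat),
                      cos(fLat) * sin(tLat) - sin(fLat) * cos(tLat) * cos(tLng - fLng))
                * 180 / Double.pi

  // Normalize negative results into 0..<360, just like step 3 above.
  return degrees >= 0 ? degrees : degrees + 360
}
```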

There are two small steps left before you can see your work in action.

First, you'll need to pass the user's location along to viewController. Open MapViewController.swift, find the last if statement inside mapView(_:didSelect:), and add the following line right before you present the view controller:

```swift
viewController.userLocation = mapView.userLocation.location!
```

Now add the following method to ViewController.swift:

```swift
func setupTarget() {
  targetNode.name = "enemy"
  self.target.itemNode = targetNode
}
```

Here you simply give targetNode a name and assign it to the target. Now you can call this method at the end of viewDidLoad(), just after you add the camera node:

```swift
scene.rootNode.addChildNode(cameraNode)

setupTarget()
```

Build and run your project; watch your not-exactly-menacing cube move around:

Polishing

Using primitives like cubes and spheres is an easy way to build your app without spending too much time mucking around with 3D models — but 3D models look _soo_ much nicer. In this section, you’ll add some polish to the game by adding 3D models for enemies and the ability to throw fireballs.

Open the art.scnassets folder to see two .dae files. These files contain the models for the enemies: one for a wolf, and one for a dragon.

The next step is to change setupTarget() inside ViewController.swift to load one of these models and assign it to the target’s itemNode property.

Replace the contents of setupTarget() with the following:

```swift
func setupTarget() {
  //1
  let scene = SCNScene(named: "art.scnassets/\(target.itemDescription).dae")
  //2
  let enemy = scene?.rootNode.childNode(withName: target.itemDescription, recursively: true)
  //3
  if target.itemDescription == "dragon" {
    enemy?.position = SCNVector3(x: 0, y: -15, z: 0)
  } else {
    enemy?.position = SCNVector3(x: 0, y: 0, z: 0)
  }
  //4
  let node = SCNNode()
  node.addChildNode(enemy!)
  node.name = "enemy"
  self.target.itemNode = node
}
```

Here’s what’s going on above:

1. First you load the model into a scene. The target’s itemDescription has the same name as the .dae file.
2. Next you traverse the scene to find a node with the name of itemDescription. There’s only one node with this name, which also happens to be the root node of the model.
3. Then you adjust the position so that both models appear at the same place. If you get your models from the same designer, you might not need this step. However, I used models from two different designers: I found the wolf on 3dwarehouse.sketchup.com and the dragon on https://clara.io.
4. Finally, you add the model to an empty node and assign it to the itemNode property of the current target. This is a small trick to make the touch handling in the next section a little easier.

Build and run your project; you’ll see a 3D model of a wolf that looks far more menacing than your lowly cube!

In fact, the wolf looks scary enough you might be tempted to run away, but as a brave hero retreat is not an option! Next you’ll add some fireballs so you can fight him off before you become lunch for a wolf pack.

The touch ended event is a good time to throw a fireball, so add the following method to ViewController.swift:

```swift
override func touchesEnded(_ touches: Set<UITouch>, with event: UIEvent?) {
  //1
  let touch = touches.first!
  let location = touch.location(in: sceneView)
  //2
  let hitResult = sceneView.hitTest(location, options: nil)
  //3
  let fireBall = SCNParticleSystem(named: "Fireball.scnp", inDirectory: nil)
  //4
  let emitterNode = SCNNode()
  emitterNode.position = SCNVector3(x: 0, y: -5, z: 10)
  emitterNode.addParticleSystem(fireBall!)
  scene.rootNode.addChildNode(emitterNode)

  //5
  if hitResult.first != nil {
    //6
    target.itemNode?.runAction(SCNAction.sequence([SCNAction.wait(duration: 0.5),
      SCNAction.removeFromParentNode(), SCNAction.hide()]))
    let moveAction = SCNAction.move(to: target.itemNode!.position, duration: 0.5)
    emitterNode.runAction(moveAction)
  } else {
    //7
    emitterNode.runAction(SCNAction.move(to: SCNVector3(x: 0, y: 0, z: -30), duration: 0.5))
  }
}
```

Here’s how the fireball logic works:

1. You convert the touch to a coordinate inside the scene.
2. hitTest(_:options:) sends a ray trace to the given position and returns an array of SCNHitTestResult for every node that lies along the ray.
3. This loads the particle system for the fireball from a SceneKit particle file.
4. You then add the particle system to an empty node and place it at the bottom, outside the screen. This makes it look like the fireball is coming from the player’s position.
5. If you detect a hit...
6. ...you wait for a short period, then remove the itemNode containing the enemy. At the same time, you move the emitter node to the enemy’s position.
7. If you didn’t score a hit, the fireball simply moves to a fixed position.

Build and run your project, and make that wolf go up in flames!

Finishing Touches

To finish your game, you’ll need to remove the enemy from the list, close the augmented reality view and go back to the map to find the next enemy.

Removing the enemy from the list must be done in MapViewController, since the list of enemies lives there. To do this, you’ll add a delegate protocol with a single method that’s called when a target is hit.

Add the following protocol inside ViewController.swift, just above the class declaration:

```swift
protocol ARControllerDelegate {
  func viewController(controller: ViewController, tappedTarget: ARItem)
}
```

Also add the following property to ViewController:

```swift
var delegate: ARControllerDelegate?
```

The method in the delegate protocol tells the delegate that there was a hit; the delegate can then decide what to do next.
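If the delegate pattern is new to you, here is a stripped-down, UIKit-free sketch of the same idea; the Game and Hunter types are invented purely for illustration:

```swift
// A minimal delegate setup mirroring the ARControllerDelegate flow:
// the "controller" reports an event, and whoever registered as the
// delegate decides how to react.
protocol GameDelegate {
  func game(_ game: Game, didDefeat enemy: String)
}

class Game {
  var delegate: GameDelegate?

  func enemyDefeated(_ enemy: String) {
    // The game doesn't know what happens next; it just informs its delegate.
    delegate?.game(self, didDefeat: enemy)
  }
}

class Hunter: GameDelegate {
  var defeated: [String] = []

  func game(_ game: Game, didDefeat enemy: String) {
    defeated.append(enemy)
  }
}
```

In the app, ViewController plays the role of Game and MapViewController plays the role of Hunter.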

Still in ViewController.swift, find touchesEnded(_:with:) and change the if branch of the hit-test check as follows:

```swift
if hitResult.first != nil {
  target.itemNode?.runAction(SCNAction.sequence([SCNAction.wait(duration: 0.5),
    SCNAction.removeFromParentNode(), SCNAction.hide()]))
  //1
  let sequence = SCNAction.sequence(
    [SCNAction.move(to: target.itemNode!.position, duration: 0.5),
     //2
     SCNAction.wait(duration: 3.5),
     //3
     SCNAction.run({ _ in
       self.delegate?.viewController(controller: self, tappedTarget: self.target)
     })])
  emitterNode.runAction(sequence)
} else {
  ...
}
```

Here’s what your changes mean:

1. You change the action of the emitter node to a sequence; the move action stays the same.
2. After the emitter moves, pause for 3.5 seconds.
3. Then inform the delegate that a target was hit.

Open MapViewController.swift and add the following property to store the selected annotation:

```swift
var selectedAnnotation: MKAnnotation?
```

You’ll use this in a moment to remove the selected annotation from the MapView.

Now find mapView(_:didSelect:) and make the following changes to the conditional binding and block (i.e., the if let) which instantiates the ViewController:

```swift
if let viewController = storyboard.instantiateViewController(withIdentifier: "ARViewController") as? ViewController {
  //1
  viewController.delegate = self

  if let mapAnnotation = view.annotation as? MapAnnotation {
    viewController.target = mapAnnotation.item
    viewController.userLocation = mapView.userLocation.location!
    //2
    selectedAnnotation = view.annotation
    self.present(viewController, animated: true, completion: nil)
  }
}
```

Quite briefly:

1. This sets the delegate of ViewController to MapViewController.
2. Then you save the selected annotation.

Below the MKMapViewDelegate extension, add the following:

```swift
extension MapViewController: ARControllerDelegate {
  func viewController(controller: ViewController, tappedTarget: ARItem) {
    //1
    self.dismiss(animated: true, completion: nil)
    //2
    let index = self.targets.index(where: { $0.itemDescription == tappedTarget.itemDescription })
    self.targets.remove(at: index!)

    if selectedAnnotation != nil {
      //3
      mapView.removeAnnotation(selectedAnnotation!)
    }
  }
}
```

Taking each commented section in turn:

1. First you dismiss the augmented reality view.
2. Then you remove the target from the target list.
3. Finally you remove the annotation from the map.

Build and run to see your finished app:

Where to Go From Here?

Here is the final project, with all code from above.

If you want to learn more about the parts that make this app possible, have a look at the following tutorials:

I hope you enjoyed this tutorial on how to make an app like Pokemon Go. If you have any comments or questions, please join the forum discussion below!