The iPad continues to be a source of much debate - not only in the iOS developer community, but in the tech industry at large. Will the iPad replace the Mac? Can you get real work done on iOS? Should the iPad be treated as a proper computer?

Whether or not you believe that the iPad is the future of computing, it does bring a ton of interesting new features and capabilities to the table - especially with the latest release of the Pro version. This week, let’s take a look at how we as third-party developers can take advantage of some of those capabilities to build interesting new features for our iOS apps.

Perhaps the most interesting capability of the iPad Pro - and of the 2018 version of the base model - is the addition of Apple Pencil support. While at first the pencil may seem like either a glorified stylus or a tool that’ll only ever be useful for drawing, it can become so much more, given a UIGestureRecognizer and a bit of imagination.

By default, all pencil interactions are treated as normal touches. The pencil can be used to scroll, tap, drag and to perform any other kind of single-finger touch input. But the cool thing is that we can also easily differentiate between touch and pencil input in code, giving us the power to essentially add a whole new level of interaction to our iPad apps.

Let’s take a look at an example, in which we’re building a CanvasViewController that’ll let the user use the Apple Pencil to draw lines on the screen. To enable drawing and scrolling at the same time, we want to only use pencil interactions for drawing, and only touch interactions for scrolling. To do that, let’s start by setting up a UIPanGestureRecognizer that only recognizes pencil inputs, like this:

```swift
class CanvasViewController: UIViewController {
    override func viewDidLoad() {
        super.viewDidLoad()

        let recognizer = UIPanGestureRecognizer(
            target: self,
            action: #selector(handlePencilDrag)
        )

        recognizer.allowedTouchTypes = [
            NSNumber(value: UITouch.TouchType.pencil.rawValue)
        ]

        view.addGestureRecognizer(recognizer)
    }
}
```

Since we’re using a normal UIPanGestureRecognizer to detect pencil input, we can use the same methods as when dealing with normal touches to get information about the interaction - such as the current location and velocity within our view. We’ll then pass that information along to a draw method, in which we perform the actual drawing:

```swift
private extension CanvasViewController {
    @objc func handlePencilDrag(using recognizer: UIPanGestureRecognizer) {
        let location = recognizer.location(in: view)
        let translation = recognizer.translation(in: view)
        let velocity = recognizer.velocity(in: view)

        draw(at: location, translation: translation, velocity: velocity)
    }
}
```

With the above in place we can now use the pencil to draw, but since the rest of the system treats it as touch input, it will still also trigger other events at the same time - such as scrolling. To enable our users to focus on drawing with the pencil, let’s fix that, again by using the allowedTouchTypes API - but this time on all gesture recognizers attached to our scroll view:

```swift
for recognizer in scrollView.gestureRecognizers! {
    recognizer.allowedTouchTypes = [
        NSNumber(value: UITouch.TouchType.direct.rawValue)
    ]
}
```

We can now handle touch events and pencil events separately, and while this probably isn’t something we should do for all of our UI (since some users may prefer to also navigate our app using the pencil instead of their finger), for situations when we want to differentiate between the two kinds of input the allowedTouchTypes API comes very much in handy.

A new feature of the 2018 iPad Pro version of the Apple Pencil is that it now supports a double-tap gesture on the side of the pencil itself. While this is a very simple gesture, it can let us do some interesting things, like providing quick access to common actions or to let the user cycle through various tools in our app.

Let’s extend our CanvasViewController from before to support this new pencil interaction. To do that, we’ll add a method called setupPencilInteractions that we’ll call from viewDidLoad, and that simply creates a UIPencilInteraction, sets our view controller as its delegate, and adds it to our view - like this:

```swift
extension CanvasViewController {
    func setupPencilInteractions() {
        let interaction = UIPencilInteraction()
        interaction.delegate = self
        view.addInteraction(interaction)
    }
}
```

While we’re technically free to respond to the above interaction in any way we see fit, there are some things to keep in mind in order for our app to be a good citizen. In the Settings app, users can pick how they want pencil interactions to behave by default - and while the system itself doesn’t enforce this setting in any way, it’s probably a good idea to respect it as much as we can.

Let’s make our view controller conform to UIPencilInteractionDelegate and implement its only method - in which we’ll check the user’s preferred action (a UIPencilPreferredAction value), and act accordingly:

```swift
extension CanvasViewController: UIPencilInteractionDelegate {
    func pencilInteractionDidTap(_ interaction: UIPencilInteraction) {
        switch UIPencilInteraction.preferredTapAction {
        case .ignore:
            break
        case .showColorPalette:
            showColorPalette()
        case .switchEraser:
            if tools.current == .eraser {
                selectTool(tools.previous)
            } else {
                selectTool(.eraser)
            }
        case .switchPrevious:
            selectTool(tools.previous)
        @unknown default:
            // Ignore any preferred actions added in future OS versions
            break
        }
    }
}
```
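Note that the tools helper used above isn’t part of UIKit - it’s assumed to be an app-specific type that keeps track of both the currently selected tool and the one selected before it. As a rough sketch of what such a type could look like (all names here are hypothetical):

```swift
// A hypothetical helper that remembers the currently selected tool
// and the previously selected one, which is what enables the
// "switch back" behavior in the double-tap handler.
enum Tool {
    case pen
    case brush
    case eraser
}

struct ToolHistory {
    private(set) var current: Tool
    private(set) var previous: Tool

    init(initial: Tool) {
        current = initial
        previous = initial
    }

    // Selecting the already-selected tool is a no-op, so that
    // `previous` keeps referring to a genuinely different tool.
    mutating func select(_ tool: Tool) {
        guard tool != current else { return }
        previous = current
        current = tool
    }
}
```

With a type like that in place, a selectTool method could simply call select(_:) on the history in addition to updating the UI.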

When looking at the UIPencilPreferredAction API, it’s very clear that it was designed primarily with drawing applications in mind - so if we’re using the pencil in any other kind of situation, we might need to either try to map the available enum cases to an equivalent action in our app, or to simply include our own custom setting for what the double-tap interaction should map to.
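One way to model such a mapping is as a simple function. The following pure-Swift sketch uses a local PreferredTapAction enum as a stand-in for UIKit’s UIPencilPreferredAction, and the editor commands are made up for the example:

```swift
// A local stand-in for UIPencilPreferredAction, mirroring its cases.
enum PreferredTapAction {
    case ignore
    case showColorPalette
    case switchEraser
    case switchPrevious
}

// Hypothetical commands for a document editing app.
enum EditorCommand {
    case none
    case showFormattingPalette
    case toggleMarkupMode
    case switchToPreviousTool
}

// Translate the user's system-wide preference into the closest
// equivalent action that our (non-drawing) app supports.
func editorCommand(for action: PreferredTapAction) -> EditorCommand {
    switch action {
    case .ignore:
        return .none
    case .showColorPalette:
        return .showFormattingPalette
    case .switchEraser:
        return .toggleMarkupMode
    case .switchPrevious:
        return .switchToPreviousTool
    }
}
```

Keeping the mapping in a single function like this also makes it easy to later replace with a user-configurable setting.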

For example, in a document editing app we might use the above interaction to do copy & paste, or in a game that uses the pencil for input we might let the user perform a common move using a quick double-tap.

A little-known fact about iOS is that it has actually fully supported external displays for years - all the way since iOS 3.2! But up until now, you either had to use a Lightning-to-HDMI adapter or AirPlay (with significant delays) to take advantage of that support, and as a result, very few apps used this capability in any meaningful way.

This might all change with the new iPad Pro models which, thanks to their standard USB-C port, are able to natively connect to many types of displays - using resolutions up to 5K at 60 Hz.

While external displays simply mirror the screen of the connected device by default, an app can actually take control of such a display and essentially treat it as an additional UIScreen that can be used to render anything.

To do that, all we have to do is to use NotificationCenter to observe UIScreen.didConnectNotification. Once an external display is connected, that notification will be posted, and - using the notification’s object - we’ll get access to the newly connected UIScreen:

```swift
class ExternalDisplayController {
    // We retain the external window ourselves, since the screen
    // only keeps a weak reference to it.
    private var window: UIWindow?

    func activate() {
        NotificationCenter.default.addObserver(
            forName: UIScreen.didConnectNotification,
            object: nil,
            queue: .main
        ) { [weak self] notification in
            let screen = notification.object as! UIScreen
            self?.setup(screen)
        }
    }
}
```

To set up the external display for rendering, we’ll need to create a new UIWindow, assign the passed UIScreen to its screen property, and make it the key window for that new screen - just like when creating an app’s main window without using a storyboard. We also have to retain the new window ourselves, since the screen will only keep a weak reference to it - giving us an implementation looking something like this:

```swift
private extension ExternalDisplayController {
    func setup(_ screen: UIScreen) {
        let window = UIWindow(frame: screen.bounds)
        window.screen = screen
        window.makeKeyAndVisible()
        self.window = window
        setup(window)
    }
}
```

Now, to actually use our new UIWindow, let’s say that we’re building some form of markup editor - and that we want to use the external display to show a preview of what the markup will look like when rendered. Since we’re dealing with standard UIKit classes, we can simply create a UIViewController that’ll show our preview and assign it as our new window’s root view controller, like this:

```swift
private extension ExternalDisplayController {
    func setup(_ window: UIWindow) {
        let previewViewController = PreviewViewController()
        editorViewController.previewer = previewViewController
        window.rootViewController = previewViewController
    }
}
```

We’re now able to render anything we want on both the iPad screen and the external display at the same time - and since everything is running natively within our own app - we can easily synchronize the rendering between the two displays. Pretty cool! 😎

The only thing that’s left is to handle an external display being disconnected. To do that, we’ll observe another notification - UIScreen.didDisconnectNotification - and perform any cleanup needed in our handler. In our case, we simply need to remove our reference to the external UIWindow and nil out our editor view controller’s previewer - like this:

```swift
NotificationCenter.default.addObserver(
    forName: UIScreen.didDisconnectNotification,
    object: nil,
    queue: .main
) { [weak self] _ in
    self?.window = nil
    self?.editorViewController.previewer = nil
}
```

The use of external displays might remain a niche use case, at least for a while, but the fact that we can easily use an external display to render anything we want is pretty exciting - and opens up a ton of possibilities for many kinds of apps. A video editor might show the full video on the big screen, a game might use the iPad as a controller and render the game itself on the external display, and a code editor might let you live edit a program while it’s running on the other screen, and so on.

Finally, let’s take a look at how we can easily add support for keyboard shortcuts on iOS. Just like external displays, this is not a feature that’s unique to the iPad Pro - but the fact that many people actually connect keyboards to their iPads makes keyboard shortcuts more relevant than ever before.

On iOS, keyboard shortcuts are represented using UIKeyCommand and are implemented on top of the responder chain. That means that any participant in the responder chain - from the top all the way down to the current first responder - is able to declare new keyboard shortcuts.
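To make that lookup concrete, here’s a simplified, pure-Swift model of how commands declared at different points of a responder chain all become available - the Responder type below is just an illustrative stand-in for UIResponder, not the real API, and the actual lookup is performed by UIKit:

```swift
// A minimal stand-in for UIResponder: each responder may declare
// key commands (modeled as plain strings here) and may point to
// the next responder up the chain.
class Responder {
    var next: Responder?
    var keyCommands: [String] { return [] }
}

// Walk the chain from the first responder upwards, collecting
// every command declared along the way - commands declared closer
// to the first responder come first.
func collectKeyCommands(from firstResponder: Responder) -> [String] {
    var commands = [String]()
    var responder: Responder? = firstResponder

    while let current = responder {
        commands.append(contentsOf: current.keyCommands)
        responder = current.next
    }

    return commands
}
```

So a view controller only needs to declare the shortcuts that make sense in its own context - the system combines them with whatever the rest of the chain provides.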

Let’s continue with the editor example from before, and add two keyboard shortcuts that’ll let the user either create a new document by pressing ⌘N , or open a document picker using ⌘O . To do that, first let’s add a factory method to our EditorViewController that produces an array containing the key commands that we want to support, like this:

```swift
private extension EditorViewController {
    func makeKeyboardShortcuts() -> [UIKeyCommand] {
        let newDocumentCommand = UIKeyCommand(
            input: "N",
            modifierFlags: .command,
            action: #selector(createNewDocument),
            discoverabilityTitle: "Create a new document"
        )

        let openDocumentCommand = UIKeyCommand(
            input: "O",
            modifierFlags: .command,
            action: #selector(showDocumentPicker),
            discoverabilityTitle: "Open a document"
        )

        return [newDocumentCommand, openDocumentCommand]
    }
}
```

To pass our new key commands to the system, all we have to do is to override our view controller’s keyCommands property and return our array of commands. To avoid having to re-create the same key commands multiple times, and to only do so when actually needed, we’ll use a lazy property to keep track of them internally as well:

```swift
class EditorViewController: UIViewController {
    override var keyCommands: [UIKeyCommand]? {
        return keyboardShortcuts
    }

    private lazy var keyboardShortcuts = makeKeyboardShortcuts()
}
```

With the above in place, our app now responds to its first two keyboard shortcuts, and adding new ones is just a matter of adding new instances of UIKeyCommand to the array produced by our factory method. Nice and easy! 🎉

For me, the iPad - and especially the new iPad Pro - is an incredibly inspiring device. Its big screen, fast internals and accessories like the Apple Pencil really make it into a powerful canvas for us developers to use, and hopefully more teams will take advantage of those capabilities going forward.

Being able to work on iPad-specific features can sometimes be difficult, and I know that some teams find it hard to prioritize work like that. However, adding support for some of the iPad's more powerful features doesn't always have to be a huge amount of work, and can sometimes be a great way to delight users - and make our apps more capable - with little effort.

What do you think? Have you already added support for some of these iPad-specific features in your app, or is it something that you'll try out - and what do you think about the iPad as a development platform as a whole? Let me know - along with your questions, comments and feedback - on Twitter @johnsundell.

Thanks for reading! 🚀