WWDC’19 Trip Report

I was fortunate to “win” registration at Apple’s World-Wide Developers Conference (AKA Dub-Dub) this year. Judging by the show of hands during the keynote, there were quite a lot of first-timers like myself, so I wouldn’t be surprised if Apple dialed up the odds for newbies to make that happen. Here are my impressions, starting with a non-technical glance at the conference, followed by a few technical sections on platforms, Swift, C++, Xcode and machine learning.

Conference at a Glance

According to some of the staff members running the show, there were around 5,500 attendees, which, counting Apple employees and staff, came to around 7,000 people overall. Developers (blue badge holders) were treated as rock stars, with staff cheering everyone who came in, went out or simply passed by. It felt like they hired hundreds of Steve Ballmers to run around and cheer everyone up. Interestingly, the youngest attendees I saw were probably not even 15 years old!

There are a few ways one can get to attend WWDC. Some companies (possibly those with a presence in the AppStore) get a certain number of tickets (I’m not sure if they must pay for those). That number is not huge: Microsoft gets 10, I heard. If you aren’t lucky enough to be an employee of one of these companies, you can try your luck in the lottery for regular registration. Winning that lottery will set you back $1,600. Apple also pays quite a number of technology bloggers and journalists to attend the conference: besides having all their costs and registration covered, they get a different kind of badge and front-of-line access to all the events. Finally, Apple employees can attend by giving a talk or doing a bit of volunteering, but they had to wait in a separate line and enter every event last.

The conference lasted 5 full days and consisted of sessions split into 4 parallel tracks of 40–60 minute talks, along with separate labs where people could have questions answered or specific problems with their apps looked at by Apple engineers. Somewhat surprisingly, sessions didn’t have any Q&A after them, a change they apparently made deliberately 4 years ago. The first day consisted entirely of 2 very long keynotes (the Keynote and the Platforms State of the Union) and the Apple Design Awards ceremony, with no labs. Most events took place in the morning (9am–12pm) or afternoon (2pm–6pm) blocks, but quite a few were scheduled outside those hours. Early in the morning there were fitness activities promoting Apple’s health apps, during lunch there were a few invited talks that weren’t recorded, and two of the evenings featured a live band with drinks and the gala bash with Weezer as the guest artist. All extracurricular activities had magnetic emoji pins people could earn by attending, and quite a few people were actively hunting to collect all of them. Facebook groups were set up acting primarily as pin exchanges, so the whole idea was quite a hit. Apple provided boxed lunches, breakfast pastries and snacks during the breaks, including some kosher options, which were distributed separately and guarded!

Apple being Apple, a few things expectedly stood out. All presentations followed the exact same template and were probably checked by Jonathan Ive himself for any divergence from the house style. They announced new symbol support for all of their platforms at the conference, and all the presentations religiously used those symbols in relevant places. At the end of each presentation there were very useful links to related sessions at this and previous WWDC conferences. Noticeably, there was a significant number of visually impaired and disabled attendees at the conference, and a large number of sessions, labs and lunchtime gatherings dedicated to accessibility in Apple’s own platforms and 3rd-party apps. One of the lunchtime invited speakers was the famous blind architect Chris Downey, who lost his sight at 45 and now works on making buildings more accessible to visually impaired and blind people. Another lunchtime gathering discussed gender neutrality in apps and let people check their apps for gender biases.

The Apple Design Awards ceremony was probably as anticipated as the two keynotes that preceded it, which is likely why all 3 events are packed into the first day for all the attendees. The ceremony is like the Oscars of the app world: winning it brings huge recognition and bragging rights to both the app and its creators, not counting the jackpot they will get from the exposure. Interestingly, quite a few attendees I talked to boycotted it on the premise that a disproportionate number of awards go to games. This year there were 9 awards, 6 of which went to games. I didn’t know what to expect, since I’d never seen this ceremony before, but I assumed that the lion’s share of the awards would go to blockbusters made on multi-million dollar budgets. Surprisingly, only one such award was given, while the rest went to pretty small indie teams. I’m not going to spoil who won what, but I’ll say that right after the ceremony there was a category for Apple Design Awards among the editor picks on the AppStore. For those curious about what the selection process looks like and what the committee looks for in apps that deserve this honor, there was a session on Designing Award Winning Apps and Games.

One feature introduced across all of Apple’s platforms this year was Sign-in with Apple. Their new federated identity framework will not share any user details with the app and allows users to generate an app-specific throwaway forwarding email address to hide their actual email address from the app. According to the updated AppStore Review Guidelines: “sign-in with Apple will be required as an option for users in apps that support third-party sign-in when it is commercially available later this year”.

The new macOS 10.15 Catalina will finally split iTunes into multiple apps: Apple Music, Apple Podcasts and Apple TV. The upgrade will also feature a read-only system volume to protect the OS and perhaps make it more suitable for containerization. They also pushed notarization further: an automated Apple service that checks and certifies developers’ Mac apps for the absence of certain kinds of malicious behavior prior to distribution. The UIKit framework was brought from iOS to macOS in a development that deserves its own paragraph.

Apple has been introducing features to the iPad that weren’t available on the iPhone for a while now: split view, slide over, drag-n-drop etc. This time they bit the bullet and split the software into its own iPadOS. Among other things, it supports the above-mentioned multitasking features, new 3-finger productivity gestures, PencilKit for low-latency drawing in apps and some UI features that were previously only available on macOS. With the addition of features emulating many of the desktop interactions to iPadOS and the above-mentioned port of UIKit to macOS, Apple was encouraging developers to port their iPad apps to the Mac. Xcode now has a new check-box in its project properties that will create a macOS target based entirely on the existing iOS code that targeted the iPad. They provide additional guidelines in Taking iPad Apps for Mac to the Next Level, but claim that the checkbox-only solution will already be functional. Custom frameworks built for the iPad will need a Mac version, but source frameworks will simply be rebuilt for the Mac.

iOS got support for a dark mode this year, and there were a number of talks on adopting it in apps: Implementing Dark Mode on iOS, Supporting Dark Mode in Your Web Content, What’s New in iOS Design. 3D Touch, which they introduced to much fanfare with the iPhone 6s, seems to be getting deprecated, as very few users knew about its existence. I personally heard about it, but never knew when I could use it, because UI elements give no indication of whether 3D Touch can be applied. Instead, 3D Touch will be universally replaced by a long press, which will also be available on devices without 3D Touch capabilities. Another great addition was built-in support for swipe typing, which was previously only available via 3rd-party keyboards like Microsoft-acquired SwiftKey. Maps got updated with 3D lidar-built maps, plus users can now allow apps to access their location only once. They’ve also added support for indoor navigation to iOS, allowing apps to use it. The text-to-speech APIs available to apps were also significantly improved: instead of stitching sounds, the technology used before, they now use a neural-net-based technology, which makes the result almost indistinguishable from real speech.

The biggest announcement for watchOS was that it will get its own on-device AppStore and thus won’t require an iPhone anymore to download apps (I’m not sure whether an iPhone is still required to register the watch). One will also be able to write independent watchOS apps that do not require a tethered iPhone app. With this separation, they introduced new audio streaming APIs on the watch (I guess audio is one of the most frequent use cases there). There was also a slew of new watch apps that previously existed as 3rd-party apps: an audiobook app, a long-overdue voice memo app, which for some reason they resisted adding till now, and a cycle tracking health app for women.

I don’t remember many announcements about tvOS, besides the fact that it got beautiful new underwater screen savers, but I later heard that they will be adding support for Xbox One S and PlayStation 4 controllers to tvOS, iOS and macOS, and I quote: “because they are the best game controllers in the industry”.

Swift

According to the keynote, there are about 450,000 apps in the AppStore built with Swift. This year Swift got a number of additions, which were mainly covered in the What’s New in Swift session. Most notably, it now provides ABI stability and module stability (the latter is forward-compatibility of a library interface with new compilers, a form of no-breaking-changes guarantee), which allowed them to introduce a shared run-time, packages and binary frameworks. In that push they switched all their strings from UTF-16 to UTF-8, and the string representation now includes a small-string optimization. Along with some other compiler optimizations, they now claim 15 times faster interop with Objective-C’s NSString and a 20% speedup on their typical benchmarks. The language has been extended with NSDMI-like syntax, implicit returns, opaque result types, property wrapper types, SIMD operators (point-wise extensions) and a pretty cool facility for embedded DSLs within the language (currently in beta), which powers their other major addition this year: SwiftUI. They have also implemented the Language Server Protocol for all of their C-family languages, to be used by 3rd-party editors and tools.
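To make a few of these language additions concrete, here is a minimal, self-contained sketch of property wrappers, implicit returns and an opaque result type; the Clamped wrapper and makeID function are my own illustrative names, not Apple API:

```swift
// A hypothetical property wrapper that keeps an Int within a range.
@propertyWrapper
struct Clamped {
    private var value: Int
    let range: ClosedRange<Int>

    init(wrappedValue: Int, _ range: ClosedRange<Int>) {
        self.range = range
        self.value = min(max(wrappedValue, range.lowerBound), range.upperBound)
    }

    var wrappedValue: Int {
        get { value }  // implicit return in a single-expression getter
        set { value = min(max(newValue, range.lowerBound), range.upperBound) }
    }
}

struct Volume {
    @Clamped(0...10) var level: Int = 11  // clamped to 10 on initialization
}

// Opaque result type: callers only learn the value is *some* Equatable;
// the concrete type (Int here) stays hidden behind the function.
func makeID() -> some Equatable { Int.random(in: 0..<1000) }

var v = Volume()
print(v.level)  // 10
v.level = -3
print(v.level)  // 0
```

The wrapper absorbs the clamping boilerplate at every assignment, which is the same mechanism SwiftUI later uses for things like state bindings.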

The addition that got the most coverage this year was undoubtedly SwiftUI. SwiftUI is a declarative syntax, based on the above-mentioned embedded DSL proposal, that is harnessed to declare UI elements. For an analogy, think of a XAML-like (HTML, or any other UI) language embedded into C++ as a DSL via a combination of expression templates, initializer lists, designated initializers, and constructor calls optionally followed by various modification methods that all return a reference to the object to allow chaining, etc. — essentially anything that allows you to capture an expression tree and assign it alternative semantics in the language. You can get a pretty good idea of the internals by reading Inside SwiftUI’s Declarative Syntax’s Compiler Magic. SwiftUI is supported on all of Apple’s platforms and will render common elements according to the best practices of the target platform, while also allowing customization for platform specifics. Xcode supports WYSIWYG editing of SwiftUI previews, as well as dedicated support beyond the usual debugging facilities in the Xcode debugger. And if that wasn’t enough SwiftUI material for you, there were also: SwiftUI Essentials, Data Flow through SwiftUI, Integrating SwiftUI into existing apps, Building Custom Views in SwiftUI, Accessibility in SwiftUI and SwiftUI on watchOS.
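As a toy illustration of the embedded-DSL facility behind SwiftUI, here is a tiny HTML-ish builder in pure Swift. One caveat: the feature shipped at WWDC in beta as “function builders” and was later stabilized under the @resultBuilder spelling used here; the NodeBuilder and tag names are mine, not SwiftUI’s:

```swift
// A result builder collects the bare expressions written inside a closure
// and combines them into a single value; SwiftUI's ViewBuilder does the
// same thing with views.
@resultBuilder
struct NodeBuilder {
    static func buildBlock(_ parts: String...) -> String {
        parts.joined(separator: "")
    }
}

// A function taking a builder-annotated closure: the closure body can list
// expressions without commas or explicit concatenation.
func tag(_ name: String, @NodeBuilder _ content: () -> String) -> String {
    "<\(name)>\(content())</\(name)>"
}

let page = tag("body") {
    tag("h1") { "Hello" }
    tag("p") { "WWDC" }
}

print(page)  // <body><h1>Hello</h1><p>WWDC</p></body>
```

The nested trailing closures read like a markup tree, yet the whole thing is ordinary type-checked Swift, which is exactly the trick SwiftUI builds on.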

C++

For all the languages Apple supports, there was only one talk not dedicated to Swift: What’s New in Clang and LLVM. I’m kidding; it was actually 1.5 talks: Metal shaders are also written in C++. The presence of C++ was very much felt in the above-mentioned new Swift features, though, thanks to Doug Gregor of C++ fame.

Overall it was an interesting talk, but it certainly didn’t match all the hoopla of the Swift talks. It started with how Apple uses LLVM bitcode to target 32- and 64-bit devices from the AppStore. Then it covered the new size-optimization switch -Oz and some Objective-C-specific space savings they implemented: metadata folding (2–7%) and hard-coding the offsets of instance variables in direct subclasses of NSObject, which are otherwise looked up at run time in Objective-C (2%). One accidental C++ space optimization came about because they stopped force-inlining libc++ STL container methods, since the debugger was getting confused by the line numbers in the inlined code; expectedly, they got a space win from all the functions they no longer inlined (7%). They also now allow suppressing destructors of global objects (either via an attribute or a compiler setting), because destruction of global objects doesn’t fit well into their application life cycle: foreground/background/suspended etc. (1%).

They went on to discuss some diagnostics they’ve added: calling a pure virtual function in constructors and destructors, mixing up the size and fill-value arguments of memset, detection of some cases of object slicing in which a move won’t happen from a local object upon return, a suggestion to use std::size instead of the sizeof/sizeof idiom, a warning about an explicitly defaulted constructor getting implicitly deleted (and why), etc.

The last part was about new Clang static analyzer checks: use-after-move bugs, dangling std::string::c_str() pointers and reference-counting bugs in DriverKit and IOKit. The latter checks are not purely language-based: they also take into account naming conventions and tacit knowledge of which methods are expected to return a pointer with an incremented count and which aren’t.

There were also 2 Clang/LLVM labs people could come to with their questions, but those were one-on-one sessions, so they weren’t useful for gathering any data on developers’ pain points and needs.

Xcode

Xcode 11 adds some long-overdue features that were already available in other IDEs: a mini-map, horizontal code separators that are also shown in member drop-downs, customizable editor layout, synchronization of documentation snippets with actual arguments, an inline diff of changes against the repo, a code review window with side-by-side diffs and more.

The Swift package manager has been given first-class integration into Xcode and works directly with GitHub, GitLab and BitBucket. There is a new history inspector for source control, and stashing and cherry-picking are now exposed via menus.

The UI designers have been updated to support dark-mode specializations. They also feature environment overrides that allow a developer to override some of the accessibility settings without making an actual change on the device. There are also new device conditions, network state and thermal state, which you can start or stop manually to test your app under those conditions.

With independent watch apps, the iPhone simulator is no longer needed to run the watch simulator either. The iPhone simulator is now built on Metal, so apps using Metal run much faster. The simulator is also 2x faster on warm boots and can provide 60 fps 90% of the time.

There have been some improvements with Testing in Xcode, although I didn’t attend that talk due to an overlap, and from other talks I didn’t quite get whether this was a brand-new feature or an improvement over an existing one. Instruments was also improved to allow hierarchical custom data, which can be particularly useful for measuring nested calls etc.

Machine Learning

Machine learning got a lot of coverage this year as well, and for a good reason: Apple now supports hardware-accelerated on-device training of ML models on all their platforms. This is important (and they kept repeating these points in almost every talk) for 2 reasons:

Privacy: the user’s data no longer has to leave the device for training

Personalization: a model can be customized to the user’s needs

Currently the typical workflow is for developers to train their models offline and deploy the same model to every user. With model personalization, users still start with a general (developer-supplied) model, but can now fine-tune it on the device, without the data ever leaving it. For example, instead of detecting all dogs in photos, I may be interested in detecting my dog in all my photos. The math behind model personalization is reusable and is now part of their Core ML 3 framework for all developers to use.
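To illustrate the idea (and only the idea; this is a toy sketch, not the actual Core ML 3 update API), here is a pure-Swift linear model that starts from developer-supplied “general” weights and is fine-tuned with a few gradient-descent passes over data that stays local:

```swift
// Toy stand-in for on-device personalization: a linear model y = w*x + b
// whose shipped weights get fine-tuned on the user's own data.
struct LinearModel {
    var w: Double
    var b: Double
    func predict(_ x: Double) -> Double { w * x + b }

    // Full-batch gradient descent on mean-squared error.
    mutating func fineTune(on data: [(x: Double, y: Double)],
                           learningRate: Double = 0.02, epochs: Int = 500) {
        let n = Double(data.count)
        for _ in 0..<epochs {
            var gw = 0.0, gb = 0.0
            for (x, y) in data {
                let err = predict(x) - y   // dLoss/dPrediction up to a factor
                gw += 2 * err * x / n
                gb += 2 * err / n
            }
            w -= learningRate * gw
            b -= learningRate * gb
        }
    }
}

// "General" model shipped by the developer...
var model = LinearModel(w: 1.0, b: 0.0)
// ...personalized on data that never leaves the device (here y = 2x + 1).
model.fineTune(on: [(0, 1), (1, 3), (2, 5), (3, 7)])
print(model.predict(4))  // close to 9.0, i.e. the learned 2x + 1
```

The real framework applies the same principle to far larger models, with the accelerator doing the gradient passes.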

Creating ML models is also very simple: Create ML is integrated directly into Xcode and can deal with 5 kinds of data: images, sound, activity, text and tabular data. They ran some of the NIST benchmarks as a demo within Create ML without ever doing any coding. It provides numerous classifiers out of the box:

Image Classifier — type of image, art style etc.

Sound Classifier — supports style transfer etc.

Activity Classifier — deals with motion data (e.g. the swim stroke recognizer in the workout app on the watch)

Text Classifier — can label text based on its content; works with words, sentences, paragraphs or articles

Tabular Classifier — categorizes a sample by a feature of interest, using the best of multiple classifiers. Also includes a tabular regressor that can quantify samples by a feature of interest.
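To make the tabular case concrete, here is a toy stand-in for what a tabular classifier does; the real Create ML MLClassifier trains inside Xcode on macOS, while this sketch just illustrates the idea with a simple nearest-centroid rule and made-up pet data:

```swift
// Toy tabular classifier: learn one centroid per label from training rows,
// then assign a new sample to the label with the nearest centroid.
struct TabularClassifier {
    private var centroids: [String: [Double]] = [:]

    init(rows: [[Double]], labels: [String]) {
        var sums: [String: (vec: [Double], count: Int)] = [:]
        for (row, label) in zip(rows, labels) {
            var entry = sums[label] ?? (Array(repeating: 0.0, count: row.count), 0)
            for i in row.indices { entry.vec[i] += row[i] }
            entry.count += 1
            sums[label] = entry
        }
        for (label, entry) in sums {
            centroids[label] = entry.vec.map { $0 / Double(entry.count) }
        }
    }

    func predict(_ row: [Double]) -> String {
        // Squared Euclidean distance is enough for ranking centroids.
        func dist(_ a: [Double], _ b: [Double]) -> Double {
            zip(a, b).map { ($0 - $1) * ($0 - $1) }.reduce(0, +)
        }
        return centroids.min { dist($0.value, row) < dist($1.value, row) }!.key
    }
}

// Columns: (weight in kg, height in cm); label: pet species (made-up data).
let clf = TabularClassifier(
    rows: [[4, 25], [5, 30], [30, 60], [35, 65]],
    labels: ["cat", "cat", "dog", "dog"])
print(clf.predict([6, 28]))   // cat
print(clf.predict([28, 55]))  // dog
```

Create ML automates exactly this kind of train-then-predict loop, picking a much stronger algorithm behind the scenes.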

They also provide domain-specific APIs based on their ML framework: Text Recognition in Vision Framework, Natural Language Framework, Image Saliency and Classification etc.

Conclusions

These were some of the broader areas I was curious about and attended sessions in. There were actually 4 times more sessions than I could physically attend, so definitely check the WWDC’19 website for all the videos.

Even though one does not really need to attend the conference to see all the talks, the reality is that I personally never have time to watch them, so this was a great opportunity to learn about Apple’s platforms and developer tools in just 5 days. Honestly, I think after these 5 days I know more about the state of Apple’s ecosystem than I do about the state of the Windows ecosystem, so attending a developer conference can also be viewed as a great onboarding tool. It’s also a great networking opportunity and a way to spend time off work, so you should definitely check it out in the future — I know I will!