An interview with Sean Parent

published at 06.07.2016 16:04 by Jens Weller

During C++Now I had the opportunity to start an interview with Sean Parent! I met Sean for the first time in 2012, when he also gave a keynote at C++Now, and have always been curious about his views on programming. He is known for a few outstanding talks and keynotes in the community. Originally I planned to film this interview at C++Now, but because some AV equipment wasn't working properly, I am publishing it in written form instead. This also left room for extra questions, and Sean had time to answer each of them properly - thank you for this!

Some of these questions came from the community or attendees of C++Now, thanks for your inspiration!

Let's start with the introduction: who is Sean Parent?

Introducing yourself is always the toughest question. I’ve been a software developer for nearly 30 years and I’ve been fortunate enough to work on some great products and with some great people. I started my career at a small company, Orange Micro, where I wrote a print spooler for Mac (before MultiFinder) and “hijacked” Apple’s ImageWriter printer drivers to work with a wide variety of printers. I worked at Apple in the QuickDraw GX printing group (GX didn’t survive but lives on in spirit in Skia) and I worked on the PowerMac team that did the transition from 68K processors to PowerPC. I joined the Photoshop team at Adobe during the development of Photoshop 3.0, and managed Adobe’s Software Technology Lab for many years. I worked briefly at Google on the ChromeOS project, and then returned to Adobe where I’ve been working on mobile and web digital imaging products since.

What is your role as a principal scientist at Adobe?

The role of a principal scientist in general is to act as a multiplier. I act as a consultant for individuals and groups, and I work on various products and projects where I perceive a need. I helped to bring the rendering technology from Lightroom and the Photoshop Camera Raw plugin to mobile, first for the Revel product (a now-defunct Lightroom-like product for non-professionals) and then for Lightroom Mobile. I also brought the engine up inside the browser, which is now part of Lightroom Web.

How do you feel about the success and hype around some of your talks, like "C++ Seasoning"?

I’m very proud of my talks and pleased that some have been very well received. I’ve also given some very bad talks but thankfully those seem to be quickly forgotten. “Seasoning” in particular has struck a chord of balancing theory and practice. I have to give Herb Sutter credit as he pushed me to give a talk where developers could take something home and apply it immediately. It is odd when I see online threads discussing what I did, or didn’t, mean by a particular sentence in a talk - as if I were so far removed that they couldn’t just send me an email and ask. Speaking has provided some amazing opportunities to travel and meet people. Last year I spoke in Moscow, and this year I’ll give a talk in Wroclaw, Poland. I’ve given talks at academic conferences, corporations, and universities. I always try to stay for longer than my talk, for the entire conference if possible, so I can chat informally with the people at the event. This is where I find new ideas, and I almost always come away learning something I didn’t know and meeting some amazing individuals.

What feature would you remove from C++, if you could?

If I could remove only one? It would probably be generalized argument-dependent lookup (sorry Andrew!). But it would have to be replaced with a new mechanism (is that allowed?). I want “semantic namespaces” and ADL works against that. Operators should default to the global namespace, which would remove the primary need for ADL. Unfortunately ADL causes people to always qualify names, and it has a serious detrimental impact on code appearance and our ability to refine operations.

Your talks series is about better code, what is better code for you?

Better code is correct, efficient, and reusable. As an industry we produce so much code that is a one-off. It is such a waste of time and talent. I try to encourage every developer to write all code as if it were going to be part of a library, ideally part of the standard, and to use library components. Then take at least one piece of code you’ve written each year and propose it to a widely used library.

Beyond C++17, what feature gets you the most excited?

Concepts. The current concept proposal should have been part of C++17. My fear is that the extra freedom afforded by the technical specification process will allow concepts to expand, and that they will become unnecessarily complicated. Time constraints can be a good thing.

What is your opinion on Garbage Collection?

I think garbage collectors, specifically tracing collectors, are of use for some specific problems but have no place as part of the general allocation scheme for a general purpose language. Tracing collectors make an unnecessary and incorrect tradeoff of performance for correctness where the result is frequently to hide correctness issues and almost always to introduce an unacceptable performance impact.

Is there something in C++, that you don't understand?

I can never keep the rvalue, prvalue, xvalue, and glvalue categories straight. On occasion I learn something new about the language that I didn’t know (or had forgotten) so clearly such instances point out something I didn’t understand previously. Which is a way to say that if I knew what I didn’t understand I’d make an attempt to learn it. Learning the language (and other parts of the system we use) is part of being professionally responsible.

What are your thoughts on Functional Programming?

I think there is much that developers can learn from functional programming, but FP should not be taken as a religion. The goal is to program the machine we have with correct and efficient code. Functional programming removes some of the efficient basis operations from our vocabulary to guarantee a particular kind of type and memory safety. For a given operation, either the functional or the procedural form may be the more efficient one. If I can prove the correctness of the efficient form, why shouldn’t I use it? The guarantees provided in functional programming are also highly overrated. Functional languages are still Turing complete, so I can provably make all the same mistakes.

I remember a slide from 2012: two shared pointers sharing a resource, bound together by a heart-like outline. Some time later I realized that this slide did not express your love for shared_ptr (a boost go-to solution for smart pointers prior to C++11), but was rather meant sarcastically.

What are your thoughts on shared_ptr and other smart pointers?

shared_ptr is a useful tool for creating other types, but shouldn’t appear in an interface.

How should one interface with code like legacy libraries that can't be put in a better state?

If you understand how the basis operations of a regular type map to the conventions of the library (or language) you are using, then you can write better code using that system. For example, if I’m coding on an Apple platform in Objective-C, then Apple has the convention that if a function begins with alloc, new, or copy, the object returned is not shared. Otherwise the object may be shared (there are exceptions where the object is only shared by an autorelease pool, but since an autorelease pool doesn’t read or write the object, we can treat it as singly owned).

To avoid sharing and to allow us to reason about objects locally, we can establish additional conventions (for background: in Objective-C you have reference-counted objects where retain/release controls the reference count). Don’t retain an object unless it is immutable (if using ARC [Automatic Reference Counting], this translates to: don’t store the object pointer, except on the local stack, for an object you don’t own unless the object is immutable). Copy non-immutable objects instead of sharing them. Following these simple rules allows you to reason about your code locally. Before ARC, I wrote all my code like this. I almost never called retain on an object explicitly, and updating to ARC only took a few minutes to delete the handful of explicit calls to retain that I had.

I know you are not an active member of the C++ Committee. Given the two options, either to break things or to stay backward compatible with previous standards, which one do you favor?

Both. I think we need a standard versioning system for both the language and library. On the library side we have part of the mechanism with inline namespaces, but we don’t have a standard convention for using them for versioning. For the language we don’t have a standard mechanism, but many compilers allow us to select the language version. I’d like to be able to write code like: using cpp20 { /* C++20 code */ } using cpp17 { /* C++17 code */ } with a requirement that a conforming compiler supports older versions. Then the language and library would be free to break things aggressively in order to correct previous mistakes. We would need to define what is compatible between language versions, so not everything would be open for change, but the amount that could change would increase substantially.

Your C++Now keynote was also about reasoning with strange code, what are your thoughts and motivations about this?

In my keynote I used the process of learning a new code base as a device to discuss what good code is. The process of how people go about creating a mental model of any large system is an interesting area of study. For code, I try to consciously build an idealized model and use that to understand both how the actual code works as well as how it should work. It is not, however, an area I’ve studied, it is just how I approach the problem.

Style guides and coding guidelines have always been popular, be it Google's famous coding rules or the new GSL. What are your thoughts on these?

The Google guidelines contain both style and coding practice guidelines and are generally horrible in both areas. The C++ Core Guidelines and GSL do a much better job by trying to focus on how to avoid common errors and provide additional information to the compiler and reader. I think enforcing any coding practice as “law” (which is how Google behaves) is a mistake. The goal is code that is correct, efficient, and reusable, and to the extent that guidelines work counter to those principles, they are wrong.

And related to this, what style and coding guidelines do you suggest?

Briefly: Write all code as if it were a library you intend to submit for standardization. Focus on the interface. Borrow, don’t invent. Write complete and efficient types. Use algorithms instead of loops. Avoid inheritance and owning pointers in your interface. Make your data structures explicit. Use a task system, message queues, futures with continuations, and parallel algorithms instead of threads, mutexes, semaphores, and condition variables. It is important to be able to scale down to a single thread as well as up to many. Embrace nothingness. Finally, don’t worry about how much space there is around parentheses, what line the curly brace goes on, or whether you use spaces or tabs. These are bike-shed arguments.

I know you have presented about the usefulness of destructive move, do you favor this as the default solution?

I’m not in favor of a destructive move. At C++Now I gave a talk where I used move, as currently defined by C++, as an example of an unsafe and inefficient operation. Although those words have negative connotations, they have precise meaning. However, many people misconstrued those statements to mean “bad”. The thrust of my talk was about writing complete and efficient types, but it happened to follow a talk from Eric Niebler, where he discussed the standard requirements for a moved-from type. I argued that despite what the standard states, the only meaningful state for a moved-from object is that it is partially formed (definition in Elements of Programming) or destructed (a destructive move). The standard requirements cannot actually be satisfied, and attempting to satisfy them just leads to problems. Destructive move could be safer in more cases and more efficient, but it would also require a significant rework of object lifetimes in the standard, and the resulting complexity is not, in my opinion, worth the change. There is a long discussion of it here. The key point is that one cannot have both absolute efficiency and absolute safety. We must learn to take a structured approach to both efficiency and safety.

I want to thank Sean for answering all these questions in detail; it was a great exchange, and doing this interview was a lot of fun. Also, Sean is currently writing a book, which might already be available this year. Regarding Meeting C++, we did talk about next year's conference - details on this probably next year :)

Join the Meeting C++ Patreon community!

This and other posts on Meeting C++ are enabled by my supporters on Patreon!