This is the fourth and final post in a series about proxy iterators, the limitations of the existing STL iterator concept hierarchy, and what could be done about it. The first three posts describe the problems of proxy iterators, the way to swap and move their elements, and how to rigorously define what an Iterator is.

This time around I’ll be focusing on the final problem: how to properly constrain the higher-order algorithms so that they work with proxy iterators.

A Unique Algorithm

In this post, I’ll be looking at one algorithm in particular and how it interacts with proxy iterators: unique_copy . Here is its prototype:

template <class InIter, class OutIter, class Fn>
OutIter unique_copy(InIter first, InIter last,
                    OutIter result, Fn binary_pred);

This algorithm copies elements from one range to another, skipping adjacent elements that are equal, using a predicate for the comparison.

Consider the following invocation:

std::stringstream sin{"1 1 2 3 3 3 4 5"};
unique_copy(
    std::istream_iterator<int>{sin},
    std::istream_iterator<int>{},
    std::ostream_iterator<int>{std::cout, " "},
    std::equal_to<int>{}
);

This reads a bunch of ints from sin and writes the unique ones to cout . Simple, right? This code prints:

1 2 3 4 5

Think for a minute how you would implement unique_copy . First you read an int from the stream. Then you write it out to the other stream. Then you read another int. You want to compare it to the last one. Ah! You need to save the last element locally so that you can do the comparisons. Interesting.

When I really want to understand how some part of the STL works, I check out how the feature is implemented in ye olde SGI STL. This codebase is so old, it may have first been written on parchment and compiled by monks. But it’s the cleanest and most straightforward STL implementation I know, and I recommend reading it through. Here, modulo some edits for readability, is the relevant part of unique_copy :

// Copyright (c) 1994
// Hewlett-Packard Company
// Copyright (c) 1996
// Silicon Graphics Computer Systems, Inc.
template <class InIter, class OutIter, class Fn, class _Tp>
OutIter __unique_copy(InIter first, InIter last,
                      OutIter result, Fn binary_pred, _Tp*) {
  _Tp value = *first;
  *result = value;
  while (++first != last)
    if (!binary_pred(value, *first)) {
      value = *first;
      *++result = value;
    }
  return ++result;
}

(The calling code ensures that first != last , which explains why this code skips that check. And the strange _Tp* argument is so that the iterator’s value type can be deduced; the monks couldn’t compile traits classes.) Note the value local variable, and especially note the call to binary_pred , which is passed a value and a reference. Keep that in mind because it’s important!

The Plot Thickens

You probably know more about unique_copy now than you ever cared to. Why do I bring it up? Because it’s super problematic when used with proxy iterators. Think about what happens when you try to pass vector<bool>::iterator to the above __unique_copy function:

std::vector<bool> vb{true, true, false, false};
using R = std::vector<bool>::reference;
__unique_copy(
    vb.begin(), vb.end(),
    std::ostream_iterator<bool>{std::cout, " "},
    [](R b1, R b2) { return b1 == b2; },
    (bool*)0
);

This should write a “true” and a “false” to cout , but it doesn’t compile. Why? The lambda is expecting to be passed two objects of vector<bool> ’s proxy reference type, but remember how __unique_copy calls the predicate:

if (!binary_pred(value, *first)) { /*...*/

That’s a bool& and a vector<bool>::reference . Ouch!

“They’re just bools, and bools are cheap to copy, so take them by value. Problem solved.” Well, sure, but what if they weren’t bools? What if we proxied a sequence of objects that are expensive to copy? Then the problem is harder.

So for lack of anything better (and pretending that bools are expensive to copy, bear with me), you write the lambda like this:

[](bool& b1, R b2) { return b1 == b2; }

Yuk. Now you port this code to another STL that happens to call the predicate with reversed arguments and the code breaks again. 🙁

My point is this: once we introduce proxy iterators into the mix, it becomes non-obvious how to define predicates for use with the algorithms. Sometimes the algorithms call the predicates with references, sometimes with values, and sometimes — like unique_copy — with a mix of both. Algorithms like sort first call the predicate one way, and then later call it another way. Vive la différence!

A Common Fix

This problem has a very simple solution in C++14: a generic lambda. We can write the above code simply, portably, and optimally as follows:

std::vector<bool> vb{true, true, false, false};
std::unique_copy(
    vb.begin(), vb.end(),
    std::ostream_iterator<bool>{std::cout, " "},
    [](auto&& b1, auto&& b2) { return b1 == b2; }
);

No matter what unique_copy throws at this predicate, it will accommodate it with grace and style.

But still. Polymorphic function objects feel like a big hammer. Some designs require monomorphic functions, like std::function or virtuals, or maybe even a function pointer if you have to interface with C. My point is, it feels wrong for the STL to require the use of a polymorphic function for correctness.

To restate the problem, we don’t know how to write a monomorphic predicate for unique_copy when our sequence is proxied because value_type& may not convert to reference , and reference may not convert to value_type& . If only there were some other type, some other reference-like type, they could both convert to…

But there is! If you read my last post, you know about common_reference , a trait that computes a reference-like type (possibly a proxy) to which two other references can bind (or convert). In order for a proxy iterator to model the Iterator concept, I required that an iterator’s reference type and its value_type& must share a common reference. At the time, I insinuated that the only use for such a type is to satisfy the concept-checking machinery. But there’s another use for it: the common reference is the type we could use to define our monomorphic predicate.

I can imagine a future STL providing the following trait:

// An iterator's common reference type:
template <InputIterator I>
using iterator_common_reference_t =
    common_reference_t<
        typename iterator_traits<I>::value_type &,
        typename iterator_traits<I>::reference>;

We could use that trait to write the predicate as follows:

using I = vector<bool>::iterator;
using C = iterator_common_reference_t<I>;
auto binary_pred = [](C r1, C r2) {
    return r1 == r2;
};

That’s certainly a fair bit of hoop-jumping just to define a predicate. But it’s not some new complexity that I’m introducing. unique_copy and vector<bool> have been there since 1998. I’m just trying to make them play nice.

And these hoops almost never need to be jumped. You’ll only need to use the common reference type when all of the following are true: (a) you are dealing with a proxied sequence (or are writing generic code that could deal with proxied sequences), (b) taking the arguments by value is undesirable, and (c) using a polymorphic function is impossible or impractical for some reason. I wouldn’t think that’s very often.

Algorithm Constraints

So that’s how things look from the perspective of the end user. How do they look from the other side, from the perspective of the algorithm author? In particular, how should unique_copy look once we use Concepts Lite to constrain the algorithm?

The Palo Alto TR takes a stab at it. Here is how it constrains unique_copy :

template <InputIterator I, WeaklyIncrementable Out, Semiregular R>
requires Relation<R, ValueType<I>, ValueType<I>> &&
         IndirectlyCopyable<I, Out>
Out unique_copy(I first, I last, Out result, R comp);

There’s a lot going on there, but the relevant part is Relation<R, ValueType<I>, ValueType<I>> . In other words, the type R must be an equivalence relation that accepts arguments of the range’s value type. For all the reasons we’ve discussed, that doesn’t work when dealing with a proxied range like vector<bool> .

So what should the constraint be? Maybe it should be Relation<R, ValueType<I>, Reference<I>> ? But no: unique_copy doesn’t always copy a value into a local. It only does so when neither the input nor the output iterator models ForwardIterator. So sometimes unique_copy calls the predicate like pred(*i, *j) and sometimes like pred(value, *i) . The constraint has to be general enough to accommodate both.

Maybe it could also use the iterator’s common reference type? What if we constrained unique_copy like this:

template <InputIterator I, WeaklyIncrementable Out, Semiregular R>
requires Relation<R, CommonReferenceType<I>, CommonReferenceType<I>> &&
         IndirectlyCopyable<I, Out>
Out unique_copy(I first, I last, Out result, R comp);

This constraint makes a promise to callers: “I will only pass objects of type CommonReferenceType<I> to the predicate.” But that’s a lie. It’s not how unique_copy is actually implemented. We could change the implementation to fulfill this promise by casting the arguments before passing them to the predicate, but that’s ugly and potentially inefficient.

Really, I think we have to check that the predicate is callable with all the possible combinations of values and references. That sucks, but I don’t see a better option. With some pruning, these are the checks that I think matter enough to be required:

Relation<R, ValueType<I>, ValueType<I>> &&
Relation<R, ValueType<I>, ReferenceType<I>> &&
Relation<R, ReferenceType<I>, ValueType<I>> &&
Relation<R, ReferenceType<I>, ReferenceType<I>> &&
Relation<R, CommonReferenceType<I>, CommonReferenceType<I>>

As an implementer, I don’t want to write all that, and our users don’t want to read it, so we can bundle it up nice and neat:

IndirectRelation<R, I, I>

That’s easier on the eyes and on the brain.

Interesting Indirect Invokable Implications

In short, I think that everywhere the algorithms take a function, predicate, or relation, we should add a constraint like IndirectFunction , IndirectPredicate , or IndirectRelation . These concepts will require that the function is callable with a cross-product of values and references, with an extra requirement that the function is also callable with arguments of the common reference type.

This might seem very strict, but for non-proxy iterators, it adds exactly zero new requirements. And even for proxy iterators, it’s only saying in code the things that necessarily had to be true anyway. Rather than making things harder, the common reference type makes them easier: if your predicate takes arguments by the common reference type, all the checks succeed, guaranteed.

It’s possible that the common reference type is inefficient to use. For instance, the common reference type between bool& and vector<bool>::reference is likely to be a variant type. In that case, you might not want your predicate to take arguments by the common reference. Instead, you’d want to use a generic lambda, or define a function object with the necessary overloads. The concept checking will tell you if you forgot any overloads, ensuring that your code is correct and portable.

Summary

That’s the theory. I implemented all this in my Range-v3 library. Now I can sort a zip range of unique_ptr s. So cool.

Here, in short, are the changes we would need to make the STL fully support proxy iterators:

1. The algorithms need to use iter_swap consistently whenever elements need to be swapped, and iter_swap should be a documented customization point.
2. We need an iter_move customization point so that elements can be moved out of and back into a sequence. This gives iterators a new rvalue_reference associated type.
3. We need a new common_reference trait that, like common_type , can be specialized on user-defined types.
4. All iterators need to guarantee that their value_type and reference associated types share a common reference. Likewise for value_type / rvalue_reference , and for reference / rvalue_reference .
5. We need IndirectFunction , IndirectPredicate , and IndirectRelation concepts as described above. The higher-order algorithms should be constrained with them.

From the end users’ perspective, not a lot changes. All existing code works as it did before, and all iterators that are valid today continue being valid in the future. Some proxy iterators, like vector<bool> ’s, would need some small changes to model the Iterator concept, but afterward those iterators are on equal footing with all the other iterators for the first time ever. Code that deals with proxy sequences might need to use common_reference when defining predicates, or to use a generic lambda instead.

So that’s it. To the best of my knowledge, this is the first comprehensive solution to the proxy iterator problem, a problem we’ve lived with from day one, and which only promises to get worse with the introduction of range views. There’s some complexity for sure, but the complexity seems to be necessary and inherent. And honestly I don’t think it’s all that bad.

Future Directions

I’m unsure where this goes from here. I plan to sit on it for a bit to see if any better solutions come along. There’s been some murmuring about a possible language solution for proxy references, but there is inherent complexity to proxy iterators, and it’s not clear to me at this point how a language solution would help.

I’m currently working on what I believe will be the first draft of a Ranges TS. That paper will not address the proxy iterator problem. I could imagine writing a future paper that proposes the changes I suggest above. Before I do that, I would probably try to start a discussion on the committee mailing lists to feel people out. If any committee members are reading this, feel free to comment below.

Thanks for following along, and thanks for all your encouraging and thought-provoking comments. Things in the C++ world are moving fast these days. It’s tough to keep up with it all. I feel blessed that you all have invested so much time exploring these issues with me. <3

As always, you can find all code described here in my range-v3 repo on github.