The SafeCurves Scare:

Why SafeCurves is a misnomer

2020-04-19, updated 2020-04-20

Target audience

To really follow along with this blog post, you should be aware of the basics of elliptic-curve cryptography (hereinafter abbreviated as ECC). If you need this background, set this piece aside for a few days and check out ECCHacks.

I am, indeed, aware of the irony of linking an introduction to elliptic curves by the same author whose SafeCurves website I am about to criticize, but ECCHacks is nonetheless the best and most straightforward introduction to ECC that I am aware of. You'll probably need a few days to actually digest what's going on in there, though.

For the sake of being thorough, I'll add a few cliff notes of my own that pick up where ECCHacks leaves off, just so we're all on a level playing field. There are curve shapes other than the Edwards curves introduced by ECCHacks, namely Montgomery curves and (short) Weierstrass curves. Montgomery curves are notable for efficient x-only arithmetic (i.e. the y-coordinate is never actually required), which makes them a very attractive kind of curve for key agreement using elliptic-curve Diffie–Hellman (ECDH). Weierstrass curves are the ones you'll usually find in ECC standards (with a variation called Koblitz curves that omits one Weierstrass parameter entirely). The point-addition formulas differ for each type of curve.

What is SafeCurves?


From what I can tell, the target audience for SafeCurves is fairly narrow: people designing new cryptographic protocols who need to choose which curves to allow and have no backwards-compatibility requirements, as well as people devising new curves who want to make sure they don't repeat known mistakes.

Why SafeCurves is effectively meaningless for many people

For anyone falling outside of those categories, however, the website is of little use, yet its alarmist wording and presentation make it a ripe source of confusion. Please allow me to detail why I believe this to be the case.

SafeCurves conflates performance and security arbitrarily

SafeCurves dismisses curves that have known ways to be safe because the safe ways are slow, usually citing a conflict between efficiency and security. For example, the page on ladders specifically mandates that any safe curve be convertible to a curve expressed in Montgomery form so that particularly fast scalar multiplication is available. This requires the order of the curve to be divisible by 4; in other words, the criterion requires curves without prime order. It dismisses all other curves and brands them unsafe because of this property. An alternative for classic curves in Weierstrass form exists, namely the Brier–Joye ladder. While SafeCurves does acknowledge the Brier–Joye ladder's existence, it is immediately dismissed by saying that the Brier–Joye ladder is much slower than the Montgomery ladder: "[…] It thus creates a conflict between efficiency and security." And yes, it is true that the ladder is slow.
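To make the speed argument concrete, here is what the celebrated Montgomery ladder actually looks like: a from-scratch X25519-style sketch of my own, following the ladder structure in RFC 7748. (Python big integers are not constant-time, so this only illustrates the structure, not a secure implementation; the names `x25519` and `cswap` are mine.)

```python
# X25519-style Montgomery ladder on Curve25519, after RFC 7748.
# Sketch only: Python integers are not constant-time.
P = 2**255 - 19
A24 = 121665  # (486662 - 2) / 4

def cswap(swap, a, b):
    # conditional swap driven by arithmetic masking, not branches
    dummy = (-swap) & (a ^ b)
    return a ^ dummy, b ^ dummy

def x25519(k, u):
    k = (k & ((1 << 255) - 8)) | (1 << 254)  # RFC 7748 scalar clamping
    x1, x2, z2, x3, z3, swap = u, 1, 0, u, 1, 0
    for t in reversed(range(255)):
        kt = (k >> t) & 1
        swap ^= kt
        x2, x3 = cswap(swap, x2, x3)
        z2, z3 = cswap(swap, z2, z3)
        swap = kt
        # one ladder step: differential addition + doubling, x-only
        A = (x2 + z2) % P;  AA = A * A % P
        B = (x2 - z2) % P;  BB = B * B % P
        E = (AA - BB) % P
        C = (x3 + z3) % P;  D = (x3 - z3) % P
        DA = D * A % P;     CB = C * B % P
        x3 = (DA + CB) * (DA + CB) % P
        z3 = x1 * (DA - CB) * (DA - CB) % P
        x2 = AA * BB % P
        z2 = E * (AA + A24 * E) % P
    x2, x3 = cswap(swap, x2, x3)
    z2, z3 = cswap(swap, z2, z3)
    return x2 * pow(z2, P - 2, P) % P  # back to affine: x/z
```

A quick self-check is Diffie–Hellman commutativity: `x25519(a, x25519(b, 9)) == x25519(b, x25519(a, 9))` for any two scalars.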

Curves without prime order (namely Edwards and Montgomery curves, such as the lately popular Curve25519 and its corresponding Edwards25519) require extra work to be made safe, working around the cofactor instead. Loup Vaillant has written a good introduction to cofactors and elliptic-curve groups that I wholeheartedly recommend; it explains in detail why cofactors are a nuisance. In particular, they require multiplication by the cofactor in specific places that, essentially, only an expert can identify. If you pick up any given elliptic-curve protocol from a paper, chances are that one of the requirements for implementing it will be a curve of prime order. It is difficult to tell what the effect of a cofactor will be on any specific protocol when no precautions are taken. (Fortunately, Ristretto exists to release us from cofactor hell with Edwards and Montgomery curves, but it's both relatively new and conceptually complex, and not everyone is comfortable with implementing this added complexity.)
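To illustrate what "multiplication by the cofactor in specific places" means in practice, here is a small sketch of my own on Edwards25519 (cofactor 8): adding a small-order point to a public key yields a visibly different key, and multiplying by the cofactor erases the difference again. The scalar 12345 is an arbitrary stand-in for a private key.

```python
# Why cofactors bite: small-order components hide inside Edwards25519 points
# until cofactor multiplication clears them. Demonstration sketch only.
p = 2**255 - 19
d = (-121665 * pow(121666, p - 2, p)) % p  # Edwards25519 curve parameter

def edwards_add(P, Q):
    # complete addition on -x^2 + y^2 = 1 + d*x^2*y^2
    (x1, y1), (x2, y2) = P, Q
    x3 = (x1 * y2 + y1 * x2) * pow(1 + d * x1 * x2 * y1 * y2, p - 2, p)
    y3 = (y1 * y2 + x1 * x2) * pow(1 - d * x1 * x2 * y1 * y2, p - 2, p)
    return (x3 % p, y3 % p)

def scalar_mult(k, P):
    R = (0, 1)  # the identity element
    while k:
        if k & 1:
            R = edwards_add(R, P)
        P = edwards_add(P, P)
        k >>= 1
    return R

# Edwards25519 base point
B = (15112221349535400772501151409588531511454012693041857206046113283949847762202,
     46316835694926478169428394003475163141307993866256225615783033603165251855960)
T = (0, p - 1)  # a point of order 2 (part of the cofactor-8 subgroup)

P1 = scalar_mult(12345, B)   # a "public key"
P2 = edwards_add(P1, T)      # same key plus small-order junk
assert P1 != P2                                   # the junk is visible...
assert scalar_mult(8, P1) == scalar_mult(8, P2)   # ...until cofactor clearing
```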

This criterion of SafeCurves therefore actively makes a trade-off without saying so: it considers it more likely that people will avoid the Brier–Joye ladder in favor of regular scalar multiplication, fail to make that constant-time, and suffer an issue there rather than an issue related to the cofactor. And that is probably the correct trade-off, but dismissing every short Weierstrass curve on that basis seems like overkill to me. It makes the whole exercise effectively pointless because the comparison is no longer between curves but between classes of curves. At that point, you don't need to talk about NIST P-256 specifically anymore; you can just outright post on the front page that you consider all short Weierstrass curves unsafe and list them. The whole ECC security section is an elaborate stunt that boils down to "Montgomery and Edwards good, Weierstrass bad".

SafeCurves then makes the same trade-off in its completeness section. Completeness refers to whether addition formulas exist that work for all inputs without modification. It outright dismisses Weierstrass curves, on grounds of efficiency again, because many of these formulas are "considerably slower and more complicated than standard incomplete scalar-multiplication formulas, creating major conflicts between simplicity, efficiency, and security". While I certainly agree that a complete formula that is 1.4× slower than a constant-time implementation using the fastest incomplete formulas for Weierstrass curves is suboptimal, the option does exist; when the SafeCurves page then goes on to require speed, after previously declaring efficiency out of scope, and on that basis claims that there are no complete formulas for Weierstrass curves, that feels very arbitrary to me. If hypothetical implementers fail to make their scalar-multiplication code constant-time, or choose to forgo that for speed, they will probably also fail to make the field code constant-time, so the issue becomes moot in any case.
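If "completeness" sounds abstract, here is the problem in its simplest form: the textbook affine chord formula on a Weierstrass curve divides by x₂ − x₁ and therefore cannot double a point, so a separate tangent formula (or a genuinely complete formula) is needed, and branching between the two is a classic source of bugs and side channels. A toy sketch of my own, over arbitrary small parameters:

```python
# Incompleteness of the textbook Weierstrass chord formula, on a toy curve
# y^2 = x^3 + 2x + 3 over GF(97). Parameters are arbitrary demo values.
p = 97
a, b = 2, 3

def add_incomplete(P, Q):
    # chord formula: slope (y2-y1)/(x2-x1) is undefined when x1 == x2
    (x1, y1), (x2, y2) = P, Q
    if x1 == x2:
        raise ZeroDivisionError("chord formula undefined; need the tangent formula")
    lam = (y2 - y1) * pow(x2 - x1, p - 2, p) % p
    x3 = (lam * lam - x1 - x2) % p
    return (x3, (lam * (x1 - x3) - y1) % p)

def double(P):
    # the separate tangent formula that incomplete code must branch into
    (x1, y1) = P
    lam = (3 * x1 * x1 + a) * pow(2 * y1, p - 2, p) % p
    x3 = (lam * lam - 2 * x1) % p
    return (x3, (lam * (x1 - x3) - y1) % p)

# find some point on the toy curve by brute force (excluding y = 0)
P = next((x, y) for x in range(p) for y in range(1, p)
         if (y * y - (x**3 + a * x + b)) % p == 0)

Q = double(P)
assert (Q[1] ** 2 - (Q[0] ** 3 + a * Q[0] + b)) % p == 0  # doubling works
try:
    add_incomplete(P, P)   # the general formula fails on this input
    raise AssertionError("chord formula should not handle doubling")
except ZeroDivisionError:
    pass
```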

Speed is, in fact, not always king. In the paper about TweetNaCl, you may note on p. 3 the claim that TweetNaCl is "fast enough for typical applications". TweetNaCl is optimized purely for code size, not for speed. More to the point, TweetNaCl is slow: it gets outperformed even by the relatively naïve implementation C25519 when it comes to signature verification (and stack usage!). A paper by Zandberg et al. notes on p. 11 that TweetNaCl was more than twice as slow as C25519 for signature verification. And we already know that this is fast enough for typical applications. I reiterate that the complete formulas for Weierstrass curves are only 1.4× slower. Dismissing Weierstrass curves on these grounds seems flimsy at best.

SafeCurves requires indistinguishability, which is often irrelevant

Not only does SafeCurves completely dismiss Weierstrass curves (on grounds of efficiency, even if claimed otherwise), it then also goes on to require that curves support mapping to and from bit strings that are indistinguishable from random noise. Effectively, this requires a Montgomery curve (again) or a Hessian curve. The issue here is not the way the criterion is actually applied, but rather its existence in the first place. A bidirectional mapping to and from uniformly random bit strings is a niche use case, period. The only time this is of any relevance is in high-stakes scenarios where the presence of cryptography itself must be concealed, and a convincing argument can be made that random noise itself is already more than enough to draw attention to yourself, because very few people legitimately send each other PRNG seeds or literally useless random noise. A unidirectional mapping from random bit strings to a curve point has some additional uses, but is also supported on Weierstrass curves. Additionally, Elligator Squared has since been described, which gives Weierstrass curves with prime order a bidirectional mapping if one is actually required.
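The unidirectional direction really is easy on Weierstrass curves. Here is a naive "try-and-increment" sketch of my own on NIST P-256: hash the input, treat it as an x-coordinate, and retry until the curve equation has a square root. (Not constant-time, and real protocols should use a standardized hash-to-curve construction; this only shows that hashing *to* a Weierstrass curve poses no problem.)

```python
# Naive try-and-increment mapping from arbitrary bytes to a P-256 point.
# Demonstration sketch only; not constant-time, not for production use.
import hashlib

p = 2**256 - 2**224 + 2**192 + 2**96 - 1  # the P-256 prime, p ≡ 3 (mod 4)
a = p - 3
b = 0x5AC635D8AA3A93E7B3EBBD55769886BC651D06B0CC53B0F63BCE3C3E27D2604B

def map_to_curve(msg: bytes):
    ctr = 0
    while True:
        h = hashlib.sha256(msg + ctr.to_bytes(4, "big")).digest()
        x = int.from_bytes(h, "big") % p
        rhs = (x**3 + a * x + b) % p
        y = pow(rhs, (p + 1) // 4, p)  # square root attempt (valid for p ≡ 3 mod 4)
        if y * y % p == rhs:           # about half the candidates succeed
            return (x, y)
        ctr += 1

P = map_to_curve(b"hello")
assert (P[1] ** 2 - (P[0] ** 3 + a * P[0] + b)) % p == 0  # on the curve
```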

SafeCurves ignores the most common use case: Pre-existing libraries

The biggest source of all the misunderstandings, however, is that a curve labeled unsafe is almost never per se unsafe. The unsafe curves that merely fail the ECC security requirements (read: all curves which are not Edwards or Montgomery curves) can be used safely; it's just a question of how difficult or how slow it is to do so. There is no need to flee from a working, existing, mainstream implementation of e.g. NIST P-256, such as OpenSSL's. The curve isn't inherently flawed; there's only an increased risk that offbeat implementations get something wrong. The Cryptographic Right Answers already rightfully advise people to avoid offbeat libraries, so if people have been following those, everything on SafeCurves is effectively irrelevant to them.

Plus, it's not like the SafeCurves criteria can possibly cover every implementation mistake. Recently, it was discovered that libgcrypt used a non-constant-time method to obtain multiplicative inverses in GF(p), causing a timing leak when converting points from projective to affine coordinates. That could have happened with any curve.
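The usual remedy for that kind of leak is to compute the inverse with an operation pattern that does not depend on the secret value; a common trick uses Fermat's little theorem, x^(p−2) mod p. A sketch of my own contrasting the two approaches (Python's `pow` is not genuinely constant-time either; the point is that the exponent, and hence the operation count, depends only on the public p):

```python
# Variable-time vs. fixed-pattern modular inversion in GF(p), sketched.
p = 2**255 - 19

def inv_vartime(x, p):
    # extended Euclid: the number of iterations depends on x itself,
    # which is exactly the kind of secret-dependent timing that leaks
    r0, r1, s0, s1 = p, x % p, 0, 1
    while r1:
        q = r0 // r1
        r0, r1 = r1, r0 - q * r1
        s0, s1 = s1, s0 - q * s1
    return s0 % p

def inv_fermat(x, p):
    # Fermat: x^(p-2) ≡ x^(-1) (mod p); the exponent is fixed by p
    return pow(x, p - 2, p)

for x in (2, 3, 486662, 2**200 + 123):
    assert inv_vartime(x, p) == inv_fermat(x, p)
    assert x * inv_fermat(x, p) % p == 1
```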

Conclusion

SafeCurves is a misnomer, and its authors should be more careful with their wording. The introduction should also describe the target audience more accurately to prevent misunderstandings. Finally, it is highly desirable that the website be more up front about its favoritism for Montgomery and Edwards curves; a reader should not need to dig through the individual criteria to find out whether their chosen curve is inherently unsafe or merely caught in a trade-off between speed and difficulty of implementation.

But part of the source of these misunderstandings is also a lack of education among people who interact with elliptic-curve cryptography. We need more accessible and more up-to-date resources for learning how to use and implement elliptic-curve cryptography.

That said, my personal recommendation for all new deployments is: Use Curve25519. Use Ristretto. If you can avoid dealing with FIPS requirements for a while, you'll possibly see Curve25519 being approved in the next few years. Then there really won't be any excuses to use NIST P-256 for new code ever again. But there's also no need to abandon working NIST curves with known-good implementations for all existing deployments.