Update: you can find Part II of this series here.



In this blog post I am going to talk about some really cool cryptographic research done by Luca De Feo, Simon Masson, Christophe Petit and myself around a relatively new cryptographic construction called Verifiable Delay Functions (VDF from now on). I know at this point you are thinking that the title of this blog post was yet another clickbait link, but I promise that if you bear with me until the end you are not going to be disappointed. If you have never heard about VDF, fret not: I will try to ELI5 this concept. So fasten your seat belt.

The history of VDF is actually pretty neat: indeed it seems that the concept was growing slowly through the years before finally being formalized. This is somehow evident looking at the links collected in https://vdfresearch.org/.

VDF were formally introduced by (the legendary) Boneh, Bonneau, Bünz and Fisch in a seminal paper less than a year ago (June 2018). The paper contained only some weak form of VDF construction (based on univariate permutation polynomials) and motivated researchers to continue to look for a theoretically optimal VDF:

We still lack a theoretically optimal VDF, consisting of a simple inherently sequential function requiring low parallelism to compute but yet being very fast (e.g. logarithmic) to invert.

Well, it looks like this incentive worked even better than predicted: indeed, within 10 days two papers responded to the challenge. First Benjamin Wesolowski, then Krzysztof Pietrzak published their respective papers (more on this later).

But let's shift down a gear and keep things in order.

Time lock puzzle

The first construction that might resemble a VDF goes back to the '90s, precisely to Rivest, Shamir and Wagner's paper (RSW). This paper is heavily based on the famous RSA construction and introduced the concept of encrypting into the future. If you can't remember how RSA works, here is a quick informal RSA refresher:
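Or, in code form, a minimal Python sketch of textbook RSA (my own toy example: tiny primes, no padding, not remotely secure):

```python
# Textbook RSA with toy numbers -- illustration only, never use as-is.
p, q = 61, 53                   # two secret primes
N = p * q                       # public modulus: 3233
phi = (p - 1) * (q - 1)         # Euler's totient of N, computable only with p, q
e = 17                          # public exponent, coprime with phi
d = pow(e, -1, phi)             # secret exponent: the inverse of e mod phi

m = 42                          # a message encoded as an integer < N
c = pow(m, e, N)                # encrypt: c = m^e mod N
assert pow(c, d, N) == m        # decrypt: m = c^d mod N
```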

RSA Refresher

But what about this RSW paper though? Well, now that we master RSA the concept is pretty simple. In order to encrypt something to the future it would be enough to have the exponent e be very very (very very) big. Or as big as needed (it depends how long into the future we want to encrypt). Now it is clear that, unless the message sender knows the (secret) factorization of N, he needs to go through all the sequential powering steps in order to encrypt the message. On the other hand, if the factorization of N is known, a shortcut exists: the exponent can be reduced to e mod phi(N).
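In toy Python form (parameters mine), the asymmetry looks like this: without the trapdoor the sender is forced through t sequential squarings, while whoever knows phi(N) can collapse the exponent first:

```python
# RSW-style time lock: compute m^(2^t) mod N.
p, q = 61, 53
N, phi = p * q, (p - 1) * (q - 1)
m, t = 42, 1000                 # message and "time" parameter

# Without the factorization: t inherently sequential squarings.
slow = m % N
for _ in range(t):
    slow = slow * slow % N

# With the factorization: reduce the huge exponent 2^t modulo phi(N) first.
fast = pow(m, pow(2, t, phi), N)
assert slow == fast
```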

Time lock puzzle

Verifiable Delay Functions (VDF)

Fast forwarding 20 years, Joseph Bonneau gave a really inspiring talk about Verifiable Lotteries. This concept, inspired by Rabin's 1981 paper about Random Beacons, was conceived to avoid situations like this:

Rigged(???) lottery

Bonneau in his talk introduced the concept of Verifiable Lotteries, where a service regularly publishes random values which no party can predict and everyone can verify. Back then he did not have an effective construction in his hands, but in order for his solution to work he needed the random extraction to be slow and not parallelizable, while the verification should be immediate. He then displayed a really nice math trick from yet another paper, by Dwork and Naor. The trick is as simple as it is beautiful:

Modular square root

Clear, no? Computing a modular square root is pretty simple but sequential: the running time grows (logarithmically) as p grows. On the other hand, the verification is immediate, a single squaring.
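In Python the trick looks like this (my own toy rendering, with p ≡ 3 mod 4 so that the square root has a closed form):

```python
# Computing a modular square root takes ~log2(p) sequential multiplications;
# verifying the result takes exactly one squaring.
p = 1000003                     # a prime with p % 4 == 3
x = pow(424242, 2, p)           # a challenge guaranteed to be a square mod p

y = pow(x, (p + 1) // 4, p)     # the slow(ish), sequential computation
assert y * y % p == x           # the instant verification
```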

All this comes with a caveat: it turns out that the computation phase is actually parallelizable. So back to square one. At this point all was ready for the VDF idea to be finally formalized (and this happened in the cited seminal paper). But what is this VDF? Well, I guess it is time to finally introduce it.

VDF stands for Verifiable Delay Function and is (as the name says) a Function that:

- Takes T steps to evaluate, even with unbounded parallelism
- Produces an output that can be verified efficiently
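In API terms the definition boils down to the Setup/Eval/Verify triple of the seminal paper. Here is a hypothetical Python interface, just to fix the ideas (the names are mine, and no real construction hides inside):

```python
# A hypothetical VDF interface -- shape only, no actual construction.
class VDF:
    def setup(self, security_param: int, T: int):
        """Output public parameters for delay T (possibly a trusted setup)."""
        raise NotImplementedError

    def evaluate(self, params, x: bytes):
        """Return (y, proof). Must cost ~T sequential steps,
        even for an attacker with unbounded parallelism."""
        raise NotImplementedError

    def verify(self, params, x: bytes, y, proof) -> bool:
        """Accept or reject (y, proof). Must be fast, e.g. polylog in T."""
        raise NotImplementedError
```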

And so what? Why all this fuss? Bear with me another bit and (I hope) all will be clear. It turns out that building a VDF minus any of the properties listed in its name is kind of easy (credit to Ben Fisch for this analogy, I have seen it in his VDF presentation):

Isogenies VDF

Right. We are finally at the part I am more interested in: isogenies VDF :). (Briefly) On isogenies first. An isogeny is nothing else than a non-constant algebraic map between elliptic curves, preserving the point at infinity (informally speaking, it is a way to "travel" from one elliptic curve to another). In the "common" elliptic curve setting we are used to multiplying a point by a scalar and landing on yet another point of the same curve. In isogeny based cryptography things are a bit different: again highly informally, you instead start from a curve and end your journey on another curve (after a series of hops). Isogeny based cryptography started to gain popularity in the last years thanks to a celebrated paper by Jao and De Feo where they built a key exchange protocol (that goes under the name of SIDH) based on isogenies. The key fact of SIDH is that it appears to be resistant to quantum computers, and its CCA version (SIKE) is a serious contender in the Post-Quantum Cryptography standardization. But what about VDF? Well, it turns out that we can use isogenies to build a really efficient and elegant VDF. This is what we have shown in our paper Verifiable Delay Functions from Supersingular Isogenies and Pairings. In a nutshell, we force the prover to perform a long walk between curves (the length of the walk is directly proportional to the time parameter T) and we employ pairings to get the fast verification (the pairing operation is not tied to the time parameter T):

Isogenies VDF
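
Under the hood, the fast verification exploits the fact that an isogeny and its dual are adjoint with respect to the pairing. Glossing over the exact groups involved (the paper has the precise statement), the verifier checks an equality of the shape

e(φ(P), Q) = e(P, φ̂(Q))

where φ is the secret isogeny walk, φ̂ its dual, P the point fixed at setup (published together with its image φ(P)), and Q the VDF input whose claimed output is φ̂(Q).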

This brings us to a BLS-style equality. Curiously enough, the use of pairings invalidates the quantum resistance brought by isogenies to the VDF. I will cover isogenies VDF extensively in my next blog post, but let's spend another couple of words about it. What advantage would it bring over the existing VDFs? The first thing, which is not a real advantage but is evident, is that it employs a totally different primitive than the other 2 VDFs. Isogenies are an emergent standard tool in cryptography, becoming more popular every day, that lies on top of elliptic curves and algebraic geometry. This is already a good point because it doesn't require people to embark on a new study journey and makes code/tooling reusable. The other aspect to take into consideration here is the need of a trusted setup. As with Wesolowski's/Pietrzak's RSA groups, the isogenies VDF currently needs a trusted setup, but this is not a game over story. Indeed, if someone is able to find an algorithm to generate random supersingular curves in a way that does not reveal their endomorphism ring (and this is not totally unlikely), the requirement of the trusted setup will disappear. The last thing I will mention for now (again, more details in the next blog post) is that the isogenies VDF is also a natural VRF (this is inherited from being a generalization of the BLS signature). If you want to play with isogenies VDF you can find some Sage code in https://github.com/isogenies-vdf/isogenies-vdf-sage (kudos to Simon Masson).

I will end this section with a table comparing the existing VDFs, which comes directly from the paper:

VDF's applications

So now that we know that VDF constructions exist, what can we do with them? Good question! Well, my hope is that the answer will finally make you believe the title of this blog post was not so cheesy after all. But let's step back one last time and come back to our Verifiable Lotteries and distributed randomness generation. A typical solution to this problem is to have something called reveal and commit:





Reveal and Commit

In this scenario any (honest) participant in the distributed randomness generation will generate a random value r and commit it to a public bulletin board. The final random value is obtained by XORing all the values. It is not so hard to spot a fallacy here. Indeed the last participant, let's call her Zoe, has a clear advantage over the others and can cast a value to her own advantage, rigging the output.
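The fallacy in a few lines of Python (a toy sketch of mine, with nine honest participants plus Zoe):

```python
import secrets

# Naive scheme: the beacon output is the XOR of all revealed values.
board = 0
for r in [secrets.randbits(256) for _ in range(9)]:  # the honest folks
    board ^= r

# Zoe reveals last and wants the beacon to output her lucky number...
target = 0xC0FFEE
zoe = board ^ target            # ...so she solves for her own contribution
assert board ^ zoe == target    # the "random" output is rigged
```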

But at this point VDFs come to the rescue:

Reveal and Commit +VDF





Indeed, it is enough to pipe the outcome of the public bulletin board through a VDF. Assuming the VDF time value is long enough, Zoe will no longer have the time to try to cheat. E.g. if the random beacon outputs one random value every hour, it is enough to set the VDF time T to 1 hour. In this way Zoe doesn't have control over the output before committing her contribution. Well, you know what? I just described part of the Ethereum 2.0 architecture!!
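To make it concrete, here is the previous toy sketch with the fix applied. The delay function below is plain sequential squaring modulo a prime: it models only the slow, sequential half of a VDF (a real construction would also come with the efficient verification), and all parameters are mine:

```python
import secrets

P = 2**255 - 19                 # a large prime modulus for the toy delay

def toy_vdf(x: int, t: int) -> int:
    """t sequential modular squarings: delay only, no efficient proof."""
    y = x % P
    for _ in range(t):
        y = y * y % P
    return y

# Everyone (Zoe included) posts a value; the beacon output is now
# toy_vdf(xor of all values, t), with t tuned to the beacon period.
board = 0
for r in [secrets.randbits(256) for _ in range(10)]:
    board ^= r

# To steer the output, Zoe would need one full t-squarings evaluation per
# candidate contribution -- and a single one outlasts the reveal window.
print(hex(toy_vdf(board, t=100_000)))
```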

Indeed, you can reuse this really simple concept to try to replace one of the biggest plagues associated with blockchains: Proof of Work. It is well known that in order to keep all these blockchains up and running (Bitcoin & co) an incredible amount of electricity is needed:


Can we do any better? WE HAVE TO!