
Pairings or bilinear maps

Source material originally from Alin’s blog. Adapted by @hariria for this example.

tl;dr: Pairings, or bilinear maps, are a very powerful mathematical tool for cryptography. Pairings gave us our most succinct zero-knowledge proofs[^GGPR12e][^PGHR13e][^Grot16], our most efficient threshold signatures[^BLS01], our first usable identity-based encryption (IBE) scheme[^BF01], and many other efficient cryptosystems[^KZG10]. In this post, I’ll teach you a little about the properties of pairings, their cryptographic applications and their fascinating history. In fact, by the end of this post, some of you might want to spend a year or two in jail.

⚠️

Twitter correction: The original tweet announcing this blog post stated that “SNARKs would not be possible without [pairings]”, with the highlighted “S” meant to emphasize the “succinctness” of such SNARKs. However, thanks to several folks on Twitter, I realized this is not exactly true and depends on what one means by “succinct.” Specifically, “succinct” SNARKs, in the polylogarithmic proof size sense defined by Gentry and Wichs[^GW10], exist from a plethora of assumptions, including discrete log[^BCCplus16] or random oracles[^Mica98]. Furthermore, “succinct” SNARKs, in the sense of $O(1)$ group elements proof size, exist from RSA assumptions too[^LM18]. What pairings do give us, currently, are SNARKs with the smallest concrete proof sizes (i.e., in # of bytes).

Preliminaries

  • You are familiar with cyclic groups of prime order (e.g., elliptic curves)
  • Let $\idt$ denote the identity element of the group $\Gr_T$
  • Let $x \randget S$ denote randomly sampling an element $x$ from a set $S$
  • Recall that $\langle g \rangle = \Gr$ denotes $g$ being a generator of the group $\Gr$

Definition of a pairing

A pairing, also known as a bilinear map, is a function $e : \Gr_1 \times \Gr_2 \rightarrow \Gr_T$ between three groups $\Gr_1, \Gr_2$ and $\Gr_T$ of prime order $p$, with generators $\langle g_1 \rangle = \Gr_1$, $\langle g_2 \rangle = \Gr_2$ and $\langle g_T \rangle = \Gr_T$, respectively.

When $\Gr_1 = \Gr_2$, the pairing is called symmetric. Otherwise, it is asymmetric.

Most importantly, a pairing has two useful properties for cryptography: bilinearity and non-degeneracy.

Bilinearity

Bilinearity requires that, for all $u\in\Gr_1$, $v\in\Gr_2$, and $a,b\in\Zp$:

e(u^a, v^b) = e(u, v)^{ab}
⚠️

For cryptographic purposes, this is the coolest property. For example, this is what enables useful applications like tripartite Diffie-Hellman.

Non-degeneracy

Non-degeneracy requires that:

e(g_1, g_2) \ne \idt

Why this property? We want non-degeneracy because, without it, it is simple (and useless) to define a (degenerate) bilinear map that, for every input, returns $\idt$. Such a map would satisfy bilinearity, but would be completely useless.

Efficiency

Efficiency requires that there exists a polynomial-time algorithm in the size of a group element (i.e., in $\lambda = \log_2{p}$) that can be used to evaluate the pairing $e$ on any inputs.

Why this requirement?

It precludes trivial-but-computationally-intractable pairings.

For example, let $r$ be a random element in $\Gr_T$. First, the pairing is defined so that $e(g_1, g_2) = r$. This way, the pairing satisfies non-degeneracy.


Second, given $(u,v)\in \Gr_1 \times \Gr_2$, an algorithm could spend exponential time $O(2^\lambda)$ to compute the discrete logs $x = \log_{g_1}(u)$ and $y = \log_{g_2}(v)$ and return $e(u, v) = e(g_1^x, g_2^y) = r^{xy}$. This way, the pairing satisfies bilinearity because:

\begin{aligned} e(u^a, v^b) &= e\left((g_1^x)^a, (g_2^y)^b\right)\\ &= e\left(g_1^{(ax)}, g_2^{(by)}\right)\\ &= r^{(ax)\cdot (by)}\\ &= \left(r^{xy}\right)^{ab}\\ &= e(u, v)^{ab} \end{aligned}
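
To make the three properties concrete, here is a minimal and deliberately insecure toy model in Python (my own illustrative assumption, not part of the original material): every group element is represented directly by its discrete log modulo a prime $p$, so the “pairing” is just multiplication of exponents. Precisely because the exponents are stored in the clear, this toy sidesteps the efficiency discussion above and offers zero security; it only demonstrates the algebra.

```python
# INSECURE toy model of a symmetric pairing, for intuition only.
# A group element g^x is represented by its exponent x mod p, so the
# "pairing" e(g1^x, g2^y) = gT^(x*y) is just multiplication of exponents.
# A real pairing (e.g., over BLS12-381) hides these exponents.
import random

p = 2**61 - 1          # a prime, standing in for the group order

g1, g2 = 1, 1          # "generators" have discrete log 1
identity_T = 0         # the identity of GT is gT^0

def gexp(elem, scalar):        # group exponentiation: (g^x)^a = g^(x*a)
    return (elem * scalar) % p

def e(u, v):                   # the toy pairing
    return (u * v) % p

# Bilinearity: e(u^a, v^b) == e(u, v)^(ab)
u, v = gexp(g1, random.randrange(p)), gexp(g2, random.randrange(p))
a, b = random.randrange(p), random.randrange(p)
assert e(gexp(u, a), gexp(v, b)) == gexp(e(u, v), a * b % p)

# Non-degeneracy: e(g1, g2) is not the identity of GT
assert e(g1, g2) != identity_T

# Efficiency: e() is a single multiplication mod p, so it is trivially fast.
```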

History

⚠️

This is my limited historical understanding of pairings, mostly informed from Dan Boneh’s account in this video and from my own research into the relevant literature. Please email me if you are aware of more history and I can try to incorporate it.

A mathematician in prison

The history of (cryptographic) pairings begins with a mathematician named André Weil[^Wiki22Weil] who, during World War II, is sent to jail for refusing to serve in the French army. There, Weil, “managed to convince the liberal-minded prison director to put [him] in an individual cell where [he] was allowed to keep [..] a pen, ink, and paper.”

Weil used his newly-acquired tools to define a pairing across two elliptic curve groups. However, what was very odd at that time was that Weil put in a lot of effort to make sure his definition of a pairing was computable. And this extra effort is what made today’s pairing-based cryptography possible1.

Go to prison, not to university?

Funnily, Weil’s time in jail was so productive that he began wondering if he should spend a few months there every year. Even better, Weil contemplated if he should recommend to the relevant authorities that every mathematician spend some amount of time in jail. Weil writes:

I’m beginning to think that nothing is more conducive to the abstract sciences than prison.

[…]

My mathematics work is proceeding beyond my wildest hopes, and I am even a bit worried - if it’s only in prison that I work so well, will I have to arrange to spend two or three months locked up every year?

In the meantime, I am contemplating writing a report to the proper authorities, as follows: “To the Director of Scientific Research: Having recently been in a position to discover through personal experience the considerable advantages afforded to pure and disinterested research by a stay in the establishments of the Penitentiary System, I take the liberty of, etc. etc.”

You can read all of this and more in his fascinating autobiography, written from his perspective as a mathematician[^Weil92].

From breaking cryptography to building cryptography

Weil’s work was the foundation. But three more developments were needed for pairing-based cryptography to rise.

First development: Miller’s algorithm

In 1985, Victor Miller writes up a manuscript showing that Weil’s pairing, which actually involves evaluating a polynomial of exponential degree, can in fact be computed efficiently in polynomial time[^Mill86Short].

In December 1984, Miller gave a talk at IBM about elliptic curve cryptography where he claimed that elliptic curve discrete logs were more difficult to compute than ordinary discrete logs over finite fields2. Miller was challenged by Manuel Blum, who was in the audience, to back up this claim by giving a reduction: i.e., showing that an algorithm $B$ for solving discrete log on elliptic curves can be efficiently turned into another algorithm $A$ for solving discrete logs in finite fields. Such a reduction would imply the problem solved by $B$ (i.e., computing elliptic curve discrete logs) is at least as hard as, if not harder than, $A$’s problem (i.e., computing finite field discrete logs).

Miller set out to find a reduction by thinking about the only thing that related an elliptic curve group and a finite field: the Weil pairing. Funnily, what he quickly realized was that, although the Weil pairing gives a reduction, it’s in the opposite direction: i.e., it turned out an algorithm $A$ for discrete log in finite fields could be efficiently turned into an algorithm $B$ for discrete logs in elliptic curves with the help of the Weil pairing. This “unwanted” reduction is easy to see. Since $e(g^a, g) = e(g,g)^a$, solving discrete log on the elliptic curve element $g^a\in \Gr$ is just a matter of solving it on $e(g,g)^a\in \Gr_T$, which is actually a multiplicative subgroup of a finite field (see the inner details of pairings).

This almost showed the opposite of what Miller sought to prove, potentially destroying elliptic curve cryptography, but fortunately the degree of the extension field that the Weil pairing mapped into was too large, making this “unwanted” reduction inefficient and thus not a reduction after all.

This whole affair got Miller interested in seeing if the Weil pairing could be computed efficiently, which led to the discovery of his algorithm. Interestingly, he submitted this manuscript to FOCS, a top theoretical computer science conference, but the paper got rejected and would not be published until much later in the Journal of Cryptology (according to Miller)3.

Second development: MOV attack

In 1991, Menezes, Vanstone and Okamoto[^MVO91] leverage Miller’s efficient algorithm for evaluating the Weil pairing to break the discrete logarithm assumption on certain elliptic curves in sub-exponential time. This was quite amazing since, at that time, no sub-exponential time algorithms were known for elliptic curves.

Their attack, called the MOV attack, mapped an elliptic curve discrete logarithm challenge $g^a\in \Gr$ into a target group as $e(g^a, g)=e(g,g)^a \in \Gr_T$ using the pairing. Since the target group was a subgroup of a finite field $\F_{q^k}$, this allowed the use of faster, sub-exponential time algorithms for computing the discrete log on $e(g,g)^a$.

Third development: Joux’s tripartite Diffie-Hellman

So far, pairings only seemed useful for cryptanalysis. No one knew how to use them for building (instead of breaking) cryptography.

This changed in 2000, when Joux[^Joux00] used pairings to implement a 1-round key-exchange protocol between three parties, or tripartite Diffie-Hellman. Previously, such 1-round protocols were only known between two parties while three parties required 2 rounds.

From there, an abundance of new, efficient cryptography started pouring in:

  • BLS (short) signatures[^BLS01]
  • identity-based encryption[^BF01]
  • additively-homomorphic encryption with support for one multiplication[^BGN05]
  • succinct zero-knowledge proofs[^GGPR12e]

An interesting pattern to notice here is how pairings evolved from a cryptanalytic tool used to break cryptosystems, to a constructive tool used to build cryptosystems. Interestingly, the same pattern also arose in the development of lattice-based cryptography.

Arithmetic tricks with pairings

There are a few tricks cryptographers often use when dealing with pairings in their proofs of correctness or security of a cryptosystem.

The most obvious trick, “multiplying in the exponent”, comes from the bilinearity property.

\begin{aligned} e(u^a, v^b) = e(u, v)^{ab} \end{aligned}

Bilinearity also implies the following trick:

\begin{aligned} e(u, v^b) = e(u, v)^b \end{aligned}

Or, alternatively:

\begin{aligned} e(u^a, v) = e(u, v)^a \end{aligned}

Another trick, which is just an analogous way of defining bilinearity, is:

\begin{aligned} e(u, v\cdot w) = e(u, v)\cdot e(u, w) \end{aligned}

Why does this work? Let $y,z$ denote the discrete logs (w.r.t. $g_2$) of $v$ and $w$, respectively. Then, we have:

\begin{aligned} e(u, v\cdot w) &= e(u, g_2^y \cdot g_2^z)\\ &= e(u, g_2^{y + z})\\ &= e(u, g_2)^{y + z}\\ &= e(u, g_2)^y \cdot e(u, g_2)^z\\ &= e(u, g_2^y) \cdot e(u, g_2^z)\\ &= e(u, v)\cdot e(u, w) \end{aligned}

Or, alternatively:

\begin{aligned} e(u, v / w) = \frac{e(u, v)}{e(u, w)} \end{aligned}
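
If you want to convince yourself of these identities quickly, the insecure toy model from the definitions section (elements represented by their discrete logs, an assumption of mine for illustration only) checks them numerically:

```python
# Re-checking the arithmetic tricks in the INSECURE toy model:
# elements are discrete logs mod p, group multiplication adds logs,
# group exponentiation scales logs, and the pairing multiplies logs.
import random

p = 2**61 - 1

def gmul(x, y): return (x + y) % p     # g^x * g^y    = g^(x+y)
def gexp(x, a): return (x * a) % p     # (g^x)^a      = g^(x*a)
def ginv(x):    return (-x) % p        # (g^x)^(-1)   = g^(-x)
def e(u, v):    return (u * v) % p     # e(g1^x, g2^y) = gT^(x*y)

u = random.randrange(p)                              # u in G1
v, w = random.randrange(p), random.randrange(p)      # v, w in G2
a, b = random.randrange(p), random.randrange(p)

assert e(gexp(u, a), gexp(v, b)) == gexp(e(u, v), a * b % p)   # multiplying in the exponent
assert e(u, gexp(v, b)) == gexp(e(u, v), b)                    # e(u, v^b) = e(u, v)^b
assert e(u, gmul(v, w)) == gmul(e(u, v), e(u, w))              # e(u, v*w) = e(u, v)*e(u, w)
assert e(u, gmul(v, ginv(w))) == gmul(e(u, v), ginv(e(u, w)))  # e(u, v/w) = e(u, v)/e(u, w)
```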

Applications of pairings

Tripartite Diffie-Hellman

We have three parties Alice, Bob and Charles with secret keys $a, b$, and $c$ (respectively). They send each other their public keys $g^a, g^b, g^c$ and agree on a shared secret key $k = e(g, g)^{abc}$.4

How?

Consider Alice’s point of view. She gets $g^b$ and $g^c$ from Bob and Charles. First, she can use her secret $a$ to compute $g^{ab} = (g^b)^a$. Second, she can use the pairing to compute $e(g^{ab}, g^c) = e(g, g)^{abc} = k$.

By symmetry, all other players can do the same and agree on the same $k$.

ℹ️

The protocol can be generalized to asymmetric pairings too, where $\Gr_1 \neq \Gr_2$.
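
Here is the same three-party exchange written out in the insecure toy model from earlier (again, my own illustrative assumption and not a real implementation; in practice each $g^x$ would be a point on a pairing-friendly curve):

```python
# One-round tripartite Diffie-Hellman (Joux) in the INSECURE toy model,
# where group elements are represented by their discrete logs mod p.
import random

p = 2**61 - 1
g = 1                                   # generator (discrete log 1)

def gexp(x, s): return (x * s) % p      # (g^x)^s = g^(x*s)
def e(u, v):    return (u * v) % p      # e(g^x, g^y) = gT^(x*y)

# Each party samples a secret key and broadcasts its public key (1 round).
a, b, c = (random.randrange(1, p) for _ in range(3))
A, B, C = gexp(g, a), gexp(g, b), gexp(g, c)

# Alice: compute g^(ab) from Bob's public key, then pair it with Charles's.
k_alice   = e(gexp(B, a), C)            # e(g^(ab), g^c) = e(g, g)^(abc)
# Bob and Charles proceed symmetrically.
k_bob     = e(gexp(C, b), A)
k_charles = e(gexp(A, c), B)

assert k_alice == k_bob == k_charles    # shared key k = e(g, g)^(abc)
```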

BLS signatures

Boneh, Lynn and Shacham give a very short signature scheme from pairings[^BLS01], which works as follows:

  • Assume $\Gr_2 = \langle g_2 \rangle$ and that there exists a hash function $H : \{0,1\}^* \rightarrow \Gr_1$ modeled as a random oracle.
  • The secret key is $s \in \Zp$ while the public key is $\pk = g_2^s \in \Gr_2$.
  • To sign a message $m$, the signer computes $\sigma = H(m)^s \in \Gr_1$.
  • To verify a signature $\sigma$ on $m$ under $\pk$, one checks that $e(\sigma, g_2) = e(H(m), \pk)$.

Notice that correctly-computed signatures will always verify since:

\begin{aligned} e(\sigma, g_2) &\stackrel{?}{=} e(H(m), \pk) \Leftrightarrow\\ e(H(m)^s, g_2) &\stackrel{?}{=} e(H(m), g_2^s) \Leftrightarrow\\ e(H(m), g_2)^s &\stackrel{?}{=} e(H(m), g_2)^s \Leftrightarrow\\ e(H(m), g_2) &= e(H(m), g_2) \end{aligned}

See the BLS paper[^BLS01] for how to prove that no attacker can forge BLS signatures given access to pk\pk and a signing oracle.
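
The following sketch runs BLS sign/verify in the insecure toy model used in this post’s other examples (my own illustrative assumption). In particular, the toy “hash into $\Gr_1$” simply hashes to an exponent, which only makes sense when elements are their discrete logs; real BLS uses a proper hash-to-curve.

```python
# BLS sign/verify in the INSECURE toy model (elements = discrete logs mod p).
import hashlib
import random

p = 2**61 - 1
g2 = 1                                         # generator of G2 (discrete log 1)

def gexp(x, s): return (x * s) % p             # (g^x)^s
def e(u, v):    return (u * v) % p             # e(g1^x, g2^y) = gT^(x*y)

def H(m: bytes) -> int:                        # toy "hash into G1"
    return int.from_bytes(hashlib.sha256(m).digest(), "big") % p

def keygen():
    s = random.randrange(1, p)                 # secret key s
    return s, gexp(g2, s)                      # public key pk = g2^s in G2

def sign(s, m):
    return gexp(H(m), s)                       # sigma = H(m)^s in G1

def verify(pk, m, sigma):
    return e(sigma, g2) == e(H(m), pk)         # e(sigma, g2) =? e(H(m), pk)

s, pk = keygen()
sigma = sign(s, b"hello")
assert verify(pk, b"hello", sigma)
assert not verify(pk, b"goodbye", sigma)
```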

Cool properties of BLS signatures

BLS signatures are quite amazing:

  1. They are one of the simplest schemes to implement, given access to an elliptic-curve library.
  2. One can aggregate many signatures from different public keys on the same message $m$ into a single multi-signature that continues to verify using just 2 pairings.
  3. One can even aggregate such signatures across different messages into a single aggregate signature.
    • However, such aggregate signatures take $n+1$ pairings to verify.
  4. One can easily and efficiently[^TCZplus20] build a threshold BLS signature scheme, where any subset of $\ge t$ out of $n$ signers can collaborate to sign a message $m$, but no fewer than $t$ can ever produce a valid signature.
    • Even better, BLS threshold signatures are deterministic and give rise to threshold verifiable random functions (VRFs), which are useful for generating randomness on chain.
  5. One can define very efficient blind variants of BLS signatures, where the signer can sign a message $m$ without learning the message $m$.
  6. BLS signatures are very efficient in practice.
    • As far as I remember, they are the most efficient scheme for (1) multi-signatures, (2) aggregate signatures and (3) threshold signatures
    • For single-signer BLS, they are slower than Schnorr signatures over non-pairing-friendly curves
⚠️

If you find yourself confused between the various notions of multi-signatures, aggregate signatures and threshold signatures, see my slides.

Identity-based encryption (IBE)

In an IBE scheme, one can encrypt directly to a user-friendly email address (or a phone number), instead of a cumbersome public key which is difficult to remember or type in correctly.

Boneh and Franklin give a very efficient IBE scheme from pairings[^BF01].

For IBE to work, a trusted third party (TTP) called a private key generator (PKG) must be introduced, who will issue secret keys to users based on their email addresses. This PKG has a master secret key (MSK) $\msk = s \in \Zp$ with an associated master public key (MPK) $\mpk = g_2^s$, where $\langle g_2 \rangle = \Gr_2$.

The $\mpk$ is made public and can be used to encrypt a message to any user given their email address. Crucially, the PKG must keep the $\msk$ secret. Otherwise, an attacker who steals it can derive any user’s secret key and decrypt everyone’s messages.

⚠️

As you can tell, the PKG is a central point of failure: theft of the $\msk$ compromises everyone’s secrecy. To mitigate this, the PKG can be decentralized into multiple authorities such that a threshold number of authorities must be compromised in order to steal the $\msk$.

Let $H_1 : \{0,1\}^* \rightarrow \Gr_1^*$ and $H_T : \Gr_T \rightarrow \{0,1\}^n$ be two hash functions modeled as random oracles. To encrypt an $n$-bit message $m$ to a user with email address $id$, one computes:

\begin{aligned} g_{id} &= e(H_1(id), \mpk) \in \Gr_T\\ r &\randget \Zp\\ c &= \left(g_2^r, m \xor H_T\left(\left(g_{id}\right)^r\right)\right) \in \Gr_2\times \left\{0,1\right\}^n \end{aligned}

To decrypt, the user with email address $id$ must first obtain their decryption secret key $\dsk_{id}$ from the PKG. For this, we assume the PKG has a way of authenticating the user before handing them their secret key. For example, this could be done via email.

The PKG computes the user’s decryption secret key as:

\begin{aligned} \dsk_{id} = H_1(id)^s \in \Gr_1 \end{aligned}

Then, to decrypt a ciphertext $c = (u, v)$, the user computes:

\begin{aligned} m &= v \xor H_T(e(\dsk_{id}, u)) \end{aligned}

You can see why correctly-encrypted ciphertexts will decrypt successfully, since:

\begin{aligned} v \xor H_T(e(\dsk_{id}, u)) &= \left(m \xor H_T\left((g_{id})^r \right)\right) \xor H_T\left(e(H_1(id)^s, g_2^r) \right)\\ &= \left(m \xor H_T\left(e(H_1(id), \mpk )^r \right)\right) \xor H_T\left(e(H_1(id), g_2 )^{rs}\right)\\ &= m \xor \left(H_T\left(e(H_1(id), g_2^s)^r \right) \xor H_T\left(e(H_1(id), g_2 )^{rs}\right)\right)\\ &= m \xor \left(H_T\left(e(H_1(id), g_2 )^{rs}\right) \xor H_T\left(e(H_1(id), g_2 )^{rs}\right)\right)\\ &= m \end{aligned}

To see why this scheme is secure under chosen-plaintext attacks, refer to the original paper[^BF01].
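
To tie the pieces together, here is the whole Boneh-Franklin flow (setup, key derivation, encryption, decryption) in the same insecure toy model, with a hash-based mask standing in for $H_T$. The email address and message are made up for illustration; none of this is a real implementation.

```python
# Boneh-Franklin IBE in the INSECURE toy model (elements = discrete logs mod p).
import hashlib
import random

p = 2**61 - 1
g2 = 1

def gexp(x, s): return (x * s) % p                 # (g^x)^s
def e(u, v):    return (u * v) % p                 # e(G1 elem, G2 elem) -> GT elem

def H1(ident: str) -> int:                         # toy "hash into G1"
    return int.from_bytes(hashlib.sha256(ident.encode()).digest(), "big") % p

def HT(t: int, n: int) -> bytes:                   # hash a GT element to an n-byte mask
    return hashlib.sha256(t.to_bytes(32, "big")).digest()[:n]

def xor(x: bytes, y: bytes) -> bytes:
    return bytes(a ^ b for a, b in zip(x, y))

# PKG setup: master secret key and master public key.
msk = random.randrange(1, p)
mpk = gexp(g2, msk)

def encrypt(ident: str, m: bytes):
    g_id = e(H1(ident), mpk)                       # e(H1(id), mpk) in GT
    r = random.randrange(1, p)
    return gexp(g2, r), xor(m, HT(gexp(g_id, r), len(m)))   # c = (u, v)

def keygen(ident: str):                            # run by the PKG
    return gexp(H1(ident), msk)                    # dsk_id = H1(id)^msk

def decrypt(dsk, u, v):
    return xor(v, HT(e(dsk, u), len(v)))

u, v = encrypt("alice@example.com", b"attack at dawn")
dsk = keygen("alice@example.com")
assert decrypt(dsk, u, v) == b"attack at dawn"
```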

How do pairings actually work?

Mostly, I have no idea. How come? Well, I never really needed to know. And that’s the beauty of pairings: one can use them in a black-box fashion, with zero awareness of their internals.

Still, let’s take a small peek inside this black box. Let us consider the popular pairing-friendly BLS12-381 curve[^Edgi22], from the family of BLS curves characterized by Barreto, Lynn and Scott[^BLS02e].

⚠️

Public service announcement: Some of you might’ve heard about Boneh-Lynn-Shacham (BLS) signatures. Please know that this is a different BLS than the BLS in Barreto-Lynn-Scott curves. Confusingly, both acronyms do share one author, Ben Lynn. (In case this was not confusing enough, wait until you have to work with BLS signatures over BLS12-381 curves.)

For BLS12-381, the three groups $\Gr_1, \Gr_2, \Gr_T$ involved are:

  • The group $\Gr_1$ is a subgroup of an elliptic curve $E(\F_q) = \left\{(x, y) \in (\F_q)^2 \mid y^2 = x^3 + 4 \right\}$ where $\vert\Gr_1\vert = p$
  • The group $\Gr_2$ is a subgroup of a different elliptic curve $E'(\F_{q^2}) = \left\{(x, y) \in (\F_{q^2})^2 \mid y^2 = x^3 + 4(1+i) \right\}$, where $i$ is the square root of $-1$ and $\vert\Gr_2\vert = p$.
  • The group $\Gr_T$ consists of all the $p$th roots of unity in $\F_{q^k}$, where $k=12$ is called the embedding degree

How does the pairing map across these three groups work? Well, the pairing $e(\cdot,\cdot)$ expands to something like:

\begin{aligned} e(u, v) = f_{p, u}(v)^{(q^k - 1)/p} \end{aligned}

It’s useful to know that computing a pairing consists of two steps:

  1. Evaluating the base $f_{p, u}(v)$, also known as a Miller loop, in honor of Victor Miller’s work
  2. Raising this base to the constant exponent $(q^k - 1)/p$, also known as a final exponentiation.
    • This step is a few times more expensive than the first

For more on the internals, see other resources[^Cost12][^GPS08][^Mene05].

Implementing pairing-based crypto

This section discusses various implementation-level details that practitioners can leverage to speed up their implementations.

Use asymmetric pairings!

The pairing over BLS12-381 is asymmetric: i.e., $\Gr_1 \ne \Gr_2$ are two different groups (of the same order $p$). However, there also exist symmetric pairings where $\Gr_1 = \Gr_2$ are the same group.

Unfortunately, “such symmetric pairings only exist on supersingular curves, which places a heavy restriction on either or both of the underlying efficiency and security of the protocol”[^BCMplus15e]. In other words, such supersingular curves are not as efficient at the same security level as the curves used in asymmetric pairings.

Therefore, practitioners today, as far as I am aware, exclusively rely on asymmetric pairings due to their higher efficiency when the security level is kept the same.

BLS12-381 performance

I will give a few key performance numbers for the BLS12-381 curve implemented in Filecoin’s blstrs Rust wrapper around the popular blst library. These microbenchmarks were run on a 10-core 2021 Apple M1 Max using cargo bench.

Pairing computation times

Terminal
alinush@MacBook [~/repos/blstrs] (master %) $ cargo +nightly bench -- pairing_
running 4 tests
test bls12_381::bench_pairing_final_exponentiation     ... bench:     276,809 ns/iter (+/- 1,911)
test bls12_381::bench_pairing_full                     ... bench:     484,718 ns/iter (+/- 2,510)
test bls12_381::bench_pairing_g2_preparation           ... bench:      62,395 ns/iter (+/- 4,161)
test bls12_381::bench_pairing_miller_loop              ... bench:     148,534 ns/iter (+/- 1,203)
  • a Miller loop computation (including the $\Gr_2$ preparation)
    • 210 microseconds
  • a final exponentiation
    • 276 microseconds

Therefore, a pairing takes around 486 microseconds (i.e., the sum of the two).

Exponentiation times

⚠️

The $\Gr_T$ microbenchmarks were done by slightly modifying the blstrs benchmarking code here. (See the HTML comments of this page for those modifications.)

  • $\Gr_1$ exponentiations are the fastest
    • 72 microseconds
  • $\Gr_2$ exponentiations are around $2\times$ slower
    • 136 microseconds
  • $\Gr_T$ exponentiations are around $3.5\times$ slower than in $\Gr_2$
    • 500 microseconds

Note: These benchmarks pick the exponentiated base randomly and do not perform any precomputation on it, which would speed up these times by $2\times$ to $4\times$.

Multi-exponentiations

This is a well-known optimization that I’m including for completeness.

Specifically, many libraries allow you to compute a product $\prod_{0 \le i < k} \left(g_i\right)^{x_i}$ of $k$ exponentiations much faster than individually computing the $k$ exponentiations and aggregating their product.

For example, blstrs seems to be incredibly fast in this regard:

Terminal
running 4 tests
test bls12_381::bench_g1_multi_exp                     ... bench:     760,554 ns/iter (+/- 47,355)
test bls12_381::bench_g1_multi_exp_naive               ... bench:  18,575,716 ns/iter (+/- 42,688)
test bls12_381::bench_g2_multi_exp                     ... bench:   1,876,416 ns/iter (+/- 58,743)
test bls12_381::bench_g2_multi_exp_naive               ... bench:  35,272,720 ns/iter (+/- 266,279)
  • a size-256 multi-exponentiation in $\Gr_1$
    • takes 760 microseconds in total, or 3 microseconds per exponentiation!
    • done naively, it would take 18.5 milliseconds in total, which is $24\times$ longer
  • a size-256 multi-exponentiation in $\Gr_2$
    • takes 1.88 milliseconds in total, or 7.33 microseconds per exponentiation!
    • done naively, it would take 35.3 milliseconds, which is $18.8\times$ longer

Group element sizes

  • $\Gr_1$ group elements are the smallest
    • e.g., 48 bytes for BLS12-381 or 32 bytes for BN254 curves[^BN06Pair]
  • $\Gr_2$ group elements are $2\times$ larger
    • e.g., 96 bytes on BLS12-381
  • $\Gr_T$ elements are $12\times$ larger
    • In general, for a pairing-friendly curve with embedding degree $k$, they are $k$ times larger

Other operations

  • $\Gr_1$ multiplications (recall we are using multiplicative notation for groups, not additive notation)
    • Normal: 565 nanoseconds (when both points are in projective $(X, Y, Z)$ coordinates)
    • Mixed: 438 nanoseconds (when the first point is in projective coordinates and the second is in affine $(X, Y)$ coordinates)
      • Faster, because the affine point’s implicit $Z = 1$ coordinate saves a few field operations
  • $\Gr_2$ multiplications
    • Normal: 1,484 nanoseconds
    • Mixed: 1,095 nanoseconds
  • Hashing to $\Gr_1$ takes around 50 microseconds (not accounting for the extra time required to hash down larger messages using SHA2-256)

Switching between $\Gr_1$ and $\Gr_2$

When designing a pairing-based cryptographic protocol, you will want to carefully pick what to use $\Gr_1$ for and what to use $\Gr_2$ for.

For example, in BLS signatures, if you want small signatures, then you would compute the signature $\sigma = H(m)^s \in \Gr_1$ and settle for a slightly larger public key in $\Gr_2$. On the other hand, if you wanted to minimize public key size, then you would let it be in $\Gr_1$ while taking slightly longer to compute the signature in $\Gr_2$.

⚠️

Other things will also influence how you use $\Gr_1$ and $\Gr_2$, such as the existence of an isomorphism $\phi : \Gr_2 \rightarrow \Gr_1$ or the ability to hash uniformly into these groups. In fact, the existence of such an isomorphism distinguishes between two types of asymmetric pairings: Type 2 and Type 3 (see Galbraith et al.[^GPS08] for more information on the different types of pairings).

Comparison to non-pairing-friendly elliptic curves

When compared to an elliptic curve that does not admit pairings, pairing-friendly elliptic curves are around two times slower.

For example, the popular prime-order elliptic curve group Ristretto255 offers:

Terminal
ristretto255/basepoint_mul
                        time:   [10.259 µs 10.263 µs 10.267 µs]
 
ristretto255/point_mul  time:   [40.163 µs 40.187 µs 40.212 µs]
  • $\approx 2\times$ faster exponentiations of 40 microseconds
    • which can be sped up to 10 microseconds, using precomputation when the exponentiation base is fixed
  • 32 byte group element sizes

Multi-pairings

Whenever we have to compute the product of $n$ pairings, we can first compute the $n$ Miller loops and do a single final exponentiation instead of $n$. This drastically reduces the pairing computation time in many applications.

\begin{aligned} \prod_i e(u_i, v_i) &= \prod_i \left(f_{p, u_i}(v_i)^{(q^k - 1)/p}\right)\\ &= \left(\prod_i f_{p, u_i}(v_i)\right)^{(q^k - 1)/p} \end{aligned}

Conclusion

This blog post was supposed to be just a short summary of the three properties of pairings: bilinearity, non-degeneracy and efficiency.

Unfortunately, I felt compelled to discuss their fascinating history. And I couldn’t let you walk away without seeing a few powerful cryptographic applications of pairings.

After that, I realized practitioners who implement pairing-based cryptosystems might benefit from knowing a little about their internal workings, since some of these details can be leveraged to speed up implementations.

Acknowledgements

I would like to thank Dan Boneh for helping me clarify and contextualize the history around Weil, as well as for his 2015 Simons talk, which inspired me to do a little more research and write this historical account.

Big thanks to:

  • Lúcás Meier, Pratyush Mishra, Ariel Gabizon and Dario Fiore for their enlightening points on what “succinct” (S) stands for in SNARKs[^GW10] and for reminding me that SNARKs with $O(1)$ group elements proof size exist from RSA assumptions[^LM18].
  • Sergey Vasilyev for pointing out typos in the BLS12-381 elliptic curve definitions.
  • @BlakeMScurr for pointing out an incorrect reference to Joux’s work[^Joux00].
  • Conrado Gouvêa for pointing me to Victor Miller’s account of how he developed his algorithm for evaluating the Weil pairing (discussed here).
  • Chris Peikert for pointing out that there are plenty-fast IBE schemes out there that do not rely on pairings[^DLP14e].

PS: Twitter threads are a pain to search through, so if I missed acknowledging your contribution, please kindly let me know.


Footnotes

  1. Thanks to Dan Boneh, who contrasted Weil’s definition with a different one by Shimura from his classic book on modular forms. While Shimura’s definition makes it much easier to prove all the properties of the pairing, it defines a pairing of order $n$ as a sum of $n$ points of order $n^2$. This makes it hopelessly non-computable. Weil’s definition, on the other hand, involves an evaluation of a very concrete function, with no exponential-sized sums, but requires much more work to prove all its pairing properties.

  2. Miller tells this story himself in a talk he gave at Microsoft Research on October 10th, 2010.

  3. I am unable to find any trace of Miller’s published work on this beyond the manuscript Boneh published in[^Mill86Short]. Any pointers would be appreciated.

  4. Typically, there will be some key-derivation function $\mathsf{KDF}$ used to derive the key as $k = \mathsf{KDF}(e(g,g)^{abc})$.