GLV Decomposition
Splitting one scalar multiplication into two half-length multiplications, roughly halving the number of point doubling operations.
Note 1. The elliptic curve in use should have an efficiently computable endomorphism.
Note 2. The elliptic curve in use must have an equation of the form $y^2 = x^3 + b$, i.e., $a = 0$ in the short Weierstrass equation $y^2 = x^3 + ax + b$.
Algorithm Process
We attempt to calculate the elliptic curve point-scalar multiplication $kP$, where $k$ is the scalar and $P = (x, y)$ is a point on the curve.
1. Find a scalar $\lambda$ that satisfies $\lambda P = (\beta x, y)$.
Here $\beta$ is a nontrivial cube root of unity in the base field; in other words, for a base field $\mathbb{F}_p$ we have $\beta^3 \equiv 1 \pmod{p}$ with $\beta \neq 1$.
Because $\beta$ is a cube root of unity, $(\beta x, y)$ still satisfies the elliptic curve equation (with $a = 0$): $(\beta x)^3 + b = \beta^3 x^3 + b = x^3 + b = y^2$.
The map $\phi : (x, y) \mapsto (\beta x, y)$ is an efficiently computable endomorphism of the curve, and on the subgroup generated by $P$ (of prime order $n$) it acts as multiplication by a scalar $\lambda$. Concretely, $\lambda$ is a nontrivial cube root of unity modulo $n$, i.e., a root of $\lambda^2 + \lambda + 1 \equiv 0 \pmod{n}$. Note that the endomorphism property is what makes this step cheap: computing $\lambda P$ costs only one field multiplication instead of a full scalar multiplication.
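As a concrete illustration, the sketch below (Python, using the secp256k1 parameters from the SEC 2 standard; its equation $y^2 = x^3 + 7$ satisfies both notes above) finds a $\beta$ and a $\lambda$ by exponentiation. Which of the two nontrivial cube roots pairs with which is checked later, in the step 3 sketch.

```python
# Minimal sketch: find beta (cube root of unity mod p) and lambda
# (cube root of unity mod n) for secp256k1: y^2 = x^3 + 7 over F_p.
p = 2**256 - 2**32 - 977
n = 0xFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFEBAAEDCE6AF48A03BBFD25E8CD0364141

def nontrivial_cube_root_of_unity(m: int) -> int:
    """Return z != 1 with z^3 = 1 (mod m); requires prime m = 1 (mod 3)."""
    assert (m - 1) % 3 == 0
    g = 2
    while True:
        z = pow(g, (m - 1) // 3, m)  # z^3 = g^(m-1) = 1 by Fermat
        if z != 1:                   # z != 1 whenever g is not a cube mod m
            return z
        g += 1

beta = nontrivial_cube_root_of_unity(p)
lam = nontrivial_cube_root_of_unity(n)

assert pow(beta, 3, p) == 1            # beta^3 = 1 (mod p)
assert (lam * lam + lam + 1) % n == 0  # lambda^2 + lambda + 1 = 0 (mod n)
```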
2. Decompose the scalar $k$ as $k \equiv k_1 + k_2 \lambda \pmod{n}$, where $k_1$ and $k_2$ have around half the bit length of $k$.
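One standard way to find such a pair, given in the GLV paper itself, runs the extended Euclidean algorithm on $(n, \lambda)$ to build short lattice vectors and then rounds. A minimal sketch, reusing `n` and `lam` from the previous snippet:

```python
import random

def rnd(num: int, den: int) -> int:
    """Round num/den to the nearest integer."""
    if den < 0:
        num, den = -num, -den
    return (2 * num + den) // (2 * den)

def glv_basis(n: int, lam: int):
    """Two short vectors (a, b) with a + b*lam = 0 (mod n), found via the
    extended Euclidean algorithm on (n, lam) as in the GLV paper."""
    seq = [(n, 0), (lam, 1)]          # pairs (r_i, t_i) with r_i = t_i*lam (mod n)
    while seq[-1][0] ** 2 >= n:       # iterate until r_i drops below sqrt(n)
        (r0, t0), (r1, t1) = seq[-2], seq[-1]
        q = r0 // r1
        seq.append((r0 - q * r1, t0 - q * t1))
    (r0, t0), (r1, t1) = seq[-2], seq[-1]
    q = r0 // r1
    r2, t2 = r0 - q * r1, t0 - q * t1  # one extra remainder step
    v1 = (r1, -t1)
    v2 = min((r0, -t0), (r2, -t2), key=lambda v: v[0] ** 2 + v[1] ** 2)
    return v1, v2

def glv_decompose(k: int, n: int, lam: int):
    """Return (k1, k2), possibly negative, with k = k1 + k2*lam (mod n)."""
    (a1, b1), (a2, b2) = glv_basis(n, lam)
    d = a1 * b2 - a2 * b1             # determinant of the basis, = +/- n
    c1 = rnd(b2 * k, d)               # round the coordinates of (k, 0)
    c2 = rnd(-b1 * k, d)              # expressed in the basis {v1, v2}
    return k - c1 * a1 - c2 * a2, -c1 * b1 - c2 * b2

k = random.randrange(1, n)
k1, k2 = glv_decompose(k, n, lam)
assert (k1 + k2 * lam - k) % n == 0
# k is ~256 bits; k1 and k2 come out around 128 bits each.
print(k.bit_length(), abs(k1).bit_length(), abs(k2).bit_length())
```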
3. Calculate the total. By the decomposition and the endomorphism, the following holds: $kP = (k_1 + k_2 \lambda)P = k_1 P + k_2 (\beta x, y)$. The two half-length multiplications can share their point doublings (e.g., via simultaneous double-and-add), which is where the savings in doublings come from.
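Tying the steps together, here is a minimal end-to-end check on secp256k1 with plain affine arithmetic and no optimizations (Python 3.8+ for `pow(x, -1, p)`), reusing `p`, `n`, `beta`, `lam`, and `glv_decompose` from the sketches above:

```python
# secp256k1 base point G, from the SEC 2 standard.
G = (0x79BE667EF9DCBBAC55A06295CE870B07029BFCDB2DCE28D959F2815B16F81798,
     0x483ADA7726A3C4655DA4FBFC0E1108A8FD17B448A68554199C47D08FFB10D4B8)

def ec_add(P, Q):
    """Affine point addition on y^2 = x^3 + 7 over F_p (None = infinity)."""
    if P is None:
        return Q
    if Q is None:
        return P
    (x1, y1), (x2, y2) = P, Q
    if x1 == x2 and (y1 + y2) % p == 0:
        return None                               # P + (-P) = infinity
    if P == Q:
        m = 3 * x1 * x1 * pow(2 * y1, -1, p) % p  # tangent slope
    else:
        m = (y2 - y1) * pow(x2 - x1, -1, p) % p   # chord slope
    x3 = (m * m - x1 - x2) % p
    return (x3, (m * (x1 - x3) - y1) % p)

def ec_mul(k, P):
    """Double-and-add scalar multiplication; negative k negates the point."""
    if k < 0:
        k, P = -k, (P[0], -P[1] % p)
    R = None
    while k:
        if k & 1:
            R = ec_add(R, P)
        P = ec_add(P, P)
        k >>= 1
    return R

# Pair lam with the right cube root: lam*G must equal (beta*x, y);
# if it does not, the matching root is beta^2.
if ec_mul(lam, G) != (beta * G[0] % p, G[1]):
    beta = beta * beta % p
assert ec_mul(lam, G) == (beta * G[0] % p, G[1])

# Check the total: k*G == k1*G + k2*phi(G), with phi(x, y) = (beta*x, y).
k = random.randrange(1, n)
k1, k2 = glv_decompose(k, n, lam)
phiG = (beta * G[0] % p, G[1])
assert ec_mul(k, G) == ec_add(ec_mul(k1, G), ec_mul(k2, phiG))
```

A production implementation would compute $k_1 P + k_2 \phi(P)$ with one interleaved double-and-add loop rather than two separate multiplications, so that both halves share a single chain of roughly half as many doublings.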
Comparison of Cost
In the GLV paper, the authors state that, thanks to the decomposed scalar, their algorithm reduces the number of doubling operations by around half compared to other algorithms. However, determining the exact cost benefit of GLV is quite difficult. Their analysis section states the following (paraphrased here for simplicity):
Whereas a contemporary "highly efficient algorithm" that uses exponent recoding and a sliding window costs 157 point doublings and 24 point additions for a 160-bit scalar multiplication, GLV costs around 79 point doublings and 38 point additions. If a point doubling costs 8 field multiplications and a point addition costs 11 field multiplications (in Jacobian form), then GLV has a run time of around 66% of the "highly efficient algorithm." The comparative run time improves as the bit length increases; for example, for 512-bit scalar multiplication the ratio drops to 62%.
From this description we can conclude that GLV roughly halves the number of point doublings, though it can slightly increase the number of point additions.
GLV Advantages
Parallelization
Splitting one scalar multiplication into two smaller, independent ones makes it possible to compute $k_1 P$ and $k_2 \phi(P)$ in parallel and combine the results with a single point addition; see the sketch below.
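As an illustration of the structure only (CPython threads will not actually run pure-Python arithmetic in parallel; a real implementation would use native code or multiple processes), reusing `ec_mul`, `ec_add`, `k`, `k1`, `k2`, `G`, and `phiG` from the sketches above:

```python
from concurrent.futures import ThreadPoolExecutor

# The two half-size multiplications are independent, so they can be
# scheduled concurrently and merged with one final point addition.
with ThreadPoolExecutor(max_workers=2) as pool:
    f1 = pool.submit(ec_mul, k1, G)
    f2 = pool.submit(ec_mul, k2, phiG)
    total = ec_add(f1.result(), f2.result())

assert total == ec_mul(k, G)
```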
Reduced overhead from shared memory
Memory for intermediate computations can be shared by first calculating $k_1 P$ and $k_2 P$. Once both are finished, the endomorphism is applied to the second result: since $k_2 \phi(P) = \phi(k_2 P)$, it suffices to multiply the $x$-coordinate of $k_2 P$ by $\beta$.
MSM
GLV is essential for optimizing multi-scalar multiplication (MSM), as it can decompose the scalars before running separate MSMs on the smaller parts. Moreover, when utilizing a bucket optimization technique (like Pippenger's) for MSM, if GLV is used before Signed Bucket Indexes, the number of buckets required per window is reduced to 1/4 of the original number.
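Concretely, a length-$m$ MSM over full-width scalars becomes two length-$m$ MSMs over half-width scalars, which can also be merged into a single length-$2m$ MSM with half as many windows:

$$\sum_{i=1}^{m} k_i P_i = \sum_{i=1}^{m} k_{i,1} P_i + \sum_{i=1}^{m} k_{i,2}\,\phi(P_i)$$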
References

Gallant, R., Lambert, R., Vanstone, S. "Faster Point Multiplication on Elliptic Curves with Efficient Endomorphisms." CRYPTO 2001.
Written by Ashley Jeong and Ryan Kim of Fractalyze