Our paper Large Modulus Ring-LWE ≥ Module-LWE — together with Amit Deo — was accepted at AsiaCrypt 2017. Here’s the abstract:
We present a reduction from the module learning with errors problem (MLWE) in dimension $d$ and with modulus $q$ to the ring learning with errors problem (RLWE) with modulus $q^d$. Our reduction increases the LWE error rate $\alpha$ by a quadratic factor in the ring dimension $n$ and a square root in the module rank $d$ for power-of-two cyclotomics. Since, on the other hand, MLWE is at least as hard as RLWE, we conclude that the two problems are polynomial-time equivalent. As a corollary, we obtain that the RLWE instance described above is equivalent to solving lattice problems on module lattices. We also present a self-reduction for RLWE in power-of-two cyclotomic rings that halves the dimension and squares the modulus while increasing the error rate by a similar factor as our MLWE-to-RLWE reduction. Our results suggest that, when discussing hardness, we should drop the RLWE/MLWE distinction in favour of distinguishing problems by the module rank required to solve them.
Our reduction is an application of the main result from Classical Hardness of Learning with Errors in the context of MLWE. In its simplest form, that reduction proceeds from the observation that for $\mathbf{a}, \mathbf{s} \in \mathbb{Z}_q^d$ with $\mathbf{s}$ small it holds that

$$q^{d-1} \cdot \langle \mathbf{a}, \mathbf{s} \rangle \approx \left(\sum_{i=0}^{d-1} q^{i} \cdot a_{i}\right) \cdot \left(\sum_{i=0}^{d-1} q^{d-i-1} \cdot s_{i}\right) \bmod q^{d}.$$

Thus, if there exists an efficient algorithm solving the problem in $\mathbb{Z}_{q^d}$, we can use it to solve the problem in $\mathbb{Z}_q^d$.
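This observation is easy to check numerically. The sketch below (my own illustration, with arbitrary small parameters, not from the paper) verifies that the two sides agree modulo $q^d$ up to an error term that is small relative to $q^d$:

```python
# Check the modulus-switching observation:
#   q^(d-1) * <a, s>  ≈  (sum_i q^i a_i) * (sum_i q^(d-i-1) s_i)  (mod q^d)
# for a uniform vector a and a small vector s. Illustrative sketch only.
import random

q, d = 12289, 3
random.seed(0)

a = [random.randrange(q) for _ in range(d)]       # uniform entries
s = [random.randrange(-2, 3) for _ in range(d)]   # small secret entries

lhs = q ** (d - 1) * sum(ai * si for ai, si in zip(a, s)) % q ** d
rhs = (sum(q ** i * ai for i, ai in enumerate(a))
       * sum(q ** (d - i - 1) * si for i, si in enumerate(s))) % q ** d

# the cross terms with i < j survive the reduction mod q^d, but each is
# bounded by 2 * q^(d-1), i.e. small relative to the modulus q^d
diff = (lhs - rhs) % q ** d
diff = min(diff, q ** d - diff)          # centred representative
assert diff < 2 * d * d * q ** (d - 1)   # error << q^d
```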
In our paper, we essentially show that we can replace the integers mod $q$ resp. $q^d$ with the ring of integers $R$ of a Cyclotomic field (considered mod $q$ resp. $q^d$). That is, we get the analogous reduction from $R_q^d$ (MLWE) to $R_{q^d}$ (RLWE). The bulk of our paper is concerned with making sure that the resulting error distribution is sound. This part differs from the Classical Hardness paper since our target distribution is in $R$ rather than $\mathbb{Z}$.
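As a toy illustration of the ring analogue (my own sketch, using a naive negacyclic multiplication in $\mathbb{Z}[x]/(x^n+1)$ and arbitrary small parameters), the same identity can be checked coefficient-wise when the integer products are replaced by ring products:

```python
# Toy analogue over R = Z[x]/(x^n + 1): the modulus-switching identity
# holds coefficient-wise with negacyclic polynomial products. Sketch only.
import random

def mul(f, g, n, mod):
    """Multiply f, g in Z[x]/(x^n + 1), coefficients reduced mod `mod`."""
    h = [0] * n
    for i, fi in enumerate(f):
        for j, gj in enumerate(g):
            sign = -1 if i + j >= n else 1
            h[(i + j) % n] = (h[(i + j) % n] + sign * fi * gj) % mod
    return h

n, q, d = 4, 12289, 2
random.seed(1)
a = [[random.randrange(q) for _ in range(n)] for _ in range(d)]       # uniform
s = [[random.randrange(-2, 3) for _ in range(n)] for _ in range(d)]   # small

Q = q ** d
# lhs: q^(d-1) * sum_i a_i * s_i  in R_{q^d}
lhs = [0] * n
for ai, si in zip(a, s):
    prod = mul(ai, si, n, Q)
    lhs = [(x + q ** (d - 1) * p) % Q for x, p in zip(lhs, prod)]
# rhs: (sum_i q^i a_i) * (sum_i q^(d-i-1) s_i)  in R_{q^d}
A = [sum(q ** i * a[i][k] for i in range(d)) % Q for k in range(n)]
S = [sum(q ** (d - i - 1) * s[i][k] for i in range(d)) % Q for k in range(n)]
rhs = mul(A, S, n, Q)

# each coefficient agrees up to an error small relative to q^d
for x, y in zip(lhs, rhs):
    delta = min((x - y) % Q, (y - x) % Q)
    assert delta < 2 * n * d * d * q ** (d - 1)
```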
Interpretation
Since the Module-LWE problem has been suggested (e.g. here) as a hedge against algebraic attacks on Ring-LWE, a natural question is how to interpret this reduction. One might be tempted to conclude that the suggestion is simply wrong. That is, looked at in abstraction from the size of the modulus (for a constant error rate $\alpha$), Module-LWE seems no more secure than Ring-LWE and thus a poor hedging strategy.
However, this abstraction from the size of the modulus is misleading (hat tip to our anonymous referees and Léo Ducas). If there exists an efficient algorithm that solves Ring-LWE for a fixed error rate but any modulus, then this algorithm can also be used to solve any LWE instance. This was already remarked in the Classical Hardness paper and we give a sketch in Appendix B of our paper: convert LWE to a 1-dimensional instance with exponential modulus and then construct an RLWE instance from this 1-dimensional instance. Such an adversary might exist, i.e. we cannot in principle rule out that LWE might be easy, but it will not be an adversary exploiting the special algebraic structure of Ring-LWE, since LWE has none of that.
Furthermore, consider the dual attack on plain LWE using a set of samples $(\mathbf{A}, \mathbf{c}) \in \mathbb{Z}_q^{m \times n} \times \mathbb{Z}_q^{m}$. In order to perform this attack, a short vector $\mathbf{v}$ in the $m$-dimensional dual lattice $\{\mathbf{v} \in \mathbb{Z}^m \mid \mathbf{v}^{T} \cdot \mathbf{A} \equiv 0 \bmod q\}$ to the lattice formed by the columns of $\mathbf{A}$ must be found. This dual lattice has volume $q^{n}$ whp. Using the Gaussian heuristic, the shortest vector in such a lattice is expected to have length $\approx \sqrt{m} \cdot q^{n/m}$. The attack proceeds by noticing that the inner product $\langle \mathbf{v}, \mathbf{c} \rangle = \langle \mathbf{v}, \mathbf{e} \rangle \bmod q$ should be small in the case of LWE samples and uniform otherwise. In particular, the smaller $|\langle \mathbf{v}, \mathbf{e} \rangle|$ relative to the modulus, the more certain we are that the samples were in fact from an LWE distribution. Concretely, for a fixed error rate $\alpha$, we have $|\langle \mathbf{v}, \mathbf{e} \rangle| \approx \sqrt{m} \cdot q^{n/m} \cdot \alpha q$ in the case where the modulus is $q$. On the other hand, if the modulus is $q^{d}$, we expect $|\langle \mathbf{v}, \mathbf{e} \rangle| \approx \sqrt{m} \cdot q^{d \cdot n/m} \cdot \alpha q^{d}$, which is larger relative to the modulus for fixed $m$. Therefore, the performance of the dual attack diminishes for growing $d$. Indeed, our output RLWE instance in modulus $q^{d}$ has noise of size at least $q^{d-1}$. Thus, our RLWE output instances cannot be solved by finding short vectors in lattices of module rank 2 ($m = 2n$ in the above notation) using standard dual attacks, in contrast to typical RLWE instances used in the literature. In other words: “Our results suggest that, when discussing hardness, we should drop the RLWE/MLWE distinction in favour of distinguishing problems by the module rank required to solve them.”
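The following back-of-the-envelope sketch (illustrative parameter values only, using the heuristic estimates above) compares the expected size of $\langle \mathbf{v}, \mathbf{e} \rangle$ relative to the modulus for $d = 1$ and $d = 4$:

```python
# Compare the (heuristic) dual-attack distinguisher quality for modulus q
# versus modulus q^d at a fixed error rate alpha, using the estimate
#   |<v, e>| ≈ sqrt(m) * Q^(n/m) * alpha * Q   for modulus Q = q^d.
# All parameter values here are illustrative assumptions.
from math import sqrt

def dual_ratio(n, m, q, alpha, d=1):
    """Expected |<v, e>| relative to the modulus, by the Gaussian heuristic."""
    Q = q ** d
    v_len = sqrt(m) * Q ** (n / m)   # expected shortest dual vector length
    inner = v_len * alpha * Q        # expected size of <v, e>
    return inner / Q                 # small => attack distinguishes well

n, q, alpha = 256, 7681, 0.005
m = 2 * n                            # module rank 2, i.e. m = 2n as above
r1 = dual_ratio(n, m, q, alpha, d=1)
r2 = dual_ratio(n, m, q, alpha, d=4)
assert r2 > r1                       # the attack degrades as d grows
```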
Public-Key Encryption from Large Modulus RLWE
As alluded to above, large modulus RLWE instances are not very popular in cryptographic constructions. A reason for this is that they tend not to work very well in those constructions. Recall, for example, the simple public-key encryption scheme from the original RLWE paper, which serves as the blueprint for many subsequent constructions. The scheme publishes a public key $(a, b) = (a, a \cdot s + e)$, where both $s$ and $e$ are small elements from the ring of integers $R$ of a power-of-two Cyclotomic field, considered mod $q$. Encryption of some polynomial $\mu$ with $\{0,1\}$ coefficients is then performed by sampling short $v, e', e''$ and outputting:

$$u = a \cdot v + e' \bmod q$$
$$c = b \cdot v + e'' + \lfloor q/2 \rfloor \cdot \mu \bmod q$$

The decryption algorithm computes

$$c - u \cdot s = e \cdot v + e'' - e' \cdot s + \lfloor q/2 \rfloor \cdot \mu \bmod q.$$

Let $B$ be a bound on the norm of $s, e, v, e', e''$. Clearly, the final message will have noise of norm $\approx B^2$. Thus, to ensure correct decryption, $q$ has a quadratic dependency on $B$. As a consequence, in this construction, increasing $q$ while keeping the noise size $B$ fixed can only reduce security by increasing the gap between noise and modulus.
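A toy (and entirely insecure) implementation of the scheme sketched above can make the quadratic noise growth concrete; all parameter choices below are illustrative assumptions, not values from the original paper:

```python
# Toy (insecure) RLWE encryption in R_q = Z_q[x]/(x^n + 1), illustrating
# that the decryption noise grows roughly like B^2. Illustrative only.
import random

def mul(f, g, n, q):
    """Negacyclic product of f and g in Z_q[x]/(x^n + 1)."""
    h = [0] * n
    for i, fi in enumerate(f):
        for j, gj in enumerate(g):
            sign = -1 if i + j >= n else 1
            h[(i + j) % n] = (h[(i + j) % n] + sign * fi * gj) % q
    return h

def add(f, g, q):
    return [(x + y) % q for x, y in zip(f, g)]

def small(n, B):
    return [random.randrange(-B, B + 1) for _ in range(n)]

n, q, B = 16, 2 ** 15, 2
random.seed(2)

# key generation: pk = (a, b = a*s + e)
s, e = small(n, B), small(n, B)
a = [random.randrange(q) for _ in range(n)]
b = add(mul(a, s, n, q), e, q)

# encrypt a 0/1 message mu: u = a*v + e1, c = b*v + e2 + floor(q/2)*mu
mu = [random.randrange(2) for _ in range(n)]
v, e1, e2 = small(n, B), small(n, B), small(n, B)
u = add(mul(a, v, n, q), e1, q)
c = add(add(mul(b, v, n, q), e2, q), [q // 2 * m for m in mu], q)

# decrypt: c - u*s = e*v + e2 - e1*s + floor(q/2)*mu; noise ~ n*B^2 << q/4
w = [(x - y) % q for x, y in zip(c, mul(u, s, n, q))]
dec = [1 if q // 4 <= wi < 3 * q // 4 else 0 for wi in w]
assert dec == mu
```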
However, this issue can be avoided (and is avoided when basing the construction on the MLWE assumption) by picking some $d > 1$, at the cost of publishing more samples in the public key. For example, if $d = 2$ the public key becomes

$$(a_0, a_1, b_0, b_1) = (a_0, a_1, a_0 \cdot s + e_0, a_1 \cdot s + e_1)$$

where $e_0, e_1$ have norm $B$. Encryption of some polynomial $\mu$ is then performed by sampling short $v_0, v_1, e', e''$ with norm $B'$ and outputting

$$u = a_0 \cdot v_0 + a_1 \cdot v_1 + e' \bmod q$$
$$c = b_0 \cdot v_0 + b_1 \cdot v_1 + e'' + \lfloor q/2 \rfloor \cdot \mu \bmod q$$

The decryption algorithm computes

$$c - u \cdot s = e_0 \cdot v_0 + e_1 \cdot v_1 + e'' - e' \cdot s + \lfloor q/2 \rfloor \cdot \mu \bmod q.$$

The security of the public key reduces to the hardness of RLWE in dimension $n$ with modulus $q$ and noise size $B$ as before. The security of encryptions reduces to the hardness of MLWE in dimension $d$ over ring dimension $n$, modulus $q$ and noise size $B'$, i.e. the level of security is maintained for $B' < B$ by increasing the dimension. While we still require $q > B \cdot B'$ (up to small factors), the size of $q$ can be reduced at the cost of increasing $d$.
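Analogously, here is a toy (insecure, illustrative) $d = 2$ module variant, where the decryption noise grows like $B \cdot B'$ rather than $B^2$; again, all parameter values are my own assumptions:

```python
# Toy (insecure) d = 2 module variant: the public key holds two RLWE
# samples under the same secret s, and the decryption noise now scales
# with B * B' rather than B^2. Illustrative parameters only.
import random

def mul(f, g, n, q):
    """Negacyclic product of f and g in Z_q[x]/(x^n + 1)."""
    h = [0] * n
    for i, fi in enumerate(f):
        for j, gj in enumerate(g):
            sign = -1 if i + j >= n else 1
            h[(i + j) % n] = (h[(i + j) % n] + sign * fi * gj) % q
    return h

def add(f, g, q):
    return [(x + y) % q for x, y in zip(f, g)]

def small(n, B):
    return [random.randrange(-B, B + 1) for _ in range(n)]

n, q, d = 16, 2 ** 15, 2
B, B1 = 8, 1                    # key noise B, encryption noise B' = 1
random.seed(3)

# key generation: pk = (a_0, a_1, b_0 = a_0*s + e_0, b_1 = a_1*s + e_1)
s = small(n, B)
a = [[random.randrange(q) for _ in range(n)] for _ in range(d)]
e = [small(n, B) for _ in range(d)]
b = [add(mul(a[i], s, n, q), e[i], q) for i in range(d)]

# encrypt: u = sum_i a_i*v_i + e', c = sum_i b_i*v_i + e'' + floor(q/2)*mu
mu = [random.randrange(2) for _ in range(n)]
v = [small(n, B1) for _ in range(d)]
e1, e2 = small(n, B1), small(n, B1)
u, c = e1, add(e2, [q // 2 * m for m in mu], q)
for i in range(d):
    u = add(u, mul(a[i], v[i], n, q), q)
    c = add(c, mul(b[i], v[i], n, q), q)

# decrypt: c - u*s = sum_i e_i*v_i + e'' - e'*s + floor(q/2)*mu
w = [(x - y) % q for x, y in zip(c, mul(u, s, n, q))]
dec = [1 if q // 4 <= wi < 3 * q // 4 else 0 for wi in w]
assert dec == mu
```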
Finally, we may think of Regev’s original encryption scheme as one extreme corner of this design space (for LWE) with $d = \mathcal{O}(n \log q)$, where the $v_i$ are binary and where $e' = e'' = 0$. That is, in the construction above, we can replace the Module-LWE assumption by the leftover hash lemma if $d$ is sufficiently big. Of course, this comes at the cost of a significant increase in the size of the public key. On the other hand, we would now only require $q$ to have a linear (rather than quadratic) dependency on $B$.