I am currently attending ESC 2013 in Mondorf, Luxembourg. Over dinner someone mentioned that there is no known reduction from LPN to lattice reduction, i.e., it is not known that you can solve LPN with LLL and friends. This seems rather strange to me, because the standard lattice attack on LWE seems to carry over essentially as is:

sage: n = 100 # number of variables
sage: m = 400 # number of samples
sage: A = random_matrix(GF(2), m, n)
sage: s = random_vector(GF(2), n) # our secret
sage: p = 0.25 # our error rate
sage: v = A*s + vector(GF(2), [1 if random() < p else 0 for _ in range(m)])
# we are searching for a short vector in the dual lattice
sage: B = A.kernel().matrix()
sage: L = B.change_ring(ZZ).LLL()
# because a short vector there means few additions, which means a higher bias in the sum
sage: Av = A.augment(v)
sage: sum(map(lambda x: abs(x) % 2, L[0])), (L[0]*Av)[-1]

Of course, this means running lattice reduction many times, but still: what am I missing?
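The "few additions means a higher bias" heuristic above is just the piling-up lemma: the XOR of $k$ independent error bits with rate $p$ equals 0 with probability $1/2 + (1-2p)^k/2$. A minimal plain-Python simulation of this (function name `xor_bias` is mine, purely illustrative):

```python
import random

def xor_bias(weight, p, trials=100_000, seed=1):
    """Empirical bias of the XOR of `weight` independent Bernoulli(p) error bits.

    By the piling-up lemma the XOR is 0 with probability
    1/2 + (1 - 2p)**weight / 2, so low-weight parity checks retain a
    usable bias while high-weight checks look essentially uniform.
    """
    rng = random.Random(seed)
    zeros = 0
    for _ in range(trials):
        parity = sum(1 for _ in range(weight) if rng.random() < p) % 2
        zeros += (parity == 0)
    return zeros / trials - 0.5  # empirical deviation from 1/2

# With p = 0.25 a weight-4 check should keep a bias near (1/2)**4 / 2,
# i.e. roughly 0.03, whereas a weight-40 check has bias about 2**-41
# and is empirically indistinguishable from a fair coin.
```

This is why the Hamming weight of the dual vector, not its Euclidean length, is what determines how many samples the distinguisher needs.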

If you search for the shortest vector in your lattice under the L2 norm, you have no guarantee of finding the element closest to 0 in Hamming distance after reduction mod 2…
It is possible to add the vectors (0,…,0,2,0,…,0) to realise the reduction mod 2, but then they themselves would be the shortest vectors.
If we increase the modulus, it works, but then the problem is LWE and the algorithm is well-known.
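This objection can be made concrete with a toy example (plain Python, illustrative only): once the vectors (0,…,0,2,0,…,0) are in the lattice, the L2-shortest vectors are exactly those trivial ones, which reduce to 0 mod 2 and yield a useless parity check, while a genuinely useful odd-entry vector is longer under L2.

```python
def l2_norm_sq(v):
    """Squared Euclidean norm of an integer vector."""
    return sum(x * x for x in v)

def hamming_weight_mod2(v):
    """Number of odd entries, i.e. Hamming weight after reduction mod 2."""
    return sum(x % 2 != 0 for x in v)

u = (2, 0, 0, 0, 0)  # one of the added mod-2 generators: L2-short, trivial mod 2
w = (1, 1, 1, 1, 1)  # an odd-entry vector: longer under L2, heavy mod 2

assert l2_norm_sq(u) < l2_norm_sq(w)    # LLL prefers u (norm^2 = 4 vs 5) ...
assert hamming_weight_mod2(u) == 0      # ... but u reduces to the zero check
assert hamming_weight_mod2(w) == 5      # while w carries all its weight mod 2
```

So an L2-oriented reduction of this lattice is drawn towards vectors that carry no information mod 2, which is the gap between the LWE setting and LPN.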

Lattice reduction performs badly over Z/2Z…

Not knowing a reduction and lattice reduction not being the most efficient technique are not the same thing, though.
