Monday 23 June 2014

Week 5

The previous week was quite interesting. I wrote the wrapper function fmpz_lll_wrapper and documented it. Test code for it was also added. Thus, the first milestone of the project was reached. I also improved the parts of the module implemented so far according to mentor comments. In particular, the is_reduced functions and fmpz_mat_lll were changed to avoid storing the GS vectors, as suggested by Curtis on the mailing list. The function fmpz_lll_wrapper should now provide functionality identical to fpLLL and flint-1.6, but an additional feature in this version is that a Gram matrix can be supplied for reduction instead of the lattice basis. This should be useful, because the only other software I've seen that allows passing a Gram matrix as input is Magma, which isn't open source.
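To give an idea of how the Gram matrix feature is meant to be used, here is a rough usage sketch. The parameter and constant names (GRAM, EXACT, the context initialiser, the NULL transformation matrix) are the ones I'm currently working with and should be treated as provisional rather than a fixed interface:

    #include "fmpz_mat.h"
    #include "fmpz_lll.h"

    /* Sketch: reduce a lattice given only its exact Gram matrix G,
       with the usual (delta, eta) = (0.99, 0.51).  Names and the
       meaning of the NULL argument are assumptions for illustration. */
    void reduce_gram(fmpz_mat_t G)
    {
        fmpz_lll_t fl;

        fmpz_lll_context_init(fl, 0.99, 0.51, GRAM, EXACT);
        fmpz_lll_wrapper(G, NULL, fl);  /* NULL: assumed to mean "don't
                                           record the transformation matrix" */
    }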

The next step is LLL with removals. There seem to be two definitions of LLL_with_removal in the literature. Both versions accept a basis (say B) and a bound (say N). The intuitive definition seems to be that B is LLL reduced with bound N if every vector b in B satisfies norm(b) <= N. However, according to the papers by Mark van Hoeij et al., it actually deals with the lengths of the Gram-Schmidt (GS) vectors, i.e. the last vector is removed if its GS length exceeds N. Of course, for computational simplicity (and accuracy) I take N to bound the squared GS length. Another point to note is that in flint-1.6 the bound is compared with half the squared GS length, probably because a floating-point approximation of the GS lengths is used and halving avoids removing something valuable. Also, the flint-1.6 documentation is a bit ambiguous, as it only mentions that the bound is for the "target vectors". I'm going with van Hoeij's definition because he uses it in the context of factoring polynomials, which is a likely use of LLL in FLINT.
I am not sure whether LLL with removals needs a version taking a Gram matrix as input, since the only mentions of it in the literature relate to factoring polynomials, where a lattice basis is passed to the procedure. So I haven't written a Gram matrix variant for now, though I may implement it later if it turns out to be useful.
My version of LLL_with_removal works even when the bound is compared directly with the squared GS norm, because I avoid conversion to doubles and compare fmpz_t's instead. This was confirmed by the test code for fmpz_lll_d_with_removal. Documentation for fmpz_lll_d_with_removal and fmpz_mat_is_reduced_with_removal was added in the corresponding modules.
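For illustration, here is a minimal sketch of the removal test I have in mind, assuming the exact squared GS lengths are already available as fmpz_t's. The function name and calling convention are hypothetical, not the actual FLINT interface:

    #include "fmpz.h"

    /* Hypothetical sketch: given the exact squared Gram-Schmidt lengths
       gs_len_sq[0..d-1] and the (squared) bound N, return the new
       dimension after removing trailing vectors whose squared GS length
       exceeds N.  Since the comparison is exact (fmpz against fmpz),
       no halving of the bound is needed, unlike the floating-point
       check in flint-1.6. */
    slong newd_after_removals(const fmpz * gs_len_sq, slong d, const fmpz_t N)
    {
        slong newd = d;

        while (newd > 0 && fmpz_cmp(gs_len_sq + (newd - 1), N) > 0)
            newd--;

        return newd;
    }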

This week, I plan to write the fmpz_lll_d_heuristic_with_removal function, along with its test code and documentation. If things go smoothly, maybe I can even work on the arbitrary precision variant.

Monday 16 June 2014

Week 4

This last week has been a productive one. I implemented check_babai_heuristic (the Babai version using mpf_t's) and documented it. Helper functions for this were also implemented in the fmpz and fmpz_vec modules. The fmpz_lll_mpf2 function was also written. The "2" in the name signifies that it takes the precision to be used for the temporary variables and the GSO data (and the approximate Gram matrix if fl->rt == APPROX) as an argument; in other words, mpf_init2 is used for initialising any intermediate mpf_t's. The wrapper for this is fmpz_lll_mpf, which increases the precision until the LLL is successful or, God forbid, the precision maxes out. Test code for lll_mpf was added. The test code now also uses the fmpz_mat versions of is_reduced and is_reduced_gram, which use exact arithmetic to check whether a basis is LLL reduced and help cover edge cases.
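The idea behind the wrapper is simple precision escalation. Below is a rough sketch of it; the starting precision, the cap, and the return-value convention of fmpz_lll_mpf2 (0 meaning success) are assumptions for illustration, not the final interface:

    #include "fmpz_mat.h"
    #include "fmpz_lll.h"

    /* Sketch of the precision-escalation wrapper: try fmpz_lll_mpf2 at a
       starting precision and keep doubling it until reduction succeeds
       or an upper limit is hit. */
    int fmpz_lll_mpf_sketch(fmpz_mat_t B, fmpz_mat_t U, const fmpz_lll_t fl)
    {
        ulong prec = 64;               /* assumed starting precision in bits */
        const ulong prec_max = 16384;  /* arbitrary cap for illustration */

        while (prec <= prec_max)
        {
            if (fmpz_lll_mpf2(B, U, prec, fl) == 0)  /* assumed: 0 on success */
                return 0;
            prec *= 2;                 /* retry with more precision */
        }

        return -1;                     /* give up: precision maxed out */
    }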

Also, there's some good news to report: the lll_d and lll_d_heuristic functions now work on all test cases without failure when a Z_BASIS is the input matrix! With GRAM, they sometimes fail because a double does not have enough precision to store the large values involved. I can confirm that fmpz_lll_d works on the 35-dimensional lattice from the web page for the L^2 paper by Nguyen and Stehle (tested against fplll-4.0.4). I also tested fmpz_lll_mpf on the 55-dimensional lattice which makes NTL's LLL_FP (with delta=0.99) loop forever. It works! :) The output is the same as that produced when the matrix is passed to fpLLL with the "-m proved" option.

This week, I look forward to writing the LLL wrapper fmpz_lll_wrapper and documenting it, besides improving the parts of the module implemented so far and fixing bugs, if any. I also plan to document the functions which were left undocumented, and to move fmpq_mat_lll to the fmpz_mat module and rename it to avoid confusion.

Monday 9 June 2014

Week 3

And so starts another exciting week of coding. This week I look forward to implementing arbitrary precision floating point LLL using the mpf_t data type provided by GMP (which is a dependency of FLINT anyway). Following the discussion on the mailing list, and since it isn't a big issue, I've decided to get the code working with mpf for now, as some helper functions are already implemented in the mpf_vec and mpf_mat modules (the other option is Arb, which will be used eventually).

Last week, I wrote the heuristic version of LLL over doubles (which always uses heuristic dot products that attempt to detect cancellation) and worked on improving the textbook L^2 and Gram matrix variants. I think the textbook L^2 implementation needs more work before it can be declared ready for use: though it works on most of the test cases, there are still some corner cases to consider. For use in fmpz_lll_check_babai_heuristic, some changes were also made to the mpf_vec, fmpz and fmpz_vec modules to facilitate conversion between fmpz and mpf and vice versa. Also, _mpf_vec_dot2 now signals possible cancellation through an integer return value.
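To illustrate the idea behind the cancellation check (this is a toy version in doubles, not the actual _mpf_vec_dot2 code, and the 2^-30 threshold is an arbitrary choice for illustration):

    #include <math.h>

    /* Toy illustration of the cancellation heuristic: compute the dot
       product and, alongside it, the sum of the absolute values of the
       partial products.  If the result is tiny compared to that sum,
       heavy cancellation may have wiped out the significant bits.
       Returns 1 if the result is considered reliable, 0 otherwise. */
    int dot_with_cancellation_check(double * res, const double * a,
                                    const double * b, long len)
    {
        double sum = 0.0, abs_sum = 0.0;
        long i;

        for (i = 0; i < len; i++)
        {
            sum += a[i] * b[i];
            abs_sum += fabs(a[i] * b[i]);
        }

        *res = sum;
        return fabs(sum) >= ldexp(abs_sum, -30);  /* suspicious if |sum| << sum of |terms| */
    }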

Monday 2 June 2014

Week 2

Two weeks have passed since GSoC started and it has been an eventful journey so far. I've implemented the fmpz_lll_d function, which performs LLL over doubles on a basis matrix with integer (fmpz_t) entries, using either an approximate Gram matrix (with floating point entries) or an exact Gram matrix (with multiprecision integer entries). An LLL context object has been created for supplying LLL parameters and other options to the function, based on Bill sir's suggestion on the mailing list.
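For the curious, here is a rough sketch of the shape the context object currently has in my code; the field and constant names are what I'm using at the moment and the layout may well change:

    /* Sketch of the LLL context object (provisional layout). */
    typedef enum { GRAM, Z_BASIS } rep_type;    /* reduce a Gram matrix or a basis */
    typedef enum { APPROX, EXACT } gram_type;   /* approximate or exact Gram matrix */

    typedef struct
    {
        double delta;    /* Lovasz condition parameter, e.g. 0.99 */
        double eta;      /* size-reduction parameter, e.g. 0.51   */
        rep_type rt;     /* representation of the input           */
        gram_type gt;    /* how the Gram matrix is handled        */
    } fmpz_lll_struct;

    typedef fmpz_lll_struct fmpz_lll_t[1];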

I wrote test code for fmpz_lll_heuristic_dot, and though the tests pass, I still have to try out the different methods (including using _d_vec_dot_heuristic) to see which one is faster and gives more reliable results. A few helper functions were implemented for use in fmpz_lll_d and its test code. Two such functions are fmpz_mat_get_d_mat, for converting an fmpz_mat into a d_mat, and fmpz_mat_rq_d, for RQ decomposition, both in the fmpz_mat module. However, these require including d_mat.h in fmpz_mat.h, leading to a lot of recompilation every time d_mat.h is modified. I don't know if this is an issue or not.
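For reference, the conversion helper does nothing more than an entry-wise rounding to double. A minimal sketch, assuming entry access via fmpz_mat_entry and d_mat_entry and conversion with fmpz_get_d:

    #include "d_mat.h"
    #include "fmpz_mat.h"

    /* Sketch of the conversion: copy the entries of an fmpz_mat into a
       d_mat of the same dimensions, rounding to double.  Very large
       entries simply lose precision here, which is why the double-based
       LLL may need a fallback to higher precision. */
    void fmpz_mat_get_d_mat_sketch(d_mat_t B, const fmpz_mat_t A)
    {
        slong i, j;

        for (i = 0; i < A->r; i++)
            for (j = 0; j < A->c; j++)
                d_mat_entry(B, i, j) = fmpz_get_d(fmpz_mat_entry(A, i, j));
    }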

One thing I noticed when testing the LLL function is that when the approximate Gram matrix is used, all tests pass with randntrulike, randajtai and randsimdioph (with the bitlengths specified in the test code, of course), but some cases generated by randintrel (bits <= 200) need a shift to arbitrary precision. When the exact Gram matrix of the lattice basis is used, randntrulike sometimes fails as well. Is this to be expected, or is there an error in the implementation?

This week I plan to make changes to the code according to the mentor comments, and to write the fmpz_lll_d_heuristic function along with its test code and documentation.