Tuesday 9 June 2015

Week 2

In the second week, I implemented the lift_newton function, which lifts coprime index 1 factorizations. The helper functions used in the process were plug_delta, coeffx, coefs_operator and coefs_poly. coefs_operator takes a polynomial and returns an operator with the given valuation, while coefs_poly goes the other way, extracting the relevant part of an operator as a polynomial. plug_delta converts a polynomial into a differential operator by substituting δ = xD for x, and coeffx(f, e) returns the coefficient of x^e in f while simultaneously substituting x for δ. The initial version of lift_newton was buggy, but thanks to the review on the mailing list I was able to correct it. The plan now is to tie up any loose ends in the functions finished so far, all of which deal with factorization using the Newton polygon method. If no more problems turn up, I'll move on to the functions related to factorization using a Riccati solution: factor_riccati and ramification_of. That's all from me. Good-bye, until next time.
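
Not FriCAS code, but the identity behind plug_delta can be checked in a few lines of sympy: since δ(x^n) = n·x^n, a polynomial p turned into the operator p(δ) acts on the monomial x^n simply as p(n)·x^n.

    import sympy as sp

    x = sp.symbols('x')
    n = sp.symbols('n', positive=True)

    def delta(expr):
        # delta = x*D: differentiate with respect to x, then multiply by x
        return sp.expand(x * sp.diff(expr, x))

    print(delta(x**n))          # n*x**n
    print(delta(delta(x**n)))   # n**2*x**n, so p(delta) x^n = p(n) x^n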

Tuesday 2 June 2015

GSoC 2015 - Week 1

My proposal "Factorization of LODOs in FriCAS - the van Hoeij algorithm" has been accepted for Google Summer of Code 2015, so I'll again be working under lmonade this summer (and I'm glad to be back with the same umbrella organization). My mentors for this project are Waldek Hebisch and Ralf Hemmecke. The official coding period began last week (May 25), so I have started to code. The source code can be found here.

I've finished the implementation of the newtonpolygon function, which returns the extreme points of the Newton polygon of an operator, together with the associated slopes and Newton polynomials.
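
For illustration, here is a minimal Python sketch (not the FriCAS implementation) of the underlying computation: given the support points (i, j) of an operator, the extreme points of the Newton polygon are the vertices of the lower convex hull of those points, and the slopes come from consecutive vertices.

    def lower_hull(points):
        # monotone-chain lower hull; points are (i, j) lattice points
        pts = sorted(points)
        hull = []
        for p in pts:
            while len(hull) >= 2:
                (x1, y1), (x2, y2) = hull[-2], hull[-1]
                # pop hull[-1] if it lies on or above the segment hull[-2] -> p
                if (x2 - x1) * (p[1] - y1) <= (p[0] - x1) * (y2 - y1):
                    hull.pop()
                else:
                    break
            hull.append(p)
        return hull

    pts = [(0, 2), (1, 0), (2, 1), (3, 3)]
    hull = lower_hull(pts)
    slopes = [(y2 - y1) / (x2 - x1) for (x1, y1), (x2, y2) in zip(hull, hull[1:])]
    print(hull, slopes)   # extreme points, then the slope of each edge
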
Further, I also implemented the factor_newton function, which performs coprime index 1 factorizations; op_with_slope served as its helper. However, the lifting step of the algorithm is still to be done, and that is what I'm working on at present. So, the plan for this week is to finish the algorithm for coprime index 1 factorizations by implementing lift_newton and coefs_operator. I was a little stuck on how to actually implement the lifting algorithm, but Waldek clarified my doubts to a large extent on the fricas-devel mailing list. I'm still not completely confident, but I am able to work on the functions, so I guess I'll finish the implementation first and then modify it according to the comments received. Okay, that's it for now. See you next week!
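
The lifting step itself operates on noncommutative operators, but its shape is easiest to see in the classical commutative analogue: Hensel-style lifting of a coprime factorization along powers of x. The sketch below (sympy; a toy example, not lift_newton) performs one such step, using the Bezout cofactors of the two coprime factors at x = 0 to solve the correction equation.

    import sympy as sp

    x, y = sp.symbols('x y')

    # Toy instance: at x = 0, f factors into the coprime pair (y - 1)(y + 1).
    f = y**2 - (1 + x)
    g, h = y - 1, y + 1

    # Bezout cofactors of the factors at x = 0: s*g0 + t*h0 = 1.
    s, t, one = sp.gcdex(g.subs(x, 0), h.subs(x, 0), y)

    # One lifting step: the error e = f - g*h is O(x); solve the correction
    # equation via the cofactors, keeping the degree of the h-update small.
    e = sp.expand(f - g * h)
    q, r = sp.div(s * e, h, y)
    h1 = sp.expand(h + r)
    g1 = sp.expand(g + t * e + q * g)
    print(sp.expand(f - g1 * h1))   # x**2/4: the error dropped to O(x^2)

Iterating this step lifts the factorization to any desired order; the operator version performs the analogous update, with the Newton polynomials playing the role of the coprime factors.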

Wednesday 20 August 2014

Week 11 - 'pencils down'

Sorry for not updating this blog for the last couple of weeks; I forgot about it after the semester began. I'll use this post to summarize the project's progress over the past month. Before that, however, I'd like to note that the current module does indeed perform LLL, LLL with removals and ULLL, which was the primary goal of the project.

So, picking up where I left off in the previous post: adjoining the identity matrix to the basis matrix actually helps the numerical stability of ULLL, so I modified the ULLL function to be similar to the one in flint-1.6.
Also, the default LLL functions (which are essentially wrappers around ULLL) were added. The LLL-reducedness checkers were also changed to implement the certification method introduced by Villard. It does not always succeed, but when it does, the result is guaranteed; it is efficient, and much faster than using the provable precision in the L^2 algorithm or exact arithmetic for testing.

The Gram matrix version of the provable is_reduced functions was also added.
This was followed by an implementation of a modified LLL reduction algorithm due to Storjohann, which has better complexity in terms of the lattice dimension. However, it didn't fare well against the other LLL implementations (specifically classical LLL) during profiling. The mpf versions of the is_reduced functions were removed because they weren't provably correct, and mpfr versions were written in their place.
Adding a function which uses mpf and epsilons to emulate the effect of changing the rounding mode remains a possibility, as mentioned on flint-devel. Finally, the documentation was updated and should now (hopefully) explain all the strategies and types used in this module (along with the function descriptions, of course).

I'd like to thank my mentors, William Hart and Curtis Bright, for their valuable time, support and patience. This project wouldn't have been possible without their help. I'd also like to thank Fredrik Johansson for reviewing some of my code and Burcin Erocal from the lmonade organization for his advice. I'm grateful to the Google OSPO team for running the GSoC program. This has been a great summer and contributing to FLINT has been a tremendous learning experience for me.

Monday 28 July 2014

Week 10

This week, I removed the redundant store_trans parameter from the context object. Now, the transformation matrix is always updated unless the user passes NULL as its value. Also, in the special case where the user inputs a d x d identity matrix and the number of columns of the input basis is greater than the number of rows (i.e. the embedding dimension is greater than the lattice dimension), I avoid updating the basis vectors during the reduction itself and instead do a single matrix multiplication at the end.
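
To see why a single multiplication at the end suffices: if every row operation applied to B is mirrored on U, then the invariant B_current = U * B_original holds throughout, so the long basis vectors need only be touched once. A toy Python illustration of the bookkeeping, not the FLINT code:

    import numpy as np

    B = np.array([[4, 1], [3, 1]])
    U = np.eye(2, dtype=int)       # could be any matrix with as many rows as B
    B0 = B.copy()

    def row_op(i, j, q):
        # b_i <- b_i - q*b_j, mirrored on U
        B[i] -= q * B[j]
        U[i] -= q * U[j]

    row_op(0, 1, 1)                # one size-reduction step
    assert (U @ B0 == B).all()     # invariant: current B equals U times old B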

Also, as the mpf and wrapper versions of the LLL subroutines are supposed to guarantee that the output is indeed LLL-reduced, we need to check this as efficiently as possible. This means a floating-point test for reducedness should be tried before the exact arithmetic version (which is slower). Thus, I've added is_reduced functions to the module which test first using doubles, then using mpfs if the first test flags the matrix as non-reduced, and finally using fmpq.
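
As a reference point for what the cascade ultimately has to establish, here is an exact rational-arithmetic reducedness test in Python, a sketch rather than the fmpq code; delta and eta are the usual L^2 parameters, and the rows of B are assumed linearly independent:

    from fractions import Fraction

    def gso(B):
        # exact Gram-Schmidt over Q; assumes the rows of B are independent
        n, m = len(B), len(B[0])
        mu = [[Fraction(0)] * n for _ in range(n)]
        Bs = [[Fraction(e) for e in row] for row in B]
        norms = [Fraction(0)] * n
        for i in range(n):
            for j in range(i):
                mu[i][j] = sum(Fraction(B[i][k]) * Bs[j][k] for k in range(m)) / norms[j]
                for k in range(m):
                    Bs[i][k] -= mu[i][j] * Bs[j][k]
            norms[i] = sum(c * c for c in Bs[i])
        return mu, norms

    def is_reduced_exact(B, delta=Fraction(99, 100), eta=Fraction(51, 100)):
        mu, norms = gso(B)
        for i in range(1, len(B)):
            if any(abs(mu[i][j]) > eta for j in range(i)):
                return False    # size reduction violated
            if delta * norms[i-1] > norms[i] + mu[i][i-1]**2 * norms[i-1]:
                return False    # Lovasz condition violated
        return True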

An initial implementation of the ULLL function was also added. It differs from the one in flint-1.6 in that it does not adjoin anything to the matrix; instead, it uses the option to store the unimodular transformations. Also, the original did not use recursion on the truncated data, but the current code does. However, it still needs to be tested.
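
In rough outline (hypothetical helper names, simplified control flow), one ULLL pass truncates the basis to its high-order bits, LLL-reduces the truncated copy while recording the unimodular transformation, and then applies that transformation to the full-precision basis:

    def truncate_rows(B, keep_bits):
        # drop low-order bits so entries fit in roughly keep_bits bits
        # (Python's >> floors, which is fine for a sketch)
        top = max(abs(e).bit_length() for row in B for e in row)
        shift = max(top - keep_bits, 0)
        return [[e >> shift for e in row] for row in B]

    def ulll_pass(B, keep_bits=64):
        T = truncate_rows(B, keep_bits)
        U = lll_with_transform(T)   # hypothetical: reduces T, returns the transform
        d, n = len(B), len(B[0])
        return [[sum(U[i][k] * B[k][j] for k in range(d)) for j in range(n)]
                for i in range(d)]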

This week, I plan to add test code and documentation for the ULLL function and document the functions for checking LLL-reducedness.

Tuesday 22 July 2014

Week 9

This week, I updated the removals code to use a heuristic lower bound on the final GS norm, mentioned by Curtis on flint-devel, when removing the last vector during the execution of the reduction algorithm (after a failed Lovasz test involving kappa = d - 1). This was required because the proof of the relevant theorem in the L^2 paper assumes that the final vector is LLL-reduced when deriving the error bound on the accuracy of the norm. This doesn't matter for standard LLL, of course, but it matters here because we need to be sure about the accuracy, lest we remove something useful.

Besides this, I also unified the code for the cases where the exact Gram matrix is computed or given as input, as mentioned in the todo section of my previous post. This required factoring out the row exponents from the Gram matrix rather than from the basis itself, because the latter is not always available (i.e. when fl->rt == GRAM).

Also, I changed the code dealing with the matrix capturing the unimodular transformations so that it makes no assumptions about the matrix's column dimension. Earlier, I assumed U to be a d x d matrix to be updated so as to satisfy the relation B* = UB, where B* is the basis obtained by LLL-reducing B, i.e. U was the change-of-basis matrix. Now U can be any matrix with the same number of rows as B.

I think the main implementation of LLL and LLL with removals is completed now, modulo a few (hopefully minor) changes. So, I plan to at least start working on the ULLL function this week.

Monday 14 July 2014

Week 8

The eighth week of GSoC involved a lot of debugging to find the reasons for the bad behaviour of the removals function. In theory, the last vector can be removed from the basis whenever its squared GS length exceeds the bound at any point during the execution of the algorithm. However, this is not so straightforward in practice when the GSO is known only approximately. I think this may be why the version in flint-1.6 removed vectors only at the end: the algorithm behaves much better in that case, i.e. the norms are more accurate.
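
With exact arithmetic the removal rule itself is simple, as in this sketch (reusing the gso helper from the Week 10 sketch above); the entire difficulty described here is certifying the comparison when the norms are only known approximately:

    def remove_tail(B, bound):
        # exact form of the rule; the GS norms of the surviving vectors are
        # unchanged by dropping trailing ones, so iterating is safe
        _, norms = gso(B)       # gso as in the Week 10 sketch above
        d = len(B)
        while d > 1 and norms[d - 1] > bound:
            d -= 1
        return B[:d]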

I added the wrapper function for LLL with removals optimised for knapsack lattices, along with test code for it. Knapsack LLL differs from the textbook version in that it occasionally performs early size reductions on the input basis, which speeds things up in the knapsack case. Speaking of LLL with removals, the code was modified to remove the numerical inaccuracies plaguing it. Earlier, because a floating-point approximation of the Gram-Schmidt orthogonalisation was used, a norm could be incorrectly flagged as greater than the bound, which led to the removal of some useful vectors. The documentation was also updated and is now up to date with the module. Thus, the prerequisites for ULLL are now complete.

This week I plan to unify the code for performing LLL in the two cases where the GSO is computed exactly: when the input is a lattice basis whose exact Gram matrix is computed internally, and when the input is the Gram matrix itself.

Tuesday 8 July 2014

Week 7

This week, I added support for LLL with removals on a Gram matrix, which finds application in vector rational number reconstruction. Also, the inaccuracy checks (incorrect Lovasz tests and too many Babai loops) were improved to be similar to those used in fpLLL.
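
The reason a Gram-only variant is possible at all is that the GSO data can be computed from inner products alone. A small exact sketch of that recurrence in Python (essentially the Cholesky-style recurrence also used by L^2), not the FLINT code:

    from fractions import Fraction

    def gso_from_gram(G):
        # r[i][j] = <b_i, b_j*>; r[i][i] = |b_i*|^2; mu[i][j] = r[i][j]/r[j][j]
        n = len(G)
        r = [[Fraction(0)] * n for _ in range(n)]
        mu = [[Fraction(0)] * n for _ in range(n)]
        for i in range(n):
            for j in range(i + 1):
                r[i][j] = Fraction(G[i][j]) - sum(mu[j][k] * r[i][k] for k in range(j))
                if j < i:
                    mu[i][j] = r[i][j] / r[j][j]
        return mu, [r[i][i] for i in range(n)]

    # e.g. for b1 = (1, 0), b2 = (1, 1): G = [[1, 1], [1, 2]] gives |b2*|^2 = 1
    print(gso_from_gram([[1, 1], [1, 2]])[1])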

Also added was the LLL function specialized to knapsack-type lattices, for which the Babai functions were implemented last week. It performs early size reductions, which tend to make things faster for knapsack-type lattice bases, and it is a prerequisite for the ULLL with removals function. Another feature implemented was early removal of vectors in LLL with removals: vectors whose squared GS lengths exceed the input bound are now removed from the basis during the reduction algorithm itself, to avoid the unnecessary overhead of keeping them updated during the computations.

This week, I plan to write the wrapper function for knapsack LLL with removals and fix any remaining loose ends. The plan is to get the existing module ready before I start working on the actual ULLL algorithm itself.