
Yeah, it is very good. You can solve problems incrementally. Change a few params, add a few more constraints, and rerun using the previous result.
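A minimal sketch of that change-and-rerun workflow using SciPy's `linprog` (which re-solves from scratch rather than warm-starting, but illustrates the pattern; the LP data here is made up for illustration):

```python
import numpy as np
from scipy.optimize import linprog

# min -x - 2y   s.t.   x + y <= 4,   0 <= x, y <= 3
c = np.array([-1.0, -2.0])
A_ub = np.array([[1.0, 1.0]])
b_ub = np.array([4.0])
bounds = [(0, 3), (0, 3)]

res1 = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds)
print(res1.x, res1.fun)   # x=1, y=3, objective -7

# Append one more constraint (y <= 2) and re-solve.
A_ub2 = np.vstack([A_ub, [0.0, 1.0]])
b_ub2 = np.append(b_ub, 2.0)
res2 = linprog(c, A_ub=A_ub2, b_ub=b_ub2, bounds=bounds)
print(res2.x, res2.fun)   # x=2, y=2, objective -6
```

A solver with true warm-start support would begin the second solve from the first basis instead of discarding it.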


What you mentioned is quite common in linear programming software.

E.g., think of a linear programming tableau: append a new constraint with its own, new artificial variable. Then do elementary row operations to zero out the new constraint's row in the columns of the old basic variables, and to zero out the new artificial column in all rows except the row of the new constraint. Then do simplex pivots to drive the new artificial variable out of the basis.
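The append-and-row-reduce step can be sketched in NumPy. This is a toy tableau, not how a production solver stores things:

```python
import numpy as np

# Optimal tableau for: max x1 + x2  s.t.  x1 <= 2, x2 <= 3  (slacks s1, s2).
# Columns: x1, x2, s1, s2 | rhs.  Basis = {x1, x2} (rows 0, 1).
T = np.array([[1.0, 0.0, 1.0, 0.0, 2.0],   # x1 = 2
              [0.0, 1.0, 0.0, 1.0, 3.0]])  # x2 = 3
basis = [0, 1]  # column index of the basic variable in each row

# New constraint x1 + x2 <= 4, with its own slack s3 and artificial a.
# Widen the old rows with zero columns for s3 and a, then append the row.
T = np.hstack([T[:, :-1], np.zeros((2, 2)), T[:, -1:]])
new_row = np.array([1.0, 1.0, 0.0, 0.0, 1.0, 1.0, 4.0])  # x1,x2,s1,s2,s3,a | rhs
T = np.vstack([T, new_row])

# Elementary row operations: zero out the new row in the old basic columns.
# The artificial column is already zero in the old rows, so it stays a unit column.
for i, col in enumerate(basis):
    T[-1] -= T[-1, col] * T[i]

print(T[-1])  # -> [ 0.  0. -1. -1.  1.  1. -1.]
```

The negative right-hand side in the new row signals that the old optimum (x1=2, x2=3) violates the added constraint; the remaining pivots would restore feasibility and drive the artificial variable out of the basis.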

Changing costs is also easy and standard; it falls under the convenient little operations called post-optimality analysis.
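A sketch of why cost changes are cheap: with the basis fixed, new costs only change the reduced-cost row, which is recomputed from the same basis inverse. Toy data and an assumed optimal basis, just to show the arithmetic:

```python
import numpy as np

# min c @ x,  A x = b,  x >= 0, with an optimal basis already in hand.
# Changing c reuses the basis factorization (here, simply B's inverse).
A = np.array([[1.0, 0.0, 1.0, 0.0],
              [0.0, 1.0, 0.0, 1.0]])
B_cols, N_cols = [0, 1], [2, 3]
B_inv = np.linalg.inv(A[:, B_cols])

def reduced_costs(c):
    y = c[B_cols] @ B_inv                  # simplex multipliers
    return c[N_cols] - y @ A[:, N_cols]    # reduced costs of nonbasic vars

print(reduced_costs(np.array([-1.0, -1.0, 0.0, 0.0])))
# -> [1. 1.]: all nonnegative, so the old basis is still optimal
print(reduced_costs(np.array([ 1.0, -1.0, 0.0, 0.0])))
# -> [-1. 1.]: a negative reduced cost, so one pivot (or more) is needed
```

If the new reduced costs stay nonnegative, the old solution is still optimal with no pivoting at all.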

There are more details, but covering all of them would take much of a course in linear programming.

E.g., the IBM Optimization Subroutine Library (OSL) is quite nice work -- likely owing much to Ellis Johnson when he was still at IBM, before he went to Georgia Tech (where George Nemhauser has long been) -- and it does what you mentioned. Once I wrote some 0-1 integer linear programming software with some Lagrangian relaxation on top of the OSL; it was nice.

From Bixby, the guy who did CPLEX, plus two Georgia Tech students, etc., I would expect more than the usual, or than just the OSL! I'd guess they have some good results in integer programming, numerical stability, much larger problems, exploitation of multi-core processors, and more.



