For a long time I’ve been fascinated with optimal control and the SpaceX landing algorithm, called G-FOLD (described in this paper by Açıkmese, Carson, and Blackmore).
So, in pursuit of landing a small-scale “model” rocket, I replicated the algorithm and hooked it into Kerbal Space Program as a physics testbed.
The code is available open-source (GPL License) at my GitHub repo.
As mentioned in the video, the guidance is optimal (solved via convex optimization), but the attitude control and path following (handled by the same algorithm) are non-optimal (proportional-integral-derivative control).
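To make that split concrete, here is a heavily simplified sketch (in Python, with cvxpy) of what the convex guidance subproblem looks like. This is not the formulation from the paper and not the code in my repo: the mass is held constant (the real problem models mass depletion with a change of variables), there are no pointing-cone or glide-slope constraints, and every number is made up. It just shows the shape of the thing: linear discretized translational dynamics, a thrust-magnitude bound relaxed through a slack variable, and a convex objective that a second-order cone solver can eat.

```python
# Simplified, constant-mass sketch of a G-FOLD-style guidance subproblem.
# Illustrative only -- not the full formulation from the paper.
import numpy as np
import cvxpy as cp

N, dt = 60, 0.5                  # horizon steps and step size (made up)
g = np.array([0.0, 0.0, -9.81])  # gravity
m = 1000.0                       # vehicle mass, held constant for simplicity
T_min, T_max = 4000.0, 15000.0   # thrust magnitude bounds in newtons (made up)

r = cp.Variable((3, N + 1))      # position trajectory
v = cp.Variable((3, N + 1))      # velocity trajectory
T = cp.Variable((3, N))          # thrust vector at each step
Gamma = cp.Variable(N)           # slack bounding the thrust magnitude

r0 = np.array([500.0, 200.0, 1000.0])   # initial position
v0 = np.array([-20.0, 5.0, -60.0])      # initial velocity

constraints = [r[:, 0] == r0, v[:, 0] == v0,
               r[:, N] == 0, v[:, N] == 0]        # land at the origin, at rest
for k in range(N):
    a_k = T[:, k] / m + g                          # net acceleration at step k
    constraints += [
        v[:, k + 1] == v[:, k] + dt * a_k,         # forward-Euler integration
        r[:, k + 1] == r[:, k] + dt * v[:, k] + 0.5 * dt**2 * a_k,
        cp.norm(T[:, k]) <= Gamma[k],              # relaxed magnitude constraint
        Gamma[k] >= T_min, Gamma[k] <= T_max,      # convex bounds on the slack
    ]

# With constant mass, total Gamma stands in for fuel burned.
prob = cp.Problem(cp.Minimize(dt * cp.sum(Gamma)), constraints)
prob.solve()
print(prob.status, prob.value)
```

The slack trick is the heart of it: a lower bound directly on ‖T‖ would be nonconvex, but bounding the slack Γ is convex, and the papers prove that for their full formulation the relaxation is lossless (the optimum satisfies ‖T‖ = Γ anyway).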


Basically, the G-FOLD algorithm generates a guidance solution (where to go, how to throttle, how to point), but it knows nothing about how long the vehicle takes to respond to attitude control inputs. The attitude controller / path follower then does feedback control, constantly trying to strike a balance between purely following the G-FOLD solution (the state it commands at the current time) and staying on the G-FOLD solution’s path.
Small perturbations at the beginning become big differences down the line, so the attitude controller tries to match the G-FOLD path.
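To make that a bit more concrete, here is a hedged sketch of the feedback layer (the class and function names are illustrative, not lifted from my repo): the G-FOLD acceleration is used as a feedforward term, and a PID correction pulls the vehicle toward a target that blends the time-indexed reference point with the nearest point on the planned path.

```python
# Illustrative sketch of the "follow the plan vs. stay on the path" blend.
# Not the controller from the repo; names and structure are made up.
import numpy as np

class PID:
    def __init__(self, kp, ki, kd):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.integral = 0.0
        self.prev_err = None

    def step(self, err, dt):
        self.integral += err * dt
        deriv = 0.0 if self.prev_err is None else (err - self.prev_err) / dt
        self.prev_err = err
        return self.kp * err + self.ki * self.integral + self.kd * deriv

def guidance_correction(t, pos, traj, pids, blend=0.5, dt=0.05):
    """Corrective acceleration on top of the G-FOLD feedforward acceleration.

    traj: dict with 'times' (N,), 'pos' (N,3), 'acc' (N,3) sampled from the
          G-FOLD solution.  blend = 0 chases the time-indexed reference;
          blend = 1 steers toward the nearest point on the path.
    """
    # Where the plan says we should be right now.
    i_t = min(np.searchsorted(traj['times'], t), len(traj['times']) - 1)
    ref_time = traj['pos'][i_t]

    # Where the planned path passes closest to where we actually are.
    i_near = np.argmin(np.linalg.norm(traj['pos'] - pos, axis=1))
    ref_near = traj['pos'][i_near]

    target = (1.0 - blend) * ref_time + blend * ref_near
    err = target - pos
    correction = np.array([pids[k].step(err[k], dt) for k in range(3)])
    return traj['acc'][i_t] + correction   # feedforward + feedback
```

With blend = 0 you get pure trajectory tracking (timing is respected, but path errors can grow); with blend = 1 you get pure path following (you hug the curve but can drift in time). The real controller also has to turn this commanded acceleration into an attitude and a throttle setting, which is exactly where not knowing the vehicle's response times hurts.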
Optimally (hehe), the attitude controller would know all about the response times and the overall angular and translational kinematics and dynamics of the rocket, and compute the perfect control inputs. Room for growth!
I attempted to write a convex-optimization attitude controller as well, but ran into second-order cone programming issues (I got stuck on convexifying the angular dynamics, IIRC). Email me if you have a solution to that!
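For anyone curious where the convexification stalls (this is my reading of the difficulty, not a result from the papers): the translational dynamics are linear in position, velocity, and thrust, which is why the guidance side reduces to a second-order cone program after the thrust-bound relaxation. The rotational side does not cooperate, because Euler's equations for a rigid body contain a term that is bilinear in the angular velocity:

$$ J\dot{\omega} = \tau - \omega \times J\omega $$

An equality constraint containing that ω × Jω coupling (and the quaternion kinematics, which multiply attitude by angular velocity) is nonconvex as written, so it cannot be dropped straight into an SOCP without a linearization, relaxation, or change of variables.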