We present a general approach for simulating and controlling a human character that is riding a bicycle. The two main components
of our system are offline learning and online simulation. We simulate the bicycle and the rider as an articulated rigid body system.
The rider is controlled by a policy that is optimized through offline learning. We apply policy search to learn the optimal policies,
which are parameterized with splines or neural networks for different bicycle maneuvers. We use Neuroevolution of Augmenting
Topologies (NEAT) to optimize both the parametrization and the parameters of our policies. The learned controllers are robust enough
to withstand large perturbations and allow interactive user control. The rider not only learns to steer and to balance in normal riding situations, but also learns to perform a wide variety of stunts, including wheelie, endo, bunny hop, front wheel pivot and back hop.
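To make the offline learning stage concrete, below is a minimal, hypothetical sketch of episodic policy search over a small set of policy parameters (e.g., spline knot values). The function simulate_episode is a stand-in for the full articulated rigid body simulation of the bicycle and rider, and a simple (mu + lambda) evolution strategy is used here in place of NEAT; all names and the toy objective are illustrative assumptions, not the paper's implementation.

import numpy as np

# Hypothetical stand-in for the physics simulation: returns a scalar episode
# return (e.g., balance quality / task progress) for one riding episode.
def simulate_episode(policy_params, rng):
    # Toy surrogate objective: reward parameters close to an (unknown) target.
    target = np.linspace(-1.0, 1.0, policy_params.size)
    noise = 0.01 * rng.standard_normal()
    return -np.sum((policy_params - target) ** 2) + noise

def policy_search(num_params=8, population=32, elites=4, generations=200, seed=0):
    """Episodic policy search with a simple (mu + lambda) evolution strategy;
    a stand-in for NEAT, which would also evolve the policy's structure."""
    rng = np.random.default_rng(seed)
    parents = [rng.standard_normal(num_params) for _ in range(elites)]
    for _ in range(generations):
        # Mutate the elite parents to form the offspring population.
        offspring = [p + 0.1 * rng.standard_normal(num_params)
                     for p in parents
                     for _ in range(population // elites)]
        candidates = parents + offspring
        # Evaluate every candidate policy in simulation (offline learning).
        returns = [simulate_episode(c, rng) for c in candidates]
        # Keep the best-performing policies as the next generation's parents.
        order = np.argsort(returns)[::-1]
        parents = [candidates[i] for i in order[:elites]]
    return parents[0]

if __name__ == "__main__":
    best = policy_search()
    print("best policy parameters:", np.round(best, 3))

In the actual system, each candidate policy would be rolled out in the bicycle simulation and scored by a maneuver-specific reward before being used online at interactive rates.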
We thank the anonymous reviewers for their helpful comments. We
thank Yuting Ye for her suggestions and all the members of the
Gatech Graphics Lab for their help on this work. We also thank
the GVU Center at Gatech. This work was funded by NSF
CCF-811485, IIS-11130934 and the Alfred P. Sloan Foundation.