Sparse-Input Neural Network Augmentations for Differentiable Simulators
Reviewer 1
The authors propose a simulator in which various physical parameters are augmented with neural network outputs. Because the simulator is differentiable, gradients with respect to these neural scalars can be computed and fed to various optimizers. The authors leverage a sparse-group lasso loss to learn minimally invasive parameters, and provide experiments in the double-pendulum setting.
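For readers unfamiliar with the loss in question, here is a minimal sketch of a sparse-group-lasso penalty as this reviewer understands it (this is the reviewer's illustration, not the authors' code; the grouping over first-layer columns and the coefficients lam_l1 / lam_group are assumptions):

import torch

def sparse_group_lasso(W, lam_l1=1e-3, lam_group=1e-2):
    # W is a first-layer weight matrix of shape (hidden, inputs).
    # The L1 term zeroes individual weights; the per-column L2 term
    # can switch off an entire input at once, which is what makes the
    # learned correction "minimally invasive".
    l1 = W.abs().sum()
    group = W.norm(dim=0).sum()  # one L2 norm per input column
    return lam_l1 * l1 + lam_group * group

# Usage: add the penalty to the task loss before backpropagation.
net = torch.nn.Sequential(torch.nn.Linear(4, 32), torch.nn.Tanh(),
                          torch.nn.Linear(32, 1))
x = torch.randn(8, 4)
loss = net(x).pow(2).mean() + sparse_group_lasso(net[0].weight)
loss.backward()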
Overall this is very exciting and (to my knowledge) novel work. It would be good to see more details on exactly what differentiable physics the authors use (the details of the model f). It would also be interesting to see experiments beyond the double-pendulum setting. The explanations for Figure 1 could be a little clearer, especially regarding the forward dynamics and the integrator.
Reviewer 2
This submission puts forward a very reasonable and well-motivated algorithm that combines an imperfect forward model with a non-invasive neural network, with the purpose of closing the 'sim-to-real' gap. The submission does not provide many details about the specific aspects of the method, beyond a mention of sparsity-promoting regularisation that localizes the neural network corrections to a few variables. Numerical experiments are limited to two toy 'proof-of-concept' setups: learning air resistance in golfing, and a double pendulum with unknown damping forces.
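To make concrete what 'non-invasive' means here, the general pattern is an analytic model plus an additive neural residual; the following sketch is this reviewer's illustration (the drag setting and all names are assumptions, not taken from the submission):

import torch

# Analytic part: projectile under gravity only (air resistance unmodelled).
# Neural part: a small residual network intended to absorb the drag the
# analytic model misses, leaving the physics itself untouched.
correction = torch.nn.Sequential(torch.nn.Linear(3, 16), torch.nn.Tanh(),
                                 torch.nn.Linear(16, 3))

def hybrid_acceleration(velocity):
    gravity = torch.tensor([0.0, 0.0, -9.81])
    return gravity + correction(velocity)

velocity = torch.tensor([30.0, 0.0, 20.0])  # e.g. a golf ball at launch
acceleration = hybrid_acceleration(velocity)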
While the technical details are possibly too scarce to judge the value of this contribution, this reviewer finds the perspective taken by the authors very reasonable, and believes that the workshop will benefit from their participation.
Reviewer 3
This paper presents a differentiable simulator built on local+global gradient solvers. The trick, termed neural augmentation, is to sparsify the learned networks' dependence so that they rely only on the input parameters that matter most for the fidelity of the output.
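As an aside, if the sparsification is indeed a group penalty on the first-layer columns, the 'inputs that matter' can be read off after training; a hedged sketch from this reviewer (the shapes and pruning threshold are assumptions):

import torch

net = torch.nn.Sequential(torch.nn.Linear(6, 32), torch.nn.Tanh(),
                          torch.nn.Linear(32, 1))
with torch.no_grad():
    # One L2 norm per input column; columns driven to ~0 by the group
    # penalty correspond to inputs the augmentation no longer uses.
    column_norms = net[0].weight.norm(dim=0)
    active = (column_norms > 1e-3).nonzero().flatten().tolist()
print("inputs the network still depends on:", active)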
It is not clear whether this sparsification happens during training on ground-truth data, nor what the computational burden is versus the gain in performance at inference time.
Preliminary experiments with a ball in different media and a pendulum show promising results compared to a vanilla NN. How do the other techniques in Table 1 fare in comparison? It would be interesting to see how the setup and implementation scale to more complex problems involving contact, articulated bodies, deformable objects, etc.