phiflow: A Differentiable PDE Solving Framework for Deep Learning via Physical Simulations

Reviewer 1: The short paper presents a differentiable 2D and 3D fluid simulator built on differentiable PDE solvers, implemented with automatic differentiation frameworks in Python. It uses the reverse-mode autodiff capabilities of existing deep learning frameworks to efficiently optimize a control force estimator. The contribution does not deal with interactions such as collisions or boundary conditions. Minor typo in line 20: "framework that providers" -> "framework that provides".
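
To make the setup concrete, here is a minimal sketch of the pattern described, i.e. optimizing a control force by backpropagating through an unrolled simulation (my own toy illustration in PyTorch, not phiflow's actual API):

```python
import torch

def step(u, force, dt=0.1):
    # Toy "physics" step: damped dynamics driven by a control force.
    return u + dt * (-0.5 * u + force)

u0 = torch.zeros(16)                          # initial state
target = torch.ones(16)                       # desired final state
force = torch.zeros(16, requires_grad=True)   # control force to optimize
opt = torch.optim.Adam([force], lr=0.1)

for _ in range(100):
    u = u0
    for _ in range(20):                       # unrolled simulation
        u = step(u, force)
    loss = ((u - target) ** 2).mean()
    opt.zero_grad()
    loss.backward()                           # reverse-mode AD through all steps
    opt.step()
```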

Reviewer 2: The major contribution of the paper is positioning Eulerian methods for PDEs within a deep learning framework. The resulting toolkit allows for modern ML methods to be applied to this large class of PDEs that includes Navier-Stokes. That’s really cool and definitely applicable to the workshop.

The paper dives into applying this framework with the objective of demonstrating that it is usable. However, it is not easy to parse what is going on given the lack of information about the experimental setup. The authors spent a lot of time setting up the problem in Section 3, but I don't understand what is actually happening in Figure 3, or what the test sets are. As far as I can tell, the only times that "shape" is mentioned are in the captions for Fig. 3 and Table 1, and it is not explained at all there.

Another part missing from the paper is any comparison with other frameworks. There are others (as a quick Google search shows). I certainly understand that it is difficult to fit that in within the page limits, but it seems like a really apt thing to include in a paper about a framework.

There are also clarity issues. In the third paragraph alone, there are "providers operators", "learn to", and an extra "," after "solvers". These aren't make-or-break, but they are jarring to the reader each time.

Reviewer 3: I think the paper is very well presented, and once released, the framework will be a valuable tool for the community. Just a few minor suggestions to aid clarity:

"This functionality for time advancement by itself is not well-suited to solve optimization problems, since gradients can only be approximated by finite differencing in these solvers." It would be good to clarify this statement. Initially I thought it referred to spatial gradients of u, but it seems to mean gradients with respect to some control parameter. Better to be explicit and give an example to aid comprehension.
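
To illustrate the distinction I have in mind (my own toy example, not code from the paper): finite differencing needs one extra solver run per control parameter, while reverse-mode AD recovers the full gradient from a single backward pass:

```python
import torch

def rollout_loss(theta):
    # Toy solver rollout whose outcome depends on control parameters theta.
    u = torch.zeros(4)
    for _ in range(10):
        u = u + 0.1 * (-u + theta)
    return (u ** 2).sum()

theta = torch.randn(4, requires_grad=True)

# Reverse-mode AD: one forward and one backward pass give the full gradient.
grad_ad, = torch.autograd.grad(rollout_loss(theta), theta)

# Finite differencing: one perturbed solve per parameter component.
eps = 1e-4
grad_fd = torch.zeros(4)
with torch.no_grad():
    base = rollout_loss(theta)
    for i in range(4):
        t = theta.clone()
        t[i] += eps
        grad_fd[i] = (rollout_loss(t) - base) / eps

print(grad_ad, grad_fd)  # the two estimates should agree to ~1e-3
```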

In Eq. 3, how is F(t) different from y(t) in Eq. 1? Just wondering what the motivation is for splitting this out.
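
To spell out my confusion (this rendering of the equations is my own assumption, not copied from the paper): if Eq. 1 already carries a generic time-dependent term y(t), and Eq. 3 adds a control force F(t) to the same physics operator, the two terms look interchangeable:

```latex
% My guess at the notation in question (an assumption, not from the paper):
\[
  \partial_t u = \mathcal{P}(u, \nabla u, \dots) + y(t)
  \qquad \text{vs.} \qquad
  \partial_t u = \mathcal{P}(u, \nabla u, \dots) + F(t)
\]
```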

Did I understand correctly that the framework uses standard reverse-mode differentiation, which requires storing the inputs to each time step during the forward pass? Or does it use the continuous adjoint method (made popular by Neural ODEs)? If it is the former, I wonder whether memory usage is a concern for large volumes. It might also be nice to cite Neural ODEs to draw the distinction.
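
For example (again my own sketch, not phiflow code), in PyTorch the usual mitigation for this memory cost is gradient checkpointing, which stores only segment-boundary states and recomputes the rest during the backward pass:

```python
import torch
from torch.utils.checkpoint import checkpoint

def step(u):
    return u + 0.1 * torch.tanh(u)  # stand-in for one solver time step

def run_segment(u, n_steps=10):
    for _ in range(n_steps):
        u = step(u)
    return u

u = torch.randn(1_000_000, requires_grad=True)

# Naive reverse mode: every intermediate state of the 50-step rollout is
# kept alive for the backward pass.
v = u
for _ in range(50):
    v = step(v)

# Checkpointed: only 5 segment-boundary states are stored; the states
# inside each segment are recomputed during backward (compute for memory).
w = u
for _ in range(5):
    w = checkpoint(run_segment, w, use_reentrant=False)

(v.sum() + w.sum()).backward()
```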