The cost and accuracy of simulating complex physical systems with the Finite Element Method (FEM) scale with the resolution of the underlying mesh. Adaptive meshes improve computational efficiency by refining resolution in critical regions, but typically require task-specific heuristics or cumbersome manual design by a human expert. We propose Adaptive Meshing By Expert Reconstruction (AMBER), a supervised learning approach to mesh adaptation. Starting from a coarse mesh, AMBER iteratively predicts the sizing field, i.e., a function mapping from the geometry to the local element size of the target mesh, and uses this prediction to produce a new intermediate mesh with an out-of-the-box mesh generator. This process is enabled by a hierarchical graph neural network and relies on data augmentation that automatically projects expert labels onto AMBER-generated data during training. We evaluate AMBER on 2D and 3D datasets, including classical physics problems, mechanical components, and real-world industrial designs with human expert meshes. AMBER generalizes to unseen geometries and consistently outperforms multiple recent baselines, including approaches based on Graph and Convolutional Neural Networks as well as Reinforcement Learning.
We approach adaptive mesh generation as a supervised learning problem and factorize the process into two stages. First, a learnable model predicts a spatially varying, scalar-valued sizing field. Second, a non-parametric mesh generator consumes this field to produce the final mesh. AMBER's central idea is to perform this prediction iteratively, allowing the model to gradually refine its own sampling resolution. Starting with a coarse, uniform mesh $M^0$, AMBER processes the corresponding graph to predict a vertex-level discrete sizing field. This collection of point-based predictions is then lifted to a continuous field by a linear interpolant, analogous to linear basis functions in the FEM. The resulting continuous field guides the generation of the next, more refined mesh, $M^{t+1}$.
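The lifting from vertex-level predictions to a field that can be queried anywhere in the geometry can be pictured with a short sketch. The snippet below is a minimal illustration, not the paper's implementation: it uses SciPy's `LinearNDInterpolator`, which builds its own Delaunay triangulation, as a stand-in for interpolation with the mesh's linear FEM basis functions, and the array shapes are assumptions.

```python
# Sketch: lifting vertex-level sizing predictions to a continuous sizing field.
# Assumption: `vertices` is an (N, d) array of mesh vertex coordinates and
# `vertex_sizing` an (N,) array of predicted local element sizes.
import numpy as np
from scipy.interpolate import LinearNDInterpolator


def continuous_sizing_field(vertices: np.ndarray, vertex_sizing: np.ndarray):
    """Return a callable evaluating the piecewise-linear sizing field at query points."""
    interpolant = LinearNDInterpolator(vertices, vertex_sizing)
    return lambda points: interpolant(points)  # points: (M, d) query coordinates
```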
A single AMBER refinement step can be expressed as
$$ M^{t+1} = g_\texttt{msh}\left(\Omega, \mathcal{I}_{M^t}\left(\sigma\left(\text{MPN}_{\theta}(G(M^t))\right)\right)\right)\text{.}$$
Here, the current mesh $M^t$ is first converted into a graph representation $G(M^t)$. The message passing network $\text{MPN}_{\theta}$ processes this graph to predict vertex-level scalars, which a transformation $\sigma$ maps to a discrete sizing field. This discrete field is then lifted to a continuous field by the linear interpolant $\mathcal{I}_{M^t}$. The mesh generator $g_\texttt{msh}$ uses this continuous field to produce the refined mesh for the next iteration, $M^{t+1}$. This iterative loop, together with the flexibility of the underlying message passing network, allows AMBER to progressively concentrate resolution in the most critical regions of an arbitrary input geometry.
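Read as code, one refinement step composes these four ingredients, and the outer loop simply repeats it. The sketch below is a hypothetical realization under several assumptions: `mesh_to_graph`, `mpn`, and `mesh_generator` are injected stand-ins for $G(\cdot)$, $\text{MPN}_{\theta}$, and $g_\texttt{msh}$; a softplus is used as one plausible choice for $\sigma$ to keep element sizes positive; the mesh object is assumed to expose a `vertices` array; and it reuses `continuous_sizing_field` from the sketch above as $\mathcal{I}_{M^t}$.

```python
import torch.nn.functional as F


def amber_refine(geometry, coarse_mesh, mpn, mesh_to_graph, mesh_generator, num_steps):
    """Hypothetical sketch of the AMBER loop; all callables are injected stand-ins.

    mpn:            message passing network MPN_theta returning one scalar per vertex
    mesh_to_graph:  converts a mesh M^t into its graph representation G(M^t)
    mesh_generator: wraps an out-of-the-box generator g_msh(geometry, sizing_field)
    """
    mesh = coarse_mesh                                   # M^0: coarse, uniform start
    for _ in range(num_steps):
        graph = mesh_to_graph(mesh)                      # G(M^t)
        raw = mpn(graph)                                 # vertex-level scalars
        sizing = F.softplus(raw)                         # sigma: assumed positivity map
        field = continuous_sizing_field(                 # I_{M^t}: linear interpolant
            mesh.vertices, sizing.detach().cpu().numpy()
        )
        mesh = mesh_generator(geometry, field)           # g_msh: builds M^{t+1}
    return mesh
```

The injected callables keep the sketch agnostic to the concrete graph library and mesh generator; only the composition order mirrors the equation above.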
@article{freymuth2025amber,
  title={AMBER: Adaptive Mesh Generation by Iterative Mesh Resolution Prediction},
  author={Freymuth, Niklas and W{\"u}rth, Tobias and Schreiber, Nicolas and Gyenes, Balazs and Boltres, Andreas and Mitsch, Johannes and Taranovic, Aleksandar and Hoang, Tai and Dahlinger, Philipp and Becker, Philipp and others},
  journal={arXiv preprint arXiv:2505.23663},
  year={2025}
}