Numerical optimization techniques have become increasingly important to industry over the last decades. A classical approach to reducing the computational cost is surrogate modelling: a Gaussian Process (GP) regressor, or another interpolator, is trained to approximate the performance landscape given a low-dimensional parametrization of the design space. This interpolator is then used as a proxy for the true objective to speed up the computation inside an optimization loop, an approach referred to as Kriging in the literature. However, such regressors sometimes offer poor accuracy and are tied to a particular parametrization, so pre-existing simulation data cannot easily be leveraged.
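For readers unfamiliar with the Kriging workflow, the following minimal sketch (not taken from the webinar material) illustrates a GP surrogate driving an optimization loop with scikit-learn. The `expensive_simulation` function is a hypothetical stand-in for a costly solver run on a two-parameter design.

```python
# Minimal sketch of Kriging-style surrogate optimization (illustrative only).
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import Matern

def expensive_simulation(x):
    # Hypothetical placeholder for a full CFD/FEA run on a 2-parameter design.
    return np.sin(3 * x[0]) + 0.5 * (x[1] - 0.3) ** 2

rng = np.random.default_rng(0)
X = rng.uniform(0, 1, size=(10, 2))            # initial design of experiments
y = np.array([expensive_simulation(x) for x in X])

for _ in range(20):                            # surrogate-assisted optimization loop
    gp = GaussianProcessRegressor(kernel=Matern(nu=2.5), normalize_y=True).fit(X, y)
    candidates = rng.uniform(0, 1, size=(2000, 2))
    mu, sigma = gp.predict(candidates, return_std=True)
    x_next = candidates[np.argmin(mu - 1.96 * sigma)]   # lower-confidence-bound acquisition
    y_next = expensive_simulation(x_next)                # one new expensive evaluation
    X, y = np.vstack([X, x_next]), np.append(y, y_next)

print("best design:", X[np.argmin(y)], "objective:", y.min())
```

Note how the surrogate is bound to the chosen two-dimensional parametrization: changing the design parameters invalidates the trained model.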
Like other learning-based methods, Geometric Convolutional Neural Networks can be used to build surrogate models of numerical solvers. However, they suffer none of the drawbacks of the surrogate methods mentioned above. They are agnostic to the shape parameters because they process the mesh representation of the design directly. Optimization or design parameters are therefore decoupled from the learning problem, and a single predictor can be trained on a large amount of data and used for many optimization tasks. Unlike Kriging methods, the engineer does not have to choose a specific parametrization and stick to it from the beginning to the end of the experiments. Furthermore, Geometric Convolutional Neural Networks can leverage the transfer-learning abilities of deep models to blend simulations from multiple sources and with multiple fidelities.
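As an illustration of what processing the mesh representation directly can look like, here is a minimal, hypothetical sketch of a graph convolutional surrogate in PyTorch that consumes raw vertex coordinates and edge connectivity. It is not the NCS/Expert architecture, and all class names are invented for this example.

```python
# Illustrative sketch of a mesh-based graph convolutional surrogate (not the NCS/Expert model).
# Vertex coordinates and mesh connectivity are the only inputs, so any parametrization
# that produces the same mesh is handled identically.
import torch
import torch.nn as nn

class MeshConv(nn.Module):
    """One message-passing step: average neighbour features, then apply a shared MLP."""
    def __init__(self, in_dim, out_dim):
        super().__init__()
        self.mlp = nn.Sequential(nn.Linear(2 * in_dim, out_dim), nn.ReLU())

    def forward(self, x, edges):
        # edges: (2, E) tensor of vertex index pairs from the mesh (both directions included)
        src, dst = edges
        agg = torch.zeros_like(x).index_add_(0, dst, x[src])
        deg = torch.zeros(x.size(0), 1).index_add_(0, dst, torch.ones(src.size(0), 1)).clamp(min=1)
        return self.mlp(torch.cat([x, agg / deg], dim=-1))

class MeshSurrogate(nn.Module):
    """Predicts a global scalar (e.g. drag or compliance) from raw mesh geometry."""
    def __init__(self, hidden=64):
        super().__init__()
        self.conv1 = MeshConv(3, hidden)      # input: xyz coordinates per vertex
        self.conv2 = MeshConv(hidden, hidden)
        self.head = nn.Linear(hidden, 1)

    def forward(self, verts, edges):
        h = self.conv2(self.conv1(verts, edges), edges)
        return self.head(h.mean(dim=0))       # pool over vertices -> one scalar per mesh
```

Because the network only sees vertices and edges, meshes produced by different parametrizations, CAD tools or teams can all feed the same model.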
Geometric Neural Networks can provide decisive accuracy gains over the previously mentioned methods. Firstly, they can learn from multiple data outputs and exploit the correlations between quantities to obtain the best result. Secondly, the convolutions used in Geometric Neural Networks are particularly well suited to the accurate prediction of complex local field quantities such as deformation, temperature or pressure.
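Continuing the hypothetical sketch above, a per-vertex, multi-output head shows how such a network can predict several local field quantities jointly, so that shared features capture the correlations between them.

```python
# Continuation of the hypothetical MeshConv sketch above: a per-vertex, multi-output head.
import torch.nn as nn

class FieldSurrogate(nn.Module):
    """Predicts local field quantities (e.g. pressure, temperature, displacement) per vertex."""
    def __init__(self, hidden=64, n_fields=3):
        super().__init__()
        self.conv1 = MeshConv(3, hidden)          # MeshConv as sketched above
        self.conv2 = MeshConv(hidden, hidden)
        self.head = nn.Linear(hidden, n_fields)   # one output channel per physical quantity

    def forward(self, verts, edges):
        h = self.conv2(self.conv1(verts, edges), edges)
        return self.head(h)                       # (num_vertices, n_fields) local field prediction
```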
Using the physical space as the reference for learning makes it possible to exploit any raw, unprocessed data source for training a model. One can therefore exploit an existing base of simulations, or reuse data across design iterations, across projects or across multiple teams. This is a key enabler for improving the performance of approximation models while generating an order of magnitude fewer simulations in the long term. This approach breaks down silos between projects and opens the way to a unified model that can be built once and used for all subsequent studies.
The NCS/Expert software will be used to support the practical examples. This software is dedicated to simulation engineers with a scientific core, data scientists, optimization engineers and methods engineers who want to fast-track their adoption of Deep Learning methods.
This webinar is available for free to the engineering analysis community, as part of NAFEMS' efforts to bring the community together online.