Global optimization for data assimilation in landslide tsunami models

https://doi.org/10.1016/j.jcp.2019.109069

Highlights

  • We develop a generic data assimilation framework for landslide tsunami models.

  • The data assimilation problem is posed in a global optimization framework.

  • Parallel and efficient global optimization algorithms are developed.

  • We assess the identifiability of model parameters for a landslide tsunami model.

  • The developed machinery is tested with real laboratory data.

Abstract

The goal of this article is to perform automatic data assimilation for a landslide tsunami model, given by the coupling between a non-hydrostatic multi-layer shallow-water model and a Savage-Hutter granular landslide model for submarine avalanches. The coupled model is discretized using a positivity-preserving second-order path-conservative finite volume scheme. The data assimilation problem is then posed in a global optimization framework, and multi-path parallel metaheuristic stochastic global optimization algorithms are developed. More precisely, a multi-path Simulated Annealing algorithm is compared with a multi-path hybrid global optimization algorithm based on coupling Simulated Annealing with gradient local searchers.

Introduction

The goal of this work is twofold. On the one hand, we assess the feasibility of performing data assimilation for models of tsunamis generated by submarine landslides when only information on the fluid free surface is available. In other words, we aim to check whether the data assimilation problem is well posed; this question is also known as the identifiability of the model parameters. On the other hand, if the former is possible, we also aim to develop a generic data assimilation framework, based on parallel and efficient global optimization algorithms, which can deal with landslide tsunami models.

Tsunami hazard modeling is of great importance for preventing and forecasting the consequences of such events, as they can cause a large number of casualties and huge financial losses. Tsunamis are generated mainly by earthquakes, storm surges or landslides (subaerial or submarine). The majority of them are caused by an offshore earthquake that pushes the ocean up or down. Nevertheless, tsunamis can also be generated in other ways. Underwater landslides, also known as submarine mass failures (SMF), which might accompany an earthquake or occur independently, are a classic example. Traditional warning systems completely miss tsunamis from these types of sources. Once we have a model for these phenomena, the correct calibration of its parameters is of key importance for the accurate simulation of the tsunami. The calibration could even be done in real time, just after the tsunami occurrence, by feeding the calibration machinery with the measurements taken by tide gauges in the ocean. After this calibration stage, the model can be used to predict the trajectory of the tsunami and the impact areas.

Several types of models can be found in the literature for modeling landslide tsunamis. Their development focuses on three aspects: a physical model for the landslide material, a hydrodynamic model that simulates the generation and propagation of the resulting waves, and the coupling between both. The hydrodynamics of landslide-induced tsunamis has been extensively studied using numerical models based on different levels of simplification.

The simplest models treat the landslide as a rigid solid with a fixed shape [1]. Another approach to simulating landslide-induced tsunamis is to consider both the landslide and the water as two different fluids [2], [3], [4], [5], [6], [7]. This approach allows the landslide to deform and couples the landslide with the fluid. Although these two-fluid models can be reasonably successful in predicting tsunami wave generation, they may fail to determine the landslide motion from initiation to deposition.

Initial steps towards the development of granular flow-based models for landslide behavior have usually been based on the depth-integrated models pioneered by Iverson [8], Savage and Hutter [9], and others. These models were initially developed for application to shallow subaerial debris flows. In [10] a two-layer Savage-Hutter type model was proposed to simulate submarine landslides, where the hydrostatic pressure assumption is used to derive the model.

In [11] a two-phase model for granular landslide motion and tsunami wave generation is developed. On the one hand, the granular phase is modeled by a standard Savage-Hutter type model governed by Coulomb friction. On the other hand, the tsunami wave generation is simulated using a three-dimensional non-hydrostatic wave model. The latter is capable of capturing wave dispersion efficiently using a small number of discretized vertical layers.

In this article we follow a similar approach. A two-phase model is also considered; however, the three-dimensional non-hydrostatic model is replaced by the multi-layer non-hydrostatic model recently proposed in [12]. This framework is briefly discussed in Section 2.

The previous model depends on a set of parameters that need to be calibrated in order to match real data. Note that having a good model and a strong, reliable numerical method for solving the problem is as important as performing a good parameter adjustment of the model according to physical measurements. In other words, even a good model together with a good numerical method can lead to totally wrong results with poorly calibrated parameters. Data assimilation is the tool for embedding reality in numerical simulation. Together with mathematical modeling and the development of proper numerical methods, it can be considered the third leg supporting the numerical simulation of processes in science and engineering. Data assimilation allows the model to learn and profit from real measured data. It is of key importance, among others, in atmospheric models for weather forecasting [13] and in models for geophysical fluids [14]. The pioneering work on the mathematical basis of data assimilation and control was carried out by Lions in [15].

Our work follows the classical approach to calibrating the parameters of a model: they are adjusted in such a way that the behavior of the model approximates, as closely and consistently as possible, the observed response of a hydrologic system over some historical period of time. Ultimately, the best parameters are those minimizing the simple least-squares objective function of the residuals, which accounts for the differences between the model-simulated output and the measured data. This is the right approach as long as the mathematical model is correct (realistic enough) and the physical data are measured without error. The uncertainty in the model prediction will then be due to the uncertainty in the parameter estimates.
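In symbols, and in our notation rather than the article's, if \(p\) denotes the parameter vector, \(y_i\) the measurements at times \(t_i\), and \(\mathcal{M}_i(p)\) the corresponding model-simulated output, the calibration problem reads

\[
\min_{p \in \Omega} J(p), \qquad J(p) = \sum_{i=1}^{N} \bigl\| y_i - \mathcal{M}_i(p) \bigr\|^2,
\]

where \(\Omega\) is the bounded set of admissible parameter values.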

There is a separate line of research [16] arguing that models have structural errors arising from the aggregation of spatially distributed real-world processes into mathematical models. Besides, due to this aggregation process, model parameters usually do not represent directly measurable entities. Therefore, they must be estimated using measurements of the system inputs and outputs, thus adding another source of errors. As a consequence, during the calibration process one should also take into account input, output and model structural errors. Several methods were first proposed to deal with model structural and data errors, such as the Bayesian approach, recursive parameter estimation algorithms, multiobjective calibration and stochastic input error models. Bayesian methods treat model parameters as probabilistic variables, in contrast with frequentist approaches, which consider model parameters fixed but unknown. Examples of Bayesian methods in hydrology are the Generalized Likelihood Uncertainty Estimation framework of Beven and Binley [17] and the Bayesian Recursive Estimation approach of Thiemann [18]. Recursive parameter estimation algorithms, like PIMLI and the recursive Shuffled Complex Evolution Metropolis algorithm (SCEM-UA) [19], [20], help to identify model structural flaws by reducing the temporal aggregation associated with traditional batch processing. Multiobjective frameworks use complementary criteria in the optimization procedure and analyze the trade-offs in fitting these criteria, in order to better understand the limitations of the models; MOCOM [21] and MOSCEM-UA [16] are examples of these algorithms. Finally, realistic stochastic input error models, like the Bayesian Total Error Analysis of Kavetski, only account for input errors.

The previously discussed methods do not manage to account for all the referred sources of uncertainty in hydrologic modeling, i.e. parameter, input, output and structural model errors. Sequential data assimilation (SDA) techniques, in turn, continuously update the parameters of the model when new measurements become available, in order to improve the model forecast and evaluate its accuracy. Kalman and extended Kalman filters are SDA approaches for linear and nonlinear models, respectively. Recently, Vrugt et al. [16] enriched the classical calibration approach with SDA techniques. The authors developed the so-called simultaneous parameter optimization and data assimilation (SODA) method. This strategy combines the search efficiency and the explorative capabilities of the Shuffled Complex Evolution Metropolis algorithm [20] with the power of the ensemble Kalman filter. The blending therefore accounts for parameter, input, output and model structural uncertainties in hydrologic modeling.

Another approach aiming to reduce the uncertainty of models and improve their prediction skills consists in identifying the sensitive parameters and then focusing on reducing the error of these delicate parameters [22]. For example, in [23], Yuan Shijin et al. studied the sensitivity of the wind stress, the viscosity coefficient and the lateral friction for the simulation of the double-gyre variation in the Regional Ocean Modeling System (ROMS) [24]. This model can be used to simulate global waters of any size, from basins to oceans. Their sensitivity study was carried out not only for single parameters, but also for combinations of multiple parameters. To this end, the authors solved the Conditional Nonlinear Optimal Perturbation related to Parameters (CNOP-P) problem [25] with the help of a modified Simulated Annealing (SA) algorithm. Such explorations of optimal parameters through sensitivity experiments are not feasible for models with a large number of parameters, as the number of necessary experiments increases exponentially with the number of model variables involved. A study of the parameter sensitivities for a simplified version of the model considered in this work was carried out by means of Multi-Level Monte Carlo in [26], the fluid model component being hydrostatic with just one fluid layer.

In a general setting, the data assimilation problem for a given model can be posed as an unconstrained global optimization problem in a bounded domain. Stochastic global metaheuristic algorithms are useful for solving this kind of problem. They have the advantage of needing little information about the function, and they also allow escaping from local optima. Their main disadvantage is a slow rate of convergence, which is typical of Monte Carlo algorithms. Classical well-known examples of these methods are Simulated Annealing [27], [28], Particle Swarm Optimization (PSO) [29], [30] and Differential Evolution (DE) [31]. Conversely, local optimization algorithms are deterministic and use more information about the function, thus being faster. Their main disadvantages are that, in general, they require some regularity of the cost function and, even more importantly, they do not guarantee reaching the global optimum, as they can get trapped in a local minimum. They can be gradient-free, for example Pattern Search (PS) [32] or Nelder-Mead (NM) [33]; or gradient-based, like steepest descent, Newton's method, Conjugate Gradient (CG), Nonlinear CG (NCG) [34], [35] or quasi-Newton methods, for example BFGS [36], [37], [38], [39], L-BFGS [40] or L-BFGS-B [41]. One idea to profit from the good properties of both stochastic (global) and deterministic (local) algorithms is to hybridize them, nesting the local search inside the global algorithm; one example is the Basin Hopping (BH) algorithm [42], [43], [44], illustrated in the sketch below. In this work, in order to calibrate the tsunami model, we follow this idea, using an in-house developed parallel multi-path version of the BH algorithm.
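To make the hybridization concrete, the following minimal sketch nests an L-BFGS-B local search inside a Metropolis acceptance loop with a geometric cooling schedule. This is a generic textbook-style BH, not the authors' parallel multi-path implementation; the function names, step size and cooling parameters are our own illustrative choices.

```python
import numpy as np
from scipy.optimize import minimize

def basin_hopping(cost, x0, bounds, n_iter=50, T0=1.0, cooling=0.9,
                  step=0.1, rng=None):
    """Minimal Basin Hopping: random jumps + L-BFGS-B local searches,
    with Metropolis acceptance applied to the locally minimized values."""
    rng = np.random.default_rng() if rng is None else rng
    lo, hi = np.asarray(bounds, dtype=float).T
    res = minimize(cost, x0, method="L-BFGS-B", bounds=bounds)
    x_cur, f_cur = res.x, res.fun
    x_best, f_best = x_cur, f_cur
    T = T0
    for _ in range(n_iter):
        # Perturb the current point, staying inside the bounded domain.
        x_trial = np.clip(
            x_cur + step * (hi - lo) * rng.standard_normal(x_cur.size), lo, hi)
        res = minimize(cost, x_trial, method="L-BFGS-B", bounds=bounds)
        # Metropolis test at temperature T (always accept downhill moves).
        if res.fun < f_cur or rng.random() < np.exp((f_cur - res.fun) / T):
            x_cur, f_cur = res.x, res.fun
            if f_cur < f_best:
                x_best, f_best = x_cur, f_cur
        T *= cooling  # geometric cooling schedule, as in SA
    return x_best, f_best
```

The key design choice, inherited from BH, is that the Metropolis test is applied to locally minimized values: the algorithm effectively walks on the basins of the cost landscape rather than on the raw function, which is what combines the global exploration of SA with the speed of the local optimizer.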

Data assimilation for shallow-water models has been addressed in many works. Usually, gradient-based local optimization methods, like the simplest steepest descent method, have been used to solve the resulting optimization problem. Due to the high computational cost, the gradient is computed by solving the adjoint problem, either by solving the adjoint system directly or by computing the adjoint using automatic differentiation (AD) [45], [46]. For example, in [47] the identification of Manning's roughness coefficients in shallow-water flows is performed. The authors compared three local optimization algorithms: a trust-region method and the L-BFGS and L-BFGS-B minimizers. The gradients are computed by solving the adjoint equations. In [48] the variational data assimilation method (4D-VAR) is presented as a tool to forecast floods in the case of purely hydrological flows; the underlying model is a modification of the shallow-water equations that includes a simplified sediment transport model. The steepest descent algorithm is then used to find the minimum, and the initial and boundary conditions are calibrated. Besides, the gradient of the cost function is analytically computed by solving the adjoint equations of the model. In [49] the authors developed a 4D-VAR method combining remote sensing data (spatially distributed water levels extracted from SAR satellite images) and a 2D shallow-water model. They identified time-independent parameters (e.g. Manning coefficients and initial conditions) and time-dependent ones (e.g. inflow). In [50] the authors applied the technology developed in [49] to derive accurate water levels from satellite images of a real event. In [51] the authors presented a method to use Lagrangian data along with classical Eulerian observations: a variational data assimilation process for river hydraulics using a 2D shallow-water model was considered, employing the trajectories of particles advected by the flow and extracted from video images. In all the cited works AD is applied to compute the gradients, and the data assimilation is performed using gradient-based local optimization algorithms.

Data assimilation for tsunami forecasting and early warning is a very challenging problem. On top of that, some data are simply unknown, for example the geometry of the landslide or the bottom deformation related to the earthquake. Real-time data are available in Tsunami Early Warning Systems (TEWS). An example is the tide-gauge network of the Deep-Ocean Assessment and Reporting of Tsunamis (DART) program from the National Data Buoy Center of the NOAA, together with similar systems from other countries [52]. Tsunami buoys are not only intended to signal the occurrence of a tsunami, but also to provide real-time data that can be assimilated into the tsunami warning system, thus improving the accuracy of the tsunami forecast. Real-time data assimilation in tsunami models is mostly done using optimal interpolation (OI) and tsunami Green functions, which are calculated in advance with linear tsunami propagation models [53], [54]. An alternative assimilation method is to use Kalman filter techniques [55], [56] for wave field reconstructions and forecasts [57], [58]. In [59] data assimilation is performed by applying an OI algorithm to both the real observations and virtual stations, in order to construct a complete wave front of the tsunami propagation. In [60] tsunami data assimilation of high-density offshore pressure gauges is performed. In [57] a Kalman filter technique is proposed and compared with OI. In [61] the assimilation of Lagrangian data into a primitive-equations circulation model of the ocean at basin scale is performed, using a four-dimensional variational technique and the adjoint method. In [62] data assimilation is retrospectively applied to the tsunami generated in 2011 off the Pacific coast by the Tohoku Earthquake (Mw 9.0). The data assimilation is done using a near-field tsunami forecasting algorithm with tsunami data recorded at various offshore tsunami stations: these measurements were taken between 5 and 10 minutes before the tsunami reached the coastal tide-gauge stations nearest to its origin.
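For reference, the OI schemes mentioned above update a forecast (background) state \(x_f\) with observations \(y\) through the standard analysis step (our notation; \(H\) is the observation operator, \(B\) and \(R\) the background and observation error covariances):

\[
x_a = x_f + K\,(y - H x_f), \qquad K = B H^{\mathsf{T}} \bigl( H B H^{\mathsf{T}} + R \bigr)^{-1}.
\]

The practical distinction from the Kalman filter is that OI keeps the background covariance \(B\) fixed, while the (extended) Kalman filter propagates it in time together with the state.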

Nevertheless, data assimilation in landslide-generated tsunamis is not so well developed. In this work we propose to use global optimization algorithms, which in general produce better results than local ones. In fact, the calibrated parameters often do not correspond to the global minimum of the involved cost function, because the considered local optimizer got stuck in a local minimum far from the global solution.

Our work lies in the same vein as the recent works [63], [64] of Sumata et al. In [64] the authors applied a global minimization algorithm in order to calibrate an Arctic sea ice-ocean model. Their approach consists in minimizing, with a genetic algorithm, a cost function corresponding to the model-observation misfit of three sea ice quantities: the sea ice concentration, the drift and the thickness. The similarities between that work and our approach are twofold. The first is the use of a bound-constrained global stochastic minimization algorithm. The second is the method of assessing the optimality of the achieved solution by using a pool of independent and randomly initialized minimization experiments. Nevertheless, the approach we propose differs from their strategy in several features. First of all, our goal is to calibrate a tsunami model involving fewer parameters than the 15 model variables of the sea ice-ocean model calibrated in their article. Besides, the different nature of their model compared with the tsunami model we consider enforces a different optimization window: a large one (two decades) in their work versus a small one (a few hours at most) in ours. On top of that, Sumata et al. performed the optimization of the cost function on a discrete search space, while our approach, allowing a continuous parameter domain, is richer.

Based on their previous work [63], Sumata et al. in [64] support, as our work does, the statement that gradient descent local minimization algorithms are likely to get stuck at local minima for these complicated cost functions. Therefore, the authors argue for the need to use stochastic global minimization algorithms. In fact, in [63] two types of optimization methods were applied to the calibration of a coupled ocean-sea ice model, and a comparison was made to assess the applicability and efficiency of both methods. One was a gradient descent method based on finite differences for computing the gradient, while the other was a genetic algorithm (GA). A parallel implementation was also carried out to speed up the optimization process; in the case of the gradient descent method, each component of the gradient was computed in parallel. They conclude precisely that the global optimization GA is preferable. In fact, it yields a better optimum, since the gradient-based local optimizers can get trapped in local optima. This can happen even if the gradient algorithm is launched several times in a multi-start fashion. This statement exactly coincides with our forthcoming conclusions in Sections 4.1 and 4.2 (see Fig. 4, Fig. 11).

In our paper, we overcome this disadvantage by proposing, for the first time in this field, the use of a parallel hybrid local-global minimization algorithm. More precisely, we develop a BH-like algorithm. BH consists in hybridizing SA with local gradient searchers, allowing us to benefit from both worlds: the global convergence properties of SA and the speed of local optimizers. We go even further by proposing a parallel version of the BH algorithm, sketched below. For the local searcher ingredient of BH, we use a bounded version of the L-BFGS algorithm employed in [63], namely the L-BFGS-B algorithm. This setting is able to increase the convergence speed and the success rate of BH. The multi-start technique performed in [63] can be seen as computing only one temperature stage of our multi-path BH algorithm. Another advantage of our algorithm is its embarrassingly parallel nature, as we can map each search path to a different parallel thread. In [63] each CPU thread computes one component of the gradient, while in our case each thread is responsible for one L-BFGS-B path. We show, using an analytical test, that this algorithm improves on the multi-start technique, as it is always able to find the global optimum. Besides, in our article we compare the efficiency of this multi-path BH with a multi-path SA. Additionally, we show that the use of gradient searches increases the convergence speed of a multi-path SA. As mentioned before, an SA algorithm was also used in [23] to effectively solve the CNOP-P problem of ROMS.
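The embarrassingly parallel structure can be sketched as follows: each worker runs an independent, differently seeded search path (reusing the `basin_hopping` routine sketched earlier), and the best result over all paths is kept. This is a schematic of the idea only, not the authors' synchronized multi-path implementation; all names here are our own.

```python
import numpy as np
from multiprocessing import Pool

def run_path(args):
    """One independent, differently seeded search path."""
    seed, cost, sample_x0, bounds = args
    rng = np.random.default_rng(seed)
    x0 = sample_x0(rng)  # random starting point inside the bounds
    return basin_hopping(cost, x0, bounds, rng=rng)

def multi_path_bh(cost, sample_x0, bounds, n_paths=8):
    # cost and sample_x0 must be module-level functions so they pickle
    # cleanly when shipped to the worker processes.
    with Pool(processes=n_paths) as pool:
        results = pool.map(run_path, [(s, cost, sample_x0, bounds)
                                      for s in range(n_paths)])
    return min(results, key=lambda r: r[1])  # best (x, f) over all paths
```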

The organization of this paper is as follows. In Section 2 we pose the data assimilation problem. In Section 2.1 we describe the cost function, which is given by a measure of the mismatch between the free surface laboratory data and the computed one, which depends on the parameters we want to assimilate. The optimization of this cost function is a hard problem. On the one hand, the evaluation of the cost function is computationally expensive, because it relies on the solution of a time-dependent system of partial differential equations. On the other hand, this data assimilation problem gives rise to a global optimization problem. In Section 2.2 we briefly describe the two-phase tsunami model and give some references for the numerical scheme we use. The physical parameters of the system that need to be calibrated are the ratio of densities between the grain and the fluid, the Coulomb friction angle, and the Manning friction coefficient. The evaluation of the cost function requires a numerical solution of this two-phase model, computed for a given set of parameters, as in the schematic below.
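In code, each cost evaluation therefore wraps one full run of the coupled solver. The sketch below assumes a hypothetical `run_model(ratio, angle, manning)` returning the simulated free surface at the gauge locations and times where measurements exist; the solver interface and gauge handling are placeholders, not the article's actual code.

```python
import numpy as np

def make_cost(measured, run_model):
    """Build the least-squares mismatch J(p) for the parameter vector
    p = (density ratio, Coulomb friction angle, Manning coefficient)."""
    def cost(p):
        ratio, angle, manning = p
        simulated = run_model(ratio, angle, manning)  # one full PDE solve
        return np.sum((simulated - measured) ** 2)
    return cost
```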

In Section 3 we recall the global optimization algorithms that we use: the multi-path Simulated Annealing and multi-path Basin Hopping algorithms. Both algorithms were proposed in [65], [66] for accelerating the convergence of SA and BH, respectively. They are based on performing synchronized parallel Metropolis searches, or parallel gradient-based local searches. The methods were assessed against hard benchmarks in the global optimization field. Besides, the algorithms have been successfully applied to the calibration of models in finance, even in cases where the costly Monte Carlo method is the only alternative for pricing the involved financial products [67]. In this work we apply these strategies to data assimilation in landslide tsunami modeling. One of the objectives of this article is to show that this type of algorithm can be successfully applied to parameter calibration in challenging geophysical problems.

In Section 4 some numerical experiments are presented. Section 4.1 is devoted to validating the methodology using synthetic tests: the model is run for fixed sets of parameters, and files with the free surface information are generated. This information is then treated as data coming from the laboratory, and the parameters that were used to generate it are recovered by global optimization in a large domain. After validating the methodology, in Section 4.2 the developed technique is applied to the data assimilation of real laboratory data.

Section snippets

Data assimilation problem

In general, the cost function measures the error, computed in some norm, between the real data and the solution produced by the numerical model. The model depends on a set of parameters. For example, in the case of a one-layer shallow-water model, these can be: one Manning coefficient for the whole domain, or several Manning coefficients, one per subdomain; the initial conditions; the boundary conditions; etc. These parameters can even be time dependent (boundary conditions, for

Multi-path BH global optimization

In this section we describe the optimization algorithms multi-path SA (SAM) and multi-path BH (BHM). They can be seen as a modification of the sequential BH algorithm, introducing a parallel multi-path searching technique.

The BH algorithm is a hybrid between the Metropolis algorithm and some kind of gradient local optimization method. Therefore the minimizer profits from the speed and accuracy of the local optimizer, while retaining the global convergence properties of the stochastic one. The

Numerical results

In this section we present two sets of numerical examples. The first one, in Section 4.1, is a pool of synthetic tests with known solutions. They are used to validate the proposed algorithms and methodology, to discuss the identifiability of the problem, and to show the convergence results and computational speedup. The second one, in Section 4.2, shows an application of the proposed methodology to the assimilation of real laboratory data.

The laboratory experiment that will be calibrated in

Conclusions

We have shown that hybrid multi-path global optimization algorithms are suitable for solving the data assimilation problem for SMF models. Besides, we have assessed the identifiability of the model parameters when only free surface data are available.

Additionally, we have discussed that using a local optimizer or a multi-start technique produces poor results: global optimization algorithms are more suitable for this kind of problem. We have also shown that the problem can be solved

Declaration of Competing Interest

The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.

Acknowledgements

The authors want to thank the designers of the experiment [83] for making their data publicly available. The authors also wish to thank the anonymous reviewers for their thorough review of the article and their constructive advice.

This research has been financially supported by the Spanish Government Ministerio de Economía y Competitividad through research projects MTM2016-76497-R and MTM2015-70490-C2-1-R.

References (87)

  • X. Lai et al., Assimilation of spatially distributed water levels into a shallow-water flood model. Part I: mathematical method and test case, J. Hydrol. (2009)
  • R. Hostache et al., Assimilation of spatially distributed water levels into a shallow-water flood model. Part II: use of a remote sensing image of Mosel River, J. Hydrol. (2010)
  • Y. Wang et al., Data assimilation with dispersive tsunami model: a test for the Nankai Trough, Earth Planets Space (2018)
  • J. Li et al., On numerical properties of the ensemble Kalman filter for data assimilation, Comput. Methods Appl. Mech. Eng. (2008)
  • A. Narayan et al., Sequential data assimilation with multiple models, J. Comput. Phys. (2012)
  • A.M. Ferreiro et al., SABR/LIBOR market models: pricing and calibration for some interest rate derivatives, Appl. Math. Comput. (2014)
  • C. Escalante et al., Non-hydrostatic pressure shallow flows: GPU implementation using finite-volume and finite-difference scheme, Appl. Math. Comput. (2018)
  • J. Adsuara et al., Scheduled relaxation Jacobi method: improvements and applications, J. Comput. Phys. (2016)
  • I.M. Navon, Practical and theoretical aspects of adjoint parameter estimation and identifiability in meteorology and oceanography, Dyn. Atmos. Ocean. (1998)
  • S.T. Grilli et al., Tsunami generation by submarine mass failure. I: Modeling, experimental validation, and sensitivity analyses, J. Waterw. Port Coast. (2005)
  • A. Skvortsov et al., Numerical simulation of the landslide-generated tsunami in Kitimat Arm, British Columbia, Canada, 27 April 1975, J. Geophys. Res., Earth (2007)
  • S.M. Abadie et al., Numerical modeling of tsunami waves generated by the flank collapse of the Cumbre Vieja Volcano (La Palma, Canary Islands): tsunami source and near field effects, J. Geophys. Res., Oceans (2012)
  • J. Horrillo et al., A simplified 3-D Navier-Stokes numerical model for landslide-tsunami: application to the Gulf of Mexico, J. Geophys. Res., Oceans (2013)
  • S. Assier Rzadkiewicz et al., Numerical simulation of submarine landslides and their hydraulic effects, J. Waterw. Port Coast. (1997)
  • R.M. Iverson, The physics of debris flows, Rev. Geophys. (1997)
  • S.B. Savage et al., The motion of a finite mass of granular material down a rough incline, J. Fluid Mech. (1989)
  • E. Fernández-Nieto et al., A hierarchy of dispersive layer-averaged approximations of Euler equations for free surface flows, Commun. Math. Sci. (2018)
  • E. Kalnay, Atmospheric Modeling, Data Assimilation and Predictability (2003)
  • J. Lions, Optimal Control of Systems Governed by Partial Differential Equations (1971)
  • J.A. Vrugt et al., Improved treatment of uncertainty in hydrologic modeling: combining the strengths of global optimization and data assimilation, Water Resour. Res. (2005)
  • K. Beven et al., The future of distributed models: model calibration and uncertainty prediction, Hydrol. Process. (1992)
  • M. Thiemann et al., Bayesian recursive parameter estimation for hydrologic models, Water Resour. Res. (2001)
  • J.A. Vrugt et al., Toward improved identifiability of hydrologic model parameters: the information content of experimental data, Water Resour. Res. (2002)
  • J.A. Vrugt et al., A Shuffled Complex Evolution Metropolis algorithm for optimization and uncertainty assessment of hydrologic model parameters, Water Resour. Res. (2003)
  • X. Yin et al., Evaluation of conditional non-linear optimal perturbation obtained by an ensemble-based approach using the Lorenz-63 model, Tellus, Ser. A Dyn. Meteorol. Oceanol. (2014)
  • S. Yuan et al., CNOP-P-based parameter sensitivity for double-gyre variation in ROMS with simulated annealing algorithm, J. Oceanol. Limnol. (2019)
  • M. Mu et al., An extension of conditional nonlinear optimal perturbation approach and its applications, Nonlinear Process. Geophys. (2010)
  • S. Kirkpatrick et al., Optimization by simulated annealing, Science (1983)
  • E. Aarts et al., Statistical cooling: a general approach to combinatorial optimization problems, Philips J. Res. (1985)
  • A.I.F. Vaz et al., A particle swarm pattern search method for bound constrained global optimization, Int. J. Comput. Math. (2007)
  • A.I.F. Vaz et al., PSwarm: a hybrid solver for linearly constrained global derivative-free optimization, Optim. Methods Softw. (2009)
  • R. Storn et al., Differential evolution - a simple and efficient heuristic for global optimization over continuous spaces, J. Glob. Optim. (1997)
  • R. Hooke et al., "Direct Search" solution of numerical and statistical problems, J. ACM (1961)