Differential Polynomials and Strongly Normal Extensions


Important limits. Numerical series; convergence and properties. Series of positive numbers. Comparison test, ratio test, nth root test. Absolutely and non-absolutely convergent series. Alternating series, Leibniz series. Estimations for series. Product of series. Theorems of Mertens and Abel. Real functions. Limits and continuity. Continuous functions on bounded closed intervals. Theorems of Bolzano and Weierstrass. Uniformly continuous functions, Heine's theorem. Properties of derivatives. Inverse functions. Higher derivatives. Mean value theorems.

Elementary functions. Polynomials, exponential, logarithm, trigonometric functions.


Function tests, sketching the graphs of functions. Taylor polynomial. Indefinite integral (antiderivatives). Techniques of integration. Integration by parts, substitutions, trigonometric integrals, partial fractions. Riemann integral. Properties of the integral, upper, lower, and oscillation sums. Connection with the derivative, Newton-Leibniz rule. Applications of the integral. Mean value theorem. Improper integral. Literature: — P. Lax, M. Terrell: Calculus with applications. — S. Lang: A first course in calculus. Fundamental Theorem of Arithmetic. Linear Diophantine equations, modular arithmetic, complete and reduced remainder systems, solution of linear congruences.

Complex numbers, algebraic and trigonometric forms, Binomial Theorem. Roots of unity, primitive roots of unity. Polynomials in one variable, operations, Horner scheme, rational root test, Fundamental Theorem of Algebra. Multivariate polynomials, complete and elementary symmetric polynomials, Viète formulas, roots of cubic polynomials. Systems of linear equations in two and three variables, Gaussian and Gauss-Jordan elimination. Linear combinations, linear independence, spanned subspace, basis, dimension. Coordinate systems, row space, column space, nullspace of a matrix.

Subspace of solutions, solutions in the row space. Matrix operations, inverse matrix, change-of-basis matrix. Operations with special matrices, PLU decomposition. Solution of systems of equations with the help of the PLU decomposition. Determinant as the volume of a parallelepiped. Basic properties, determinant of a matrix. Basic properties of matrix rank.

Linear maps and their matrices: the matrix of a projection onto a subspace. Similar matrices. Optimal solution of inconsistent systems of linear equations, normal equation, solution in the row space and its minimality. Moore-Penrose generalized inverse. Literature: — W. Sierpinski: Elementary theory of numbers, North Holland. — P. Halmos: Finite dimensional vector spaces, Springer. Euclidean plane: geometric transformations, synthetically. Euclidean space: geometric transformations (congruences), analytically. Homogeneous coordinates, uniform treatment of geometric transformations. Affinities, similarities.

Definition of polyhedra, Euler's theorem. Cauchy's rigidity theorem and other interesting polyhedra. Literature: — G. Jennings: Modern geometry with applications, Springer-Verlag. Lecturer: Ferenc Wettl. Description: The aim of the course is to study the basic notions of information technology.

Basics of hardware (CPU, memory, mass storage). Basics of operating systems: program, process, file, folder; file systems of Linux and Windows (bash, mc, Windows Total Commander). Graphical user interface, terminal user interface, the bash language. Internet, networks, IP addresses, wifi, Internet security.

Data on the machine: number representation, character encodings. Computer algebra, symbolic calculation (Sage, Mathematica). Programming paradigms in computer algebra languages. HTML, the markup-language concept, homepages. CSS, separation of content and presentation. Presentation of mathematics (beamer). Basic concepts of graphic file formats, graphics in mathematical text (TikZ). Sequences in normed vector spaces, convergence. Theorems of Bolzano and Weierstrass.

Multivariable calculus. Partial derivatives, directional derivatives. Differentiability and the chain rule. The differential of a function and its geometric meaning, linear approximation. Tangent plane and the gradient. Schwarz's theorem. Extrema of multivariable functions. Absolute minima and maxima. Maxima and minima with subsidiary conditions, Lagrange's method of undetermined multipliers. Inverse and implicit functions. Multiple integrals, fundamental rules. Jordan-measurable sets and their measure. Double integrals, polar transformation. Integrals over regions in three and more dimensions.

Transformations of multiple integrals. Vector fields and their analysis. Differential calculus of vector fields. Curves and surfaces in three dimensions. Line integrals of vector fields. The fundamental theorem of line integrals, independence of path. Potential function. Green's theorem. The curl and divergence of a vector field. Parametric surfaces and their areas.

Oriented surfaces. Surface integrals of vector fields. Stokes' theorem. The divergence theorem. Sequences and series of functions. Pointwise and uniform convergence. Weierstrass M-test. Consequences of uniform convergence. Power series. Taylor series, binomial series. Fourier series. Inner products on periodic functions. The Fourier and Plancherel theorems. Periodic convolution. Literature: — S. Lang: Undergraduate Analysis. — E. Stein, R. Shakarchi: Fourier Analysis, An Introduction. Orthogonal and orthonormal bases, the Gram-Schmidt orthogonalization process, orthogonal matrices, orthogonal transformations.

Householder reflections, Givens rotations. The existence of the QR decomposition and its calculation. Optimal solution of systems of linear equations with the help of the QR decomposition. Unitary, normal, and self-adjoint matrices and transformations. Eigenvalues, eigenvectors, and eigenspaces of matrices and linear transformations.

Characteristic equation, solution of the eigenvalue problem. Algebraic and geometric multiplicity, eigenvalues of special matrices, eigenvalues of similar matrices. Cayley-Hamilton Theorem. Diagonalizability of matrices and its equivalent formulations (real and complex cases), diagonalizability of special matrices, relation to the eigenvalues. Unitary and orthogonal diagonalizability. Schur decomposition, spectral decomposition. Bilinear functions, standard form, signature, Principal Axis Theorem.

Quadratic forms, definiteness. Classification of local extrema of a function, geometric applications, graphical presentation. Multilinear functions and maps, the total derivative as a multilinear map, the multivariate Taylor formula, the determinant as a multilinear function. Normal forms of matrices, existence, uniqueness, determination of the normal form. Generalized eigenvectors, Jordan chains, Jordan bases. Norms of real and complex vectors, matrix norms, basic properties, calculation of norms.

Matrix functions (convergence just mentioned and illustrated), the matrix exponential.

Vector spaces over arbitrary fields. Existence of a basis, dimension, infinite-dimensional vector spaces. Notion of a Euclidean space, properties, isomorphism between Euclidean spaces. Dual space. Applications of vector spaces over finite fields in coding theory, cryptography, and combinatorics. Literature: — C. Significant methods for enumeration, the pigeonhole principle, and the sieve. Basic graph-theoretic notions (vertex, edge, degree, isomorphism, path, cycle, connectivity). Kruskal's greedy algorithm. Characterization of bipartite graphs.

Network flows, the Ford-Fulkerson algorithm, the Edmonds-Karp algorithm. Menger's theorems, higher vertex and edge connectivity of graphs, Dirac's theorem. Euler's result on Eulerian tours and trails. Hamiltonian cycles and paths, necessary conditions for their existence. Literature: — R. Diestel: Graph Theory (available online). — J. A. Bondy, U. S. R. Murty: Graph Theory with Applications.

Literature: — M. Lecturer: Ferenc Wettl. Preliminary requirement: Informatics 1. Description: The course aims to teach programming through understanding the Python language. Introduction to programming and the Python language, data types, expressions, input, output. Control structures: if, while. Flowcharts, structograms, Jackson diagrams.

Complex control structures. Fundamental algorithms (sum, selection, searching for extrema, decision) with for loops. Further algorithms (sorting, splitting into two lists). Exception handling. The function call process, parameters, local variables, passing by value. OOP concepts: object, method. File management. Command-line arguments. Recursion (painting an area, building a labyrinth). Algorithm efficiency, quicksort, binary search versus linear search, O(n) notation.

Data structures: binary trees (algorithms), effectiveness: search trees (Morse tree). Mathematical libraries. Topology of metric spaces. Basic properties of metric and normed spaces. Metric subspaces and isometries. Sequences in metric spaces. Convergence of sequences in metric spaces. Separable metric spaces. Convergent sequences in normed spaces.

Products of metric and normed spaces. Compact sets, relatively compact sets, and their basic properties in metric spaces. Characterization of compact metric spaces. Cantor's intersection theorem. Bolzano-Weierstrass theorem. Products of compact metric spaces. Equivalence of norms in finite-dimensional vector spaces.

Limit of functions in metric spaces. Definition of continuity in terms of epsilon-delta and limits, and their equivalence. Topological characterization of continuity. Uniform continuity. Basic properties of continuous functions on compact spaces. Weierstrass's maximum-minimum principle. Characterization of compact sets in finite-dimensional normed spaces. Fundamental theorem of algebra.

Approximation by Bernstein polynomials. Complete metric spaces. Contractions and Banach fixed point theorem in metric spaces. Totally bounded metric spaces and the Hausdorff characterisation theorem. Completeness of finite-dimensional normed spaces. Connected and path-connected metric spaces. Nowhere dense sets and Baire's category theorem. Banach spaces. Characterization of Banach spaces with absolutely convergent series.

Linear and multi-linear maps between normed spaces and their continuity and norm. The normed space of linear and multi-linear maps between normed spaces.



As such, the construction of the polynomial basis and the computation of the expansion coefficients are usually carried out numerically in practice, which introduces additional sources of error. The choice of the truncation set also plays an important role in PCE performance because its cardinality P directly controls the number of coefficients that must be estimated. Larger P values require more computational effort and are more susceptible to numerical sources of error.

An overview of state-of-the-art methods for addressing these challenges is provided next. The complexity of determining the polynomials depends fully on the structure of the PDF f_X. Whenever the uncertain parameters are statistically independent, (5) reduces to the tensor product of M univariate polynomials that are orthonormal with respect to each marginal density. These polynomials have been analytically derived for many common PDFs [17], and can be found numerically for generic PDFs using algorithms based on the three-term recurrence relationship for orthogonal polynomials [33].
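Since the parameter priors here are uniform, the tensor-product construction reduces to products of normalized Legendre polynomials. The following is a minimal illustrative sketch in Python (the paper's own scripts are Matlab), assuming inputs scaled to U(-1, 1):

```python
import numpy as np
from numpy.polynomial import legendre

def eval_orthonormal_legendre(n, x):
    """Degree-n Legendre polynomial, normalized so that it is orthonormal
    w.r.t. the U(-1, 1) density: E[phi_n(X)^2] = 1."""
    c = np.zeros(n + 1)
    c[n] = 1.0
    return np.sqrt(2 * n + 1) * legendre.legval(x, c)

def eval_tensor_basis(alpha, x):
    """Tensor-product basis function for multi-index alpha, evaluated at
    samples x of shape (n_samples, M); valid for independent U(-1,1) inputs."""
    out = np.ones(x.shape[0])
    for j, deg in enumerate(alpha):
        out *= eval_orthonormal_legendre(deg, x[:, j])
    return out

# Monte Carlo check of orthonormality
rng = np.random.default_rng(0)
x = rng.uniform(-1, 1, size=(200_000, 2))
print(np.mean(eval_tensor_basis((2, 1), x) ** 2))                            # ~1.0
print(np.mean(eval_tensor_basis((2, 1), x) * eval_tensor_basis((1, 1), x)))  # ~0.0
```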

There are two main approaches for handling the more general case in which X has statistically dependent or correlated elements. The first approach involves transforming the generic random vector X into a standard random vector Z for which it is simpler to build the polynomial basis functions [34]. Any isoprobabilistic transformation that preserves the PDFs of these random vectors can be utilized, though the most commonly used is the Rosenblatt transformation [35]. The second approach involves applying a more sophisticated numerical procedure that is able to impose the orthonormality conditions in (5) simultaneously in M dimensions.
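For intuition, here is a minimal sketch of the Rosenblatt idea for a correlated bivariate Gaussian, where the conditional CDFs are available in closed form; the general construction chains conditional CDFs in exactly the same way (function name hypothetical):

```python
import numpy as np
from scipy.stats import norm

def rosenblatt_bivariate_gaussian(x, rho):
    """Map X ~ N(0, [[1, rho], [rho, 1]]) to independent standard normals:
    z1 = Phi^{-1}(F1(x1)), z2 = Phi^{-1}(F_{2|1}(x2 | x1))."""
    x1, x2 = x[:, 0], x[:, 1]
    z1 = x1                                 # F1 is already standard normal
    cond_mean = rho * x1                    # E[X2 | X1 = x1]
    cond_std = np.sqrt(1.0 - rho ** 2)      # Std[X2 | X1 = x1]
    z2 = norm.ppf(norm.cdf(x2, loc=cond_mean, scale=cond_std))
    return np.column_stack([z1, z2])
```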

This includes the Gram-Schmidt process [36] as well as the modified Cholesky decomposition of the Gram moment matrix [37, 38]. We denote the approximate PCE with numerically estimated coefficients as in (8). A variety of methods have been proposed for estimating the coefficients, which can be broadly categorized as intrusive or non-intrusive. Here, we focus exclusively on non-intrusive methods, which build the expansion from a set of parameter samples known as the experimental design (ED). These samples can be chosen in various ways, including Monte Carlo sampling, quasi-random samples derived from Sobol or Halton sequences, or sparse grids, to name a few [40].
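A minimal sketch of the moment-matrix route under an empirical (sample-based) measure, illustrating the idea behind [37, 38] rather than reproducing their exact algorithm: factorize the Gram matrix of a raw basis and rescale.

```python
import numpy as np

def orthonormalize_via_gram(V):
    """Given values V (N x P) of a raw basis (e.g., monomials) at N samples
    of X, return values of a basis orthonormal w.r.t. the empirical measure,
    via a Cholesky factorization of the Gram moment matrix."""
    G = V.T @ V / V.shape[0]           # empirical Gram (moment) matrix
    L = np.linalg.cholesky(G)          # G = L L^T
    return np.linalg.solve(L, V.T).T   # Psi = V L^{-T}, so Psi^T Psi / N = I

rng = np.random.default_rng(1)
x = rng.beta(2.0, 5.0, size=10_000)                # a generic, non-standard PDF
V = np.column_stack([x ** k for k in range(4)])    # monomials 1, x, x^2, x^3
Psi = orthonormalize_via_gram(V)
print(np.round(Psi.T @ Psi / len(x), 3))           # ~ identity matrix
```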

The computational model is then evaluated at every point in the ED. We will focus on regression methods due to their flexibility when it comes to enforcing sparsity. In the regression approach, the coefficients are defined as those that minimize the least-squares residual of the polynomial approximation over the ED, as in (9), where the model matrix contains the values of all polynomial basis functions evaluated at all ED points. Since every sample requires an expensive DFBA simulation here, the truncation scheme plays a central role in reducing the complexity of surrogate model construction.
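In matrix form, the regression step in (9) is a standard linear least-squares problem; a minimal sketch:

```python
import numpy as np

def fit_pce_coeffs(A, y):
    """Least-squares PCE coefficients as in (9): c = argmin ||A c - y||^2,
    where A[i, j] = Psi_j(x_i) over the ED and y stacks the model outputs."""
    c, *_ = np.linalg.lstsq(A, y, rcond=None)
    return c
```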

The total degree method is the most commonly used approach for specifying the truncation set; it keeps all polynomials up to a specified total order p in the series. Due to the sharp increase in P as the polynomial order increases, the total degree truncation scheme can quickly lead to a prohibitive number of model evaluations, especially in high dimensions. This issue is often termed the curse of dimensionality, which is known to considerably limit standard PCE methods. We look to take advantage of two approaches for overcoming the curse-of-dimensionality limitation.
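The total-degree set, whose cardinality is P = (M+p)!/(M!p!), can be enumerated directly; the hyperbolic q-norm rule (the q referenced in the next paragraph) is a common sparser alternative. A minimal sketch assuming these standard definitions:

```python
import itertools
from math import comb

def total_degree_indices(M, p):
    """All multi-indices alpha in N^M with alpha_1 + ... + alpha_M <= p."""
    return [a for a in itertools.product(range(p + 1), repeat=M) if sum(a) <= p]

def hyperbolic_indices(M, p, q):
    """Hyperbolic truncation: keep alpha with ||alpha||_q <= p (0 < q <= 1)."""
    return [a for a in itertools.product(range(p + 1), repeat=M)
            if sum(ai ** q for ai in a) ** (1.0 / q) <= p + 1e-12]

M, p = 6, 3
print(len(total_degree_indices(M, p)), comb(M + p, p))   # both 84
print(len(hyperbolic_indices(M, p, q=0.5)))              # noticeably fewer
```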

Lower values of q limit the number of high-order interaction terms considered, which directly leads to sparser solutions. In this work, we use the hybrid least angle regression (LAR) method to solve the regularized version of (9). LAR is an efficient procedure for variable selection, which aims to select the predictors (i.e., the polynomial basis functions) that are most correlated with the model response. Hybrid LAR is a variant of the original LAR that uses a modified cross-validation scheme to estimate the approximation error [19]. This modification relies on only a single call to the LAR procedure, which provides significant savings in computational cost when compared to the original method.
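A minimal sketch of the hybrid idea, using scikit-learn's Lars for the selection step followed by an ordinary least-squares refit on the selected basis; the basis-adaptive hybrid LAR of [19] additionally uses a leave-one-out error estimate, which is omitted here:

```python
import numpy as np
from sklearn.linear_model import Lars

def hybrid_lar(A, y, n_terms):
    """Use LAR only to *select* a sparse set of basis functions, then
    re-estimate the coefficients of the selected terms by least squares."""
    sel = Lars(n_nonzero_coefs=n_terms, fit_intercept=False).fit(A, y)
    active = np.flatnonzero(sel.coef_)
    coef = np.zeros(A.shape[1])
    coef[active], *_ = np.linalg.lstsq(A[:, active], y, rcond=None)
    return coef, active
```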

Provided a sensible sampling strategy has been chosen, the remaining parameters that must be selected are those related to truncation (p and q) and the ED size N.

As discussed in [19], a basis-adaptive strategy can help overcome potential limitations of an a priori fixed truncation set by letting the maximum degree be driven directly by the data. These steps are repeated for incremented values of p and q, and the algorithm returns the PCE model with the lowest error. Early stopping criteria can easily be introduced to avoid an excessive number of iterations. However, when dealing with computationally expensive models, the number of model evaluations N dominates the cost of constructing the surrogate model. This sequential ED strategy can be summarized as follows:

1. Initialize the current ED with a relatively small number of samples N_init.
2. Train the surrogate on the current ED and estimate its approximation error.
3. If the error is below the target level, stop. Otherwise, enrich the current ED with N_add more samples and return to Step 2.

Note that any method can be used in the training step of this algorithm. Thus, in the proposed nsPCE method, the desired accuracy level is the key parameter that must be chosen by the user. The PCE method is guaranteed to converge as both the number of model evaluations N and the number of terms in the expansion P increase; however, the rate of convergence can be very slow whenever the model response exhibits singularities [24].
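A minimal sketch of the sequential ED loop summarized above, with hypothetical function names; any surrogate trainer and error estimator can be plugged into Step 2:

```python
def sequential_ed_pce(sample, run_model, fit, estimate_error,
                      tol, n_init=50, n_add=25, n_max=1000):
    """Enrich the experimental design until the estimated error meets tol."""
    X = list(sample(n_init))
    Y = [run_model(x) for x in X]            # expensive DFBA simulations
    while True:
        surrogate = fit(X, Y)                # e.g., basis-adaptive hybrid LAR
        if estimate_error(surrogate, X, Y) <= tol or len(X) >= n_max:
            return surrogate
        X_new = list(sample(n_add))          # enrich the ED and repeat
        X += X_new
        Y += [run_model(x) for x in X_new]
```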

This is a primary challenge in DFBA models, since they can lose differentiability when a switch in the active set of the FBA problem (2) occurs. This implies that the same strategies discussed above for building the polynomials, estimating the coefficients using regularized least squares, truncating the expansion, and sequentially populating the ED can be utilized locally within each element.

The remaining unanswered question is how to design the elements so as to limit the growth in the number of model evaluations, since N will scale approximately linearly with the number of elements N_e. The best decomposition should ensure that the model response behaves smoothly in every element. The proposed nsPCE method decomposes the support into two elements, S_1 and S_2, that denote, respectively, the set of parameters for which the singularity has not and has occurred. This idea is best illustrated through a simple example. At any given time of interest t, the two elements can be defined in terms of t_s(x) as in (15). Let us briefly analyze the behavior of these elements.

The elements are continuous functions of time, meaning that every time of interest t requires a different decomposition.
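Given a surrogate for the singularity time function, assigning samples to the two elements at a time of interest t is a one-line test; a minimal sketch (names hypothetical):

```python
import numpy as np

def assign_elements(X, t, t_s_hat):
    """Partition parameter samples X (array of shape (N, M)) into
    S1 = {x : t < t_s(x)} (switch not yet occurred) and
    S2 = {x : t >= t_s(x)} (switch has occurred)."""
    ts = np.asarray([t_s_hat(x) for x in X])
    in_s1 = t < ts
    return X[in_s1], X[~in_s1]
```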


Whenever t is outside of the support of t_s(X), one of these sets is empty and we revert to traditional PCE over the full support S. When multiple non-overlapping singularities are present, we must simply find the support in which t lies and define the two elements using the corresponding boundary function. The case of overlapping supports is more challenging, since more elements would need to be created based on the intersection of S_1 and S_2 for all active singularities. For the simple scalar example in (14), we can analytically derive the boundary function; however, this is not generally possible in DFBA models.

Based on the observation that the singularity boundary depends smoothly on the parameters, we instead propose to construct a sparse PCE model to approximate the boundary in multiple dimensions. Notice that the full DFBA model must be integrated when constructing this boundary surrogate. Instead of discarding this information, it can be reused by storing the list of state and time points generated when integrating the DFBA model and then interpolating these points when calculating the model response function.
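A minimal sketch of the reuse step, assuming the stored time grid is increasing (as np.interp requires):

```python
import numpy as np

def response_from_stored_trajectory(t_query, t_grid, state_grid):
    """Interpolate the saved (time, state) points of an earlier DFBA
    integration instead of re-integrating the model."""
    return np.interp(t_query, t_grid, state_grid)
```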

Thus, we can use this approach to initialize the ED, the model response data, and the singularity time data. Using the boundary surrogate along with the set definitions in (15), we can easily partition these data into the required local EDs. The sequential ED strategy is then applied in each element to ensure that the target error is achieved. A flowchart that summarizes the main steps of the nsPCE method is shown in Fig 1.

By evaluating the nsPCE surrogate in (16), which is much cheaper to evaluate than the full model, on a collection of Monte Carlo samples of the parameters, we can directly approximate statistical properties of Y, including moments, parametric sensitivities, or even its full distribution. The model response function can be freely chosen by the user. The singularity time function should be specified implicitly as a function of the DFBA model states.
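A minimal sketch of this forward-propagation step, with hypothetical function names:

```python
import numpy as np

def surrogate_statistics(surrogate, sample_prior, n_mc=100_000):
    """Approximate moments and quantiles of Y by evaluating the cheap
    nsPCE surrogate on Monte Carlo samples of the parameters."""
    X = sample_prior(n_mc)
    Y = np.asarray([surrogate(x) for x in X])
    return {"mean": Y.mean(), "var": Y.var(ddof=1),
            "q05": np.quantile(Y, 0.05), "q95": np.quantile(Y, 0.95)}
```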

This function can be identified by simulating the DFBA model with nominal parameters and locating at which time points a switch in the active set of the FBA solution occurs. The PCE coefficients are fit using the basis-adaptive version of the hybrid LAR method, while the ED is sequentially enriched to ensure that the target accuracy level is achieved. The complete set of Matlab scripts that implement the nsPCE method is available at [ 29 ]. It is important to note that the scripts require the installation of two additional packages that integrate the DFBA model and construct sparse PCE models.

We opt for DFBAlab in this work due to certain numerical advantages that it exhibits over the available alternatives (see [27, 31] for more details). Hence, some modifications to the source code may be needed to perform the same operations with other toolboxes. We present two separate case studies in this section. The first case study explores Bayesian estimation of six parameters related to the substrate uptake kinetics in a computationally expensive DFBA model of E. coli. The goal of the first case study is to demonstrate the advantages of the proposed nsPCE method over alternatives, as well as its application to a realistic problem that has been previously studied in the literature.

The second case study focuses on maximum a posteriori estimation in a synthetic DFBA problem with a relatively large number of parameters. The goal of the second case study is to provide preliminary evidence of the scalability of nsPCE, as well as of the fact that the method is applicable to a wide variety of UQ applications. Here, we focus on the initial phase of batch operation of an E. coli culture. No ethanol production is observed under aerobic conditions. This case study is commonly used as a benchmark for comparing DFBA solvers.

The dynamic mass balance equations of the form (1) for the extracellular environment can be summarized as in (18), where b(t), g(t), and z(t) denote the biomass, glucose, and xylose concentrations at time t, respectively. The uptake kinetics for glucose, xylose, and oxygen are given by Michaelis-Menten expressions (19), where the parameters u_g,max, u_z,max, u_o,max, K_g, K_z, K_o, and K_ig correspond to the maximum substrate uptake rates, saturation constants, and inhibition constant. It is assumed that the reactor oxygen concentration, o(t), is controlled and is therefore constant.
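Equation (19) itself did not survive extraction. In comparable diauxic E. coli DFBA models, the uptake kinetics take the standard Michaelis-Menten form below, with glucose inhibiting xylose uptake through K_ig; this is a plausible reconstruction, not necessarily the paper's exact expression:

\[
u_g = u_{g,\max}\,\frac{g}{K_g + g}, \qquad
u_z = u_{z,\max}\,\frac{z}{K_z + z}\cdot\frac{1}{1 + g/K_{ig}}, \qquad
u_o = u_{o,\max}\,\frac{o}{K_o + o}.
\]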


The chosen metabolic network reconstruction was iJR904 [28], which contains reactions and metabolites. The cells are assumed to maximize growth, implying (2) is an LP of the form (20), where c is a vector of weights that represent the contribution of each flux to biomass formation, and the decision variables include the exchange fluxes for glucose, xylose, and oxygen. Thus, the metabolic network interacts with the extracellular environment through the exchange fluxes in (20). The initial conditions of the batch are assumed to be fixed.
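A minimal sketch of an inner FBA problem of the form (20), with hypothetical inputs: S is the stoichiometric matrix, c the growth-objective weights, and the bounds on the glucose, xylose, and oxygen exchange fluxes come from the kinetics in (19):

```python
import numpy as np
from scipy.optimize import linprog

def solve_fba(S, c, lb, ub):
    """Maximize growth c^T v subject to steady-state mass balance S v = 0
    and flux bounds lb <= v <= ub (an LP; linprog minimizes, so negate c)."""
    res = linprog(-c, A_eq=S, b_eq=np.zeros(S.shape[0]),
                  bounds=list(zip(lb, ub)), method="highs")
    return res.x, -res.fun   # optimal flux vector and growth rate
```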

However, the parameters in the substrate uptake rates (19) must be fit to experimental data, since they cannot be easily predicted from first principles. This problem of identifying the model parameters was partially tackled in [8], where most of the parameters were fixed according to estimates provided in the literature, while u_z,max and K_ig were adjusted by trial and error to match transient measurements of biomass, glucose, and xylose. The reported parameter estimates are given in S1 Table.

Since o(t) is fixed, u_o,max and K_o can be lumped into a single parameter u_o. We selected this range to reflect a reasonable level of confidence in the reported literature values. In the following, we demonstrate how the proposed nsPCE surrogate modeling method can facilitate UQ tasks that are otherwise computationally intractable with the full DFBA model. Before selecting the element decomposition, we must first simulate the DFBA model to locate any significant singularities. The extracellular glucose, xylose, and biomass concentration profiles are plotted in Fig 2 for one hundred randomly sampled parameter values.

For a given realization of the parameters, the full simulation requires approximately 1. The genome-scale model is integrated from 0 to 8. The consumption of xylose only begins after glucose is fully exhausted, the timing of which is a strong function of the parameters. At the start of the batch, glucose is consumed preferentially over xylose. Once glucose has been depleted, the LP solution switches and xylose becomes the main carbon source. The final batch time is then specified as the time at which both glucose and xylose have been fully depleted, at which point the LP becomes infeasible and the solution ceases to exist.

Although physically the cells would begin to die in this situation, DFBA models cannot directly predict the cell death phase, and thus we assume the biomass remains constant for simplicity. The times to consumption of glucose, t_g, and xylose, t_z, represent the two singularities in this problem, and they clearly depend on the values of the model parameters. Since the singularity time functions cannot be derived analytically, we look to construct PCE approximations for both t_g and t_z. The experimental designs (EDs) are generated using Monte Carlo (MC) sampling with a fixed random seed to ensure repeatable results.

Fig 3a and 3b show the RMSE as a function of the number of model evaluations used to fit the surrogate models for t_g and t_z, respectively. The sparse PCE surrogate models for t_g and t_z are used in the nsPCE method to build surrogates for the extracellular concentrations. Additionally, these surrogate models contain useful information on which parameters influence the consumption of the different substrates.
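For an orthonormal PCE, the output variance decomposes over the squared coefficients, so global (Sobol) sensitivity indices such as those discussed next come almost for free; a minimal sketch for the first-order indices, assuming an orthonormal basis with its list of multi-indices:

```python
import numpy as np

def first_order_sobol(coeffs, alphas):
    """First-order Sobol indices from an orthonormal PCE: the total variance
    is the sum of squared coefficients of all nonconstant terms, and S_i
    sums only the terms that involve variable i alone."""
    coeffs, alphas = np.asarray(coeffs), np.asarray(alphas)
    nonconst = alphas.sum(axis=1) > 0
    total_var = np.sum(coeffs[nonconst] ** 2)
    S = np.zeros(alphas.shape[1])
    for i in range(alphas.shape[1]):
        only_i = (alphas[:, i] > 0) & \
                 (np.delete(alphas, i, axis=1).sum(axis=1) == 0)
        S[i] = np.sum(coeffs[only_i] ** 2) / total_var
    return S
```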

It is interesting to note that u_g,max and u_o mainly contribute to the variance of t_g(X), while u_g,max, u_z,max, and u_o are the significant contributors to the variance of t_z(X). (Fig 4: estimated global sensitivity indices of (a) t_g and (b) t_z with respect to the uncertain parameters.) From the estimated PDFs, we find that t_g(X) ranges from approximately 6. However, for times outside of this window, we can exclusively define the elements of the parameter space in terms of t_g for times before 7. Plots of these two regions at two representative times are shown in Fig 5.

The blue region represents S_1, while the red region represents S_2. The SVM model is unable to capture the significant nonlinear behavior of the boundary as it evolves over time. Thus, SVM results in relatively large misclassification errors due to the limited training data. The sparse PCE model, however, is able to accurately represent the t_g function over the full support (see the parity plot in Fig 5d), which leads to a much more accurate representation of these two elements using limited data.

The decomposition of the parameter support into two non-overlapping elements is shown in Fig 5. Ideally, these additional model evaluations could be avoided by directly estimating the RMSE from the ED, either empirically or using cross-validation techniques. The empirical estimate of the RMSE is based on sample-based approximations to the integral expressions for the mean and variance. Cross-validation obtains a more robust RMSE estimate by splitting the ED into various training and validation sets, fitting a different model on each training set, and averaging the prediction errors of the models.
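For ordinary least squares, the leave-one-out error has a closed form that avoids refitting the model N times; this standard shortcut (used by LAR-based PCE implementations) is sketched below, assuming coef is the least-squares solution:

```python
import numpy as np

def loo_error(A, y, coef):
    """Leave-one-out mean-squared error without refitting:
    e_i = r_i / (1 - h_i), with h_i the diagonal of the hat matrix
    A (A^T A)^{-1} A^T and r the ordinary residuals."""
    r = y - A @ coef
    h = np.diag(A @ np.linalg.solve(A.T @ A, A.T))
    return np.mean((r / (1.0 - h)) ** 2)
```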

We have verified that the PCE surrogate models are able to accurately represent the singularity manifold that leads to non-smooth behavior in the states of the DFBA model. Thus, they can be used to build nsPCE surrogates for the extracellular concentrations based on the algorithm summarized in Fig 1. We choose three quantities of interest for illustrative purposes: glucose at time 7. In global PCE, a single surrogate model is constructed over the full parameter support, while nsPCE systematically breaks down the support into two disjoint elements using the singularity time function as a dividing boundary.

In addition, the EDs in both approaches are sequentially enriched using MC sampling with a fixed random seed. To simplify the construction of the polynomial basis functions when training the nsPCE surrogate models, the elements S_1 and S_2 were outer-bounded with hyper-rectangles. However, only parameter values that explicitly fall within these sets are incorporated into the local ED.

This simple approach for dealing with elements of any shape is currently used in the provided scripts [29], but other ways of handling generic elements can also be explored. The convergence properties of the nsPCE surrogate models for the three quantities of interest are compared to those of global PCE in Fig 6. The nsPCE surrogates achieve significantly lower RMSE than the global PCE surrogates in virtually all cases considered, while requiring many fewer samples to converge to the target error level.

In addition, global PCE saturates at the maximum number of ED samples for all three quantities of interest. This implies that global PCE is unable to achieve the desired accuracy levels, whereas nsPCE only saturates for the lowest target error of xylose. This behavior is expected since the convergence rate of global PCE is known to be substantially lowered whenever singularities are present in the model response function. Thus, nsPCE is able to significantly improve the rate of convergence based on a properly chosen elemental decomposition of the parameter support.

To show that lower target error levels translate to improved predictions, parity plots for the three quantities of interest are shown in Fig 7. This is highly undesirable when using the PCE to predict specific response values, as opposed to predicting statistical quantities that average over the response values where individual points are not as important. The nsPCE surrogate models clearly mitigate this limitation of global PCE in a significant way since there are no outlier predictions in the set of 10, validation points.

Left plots show the validation RMSE versus the specified error tolerance. Right plots show the total number of model evaluations based on the sequential ED construction, with a maximum allowed number of samples. The global sparse basis-adaptive PCE results are also shown for comparison purposes. The left, middle, and right columns correspond to the glucose concentration at 7 hours, the xylose concentration at 8 hours, and the biomass concentration at 8 hours, respectively.

The parity plots for global sparse basis-adaptive PCE are overlaid for comparison purposes. Here, we focus on the inverse UQ problem of estimating the parameters from data, which can be greatly accelerated using nsPCE. The measurements are corrupted with noise as in (21), where the two quantities are, respectively, the vectors of measured data and noise at the i-th time point.

As Bayesian inference looks to characterize the full posterior density, it directly provides an explicit representation of the uncertainty in the parameter estimates. The prior and likelihood function must be specified before solving (23). We assume the same uniform priors as those used to construct the nsPCE surrogate models, though these can differ in general. The likelihood function describes the discrepancy between the observed data and the model predictions in a probabilistic way.

The likelihood function is specified by the data and noise models in (21) and (22), and is given by (24). Although we use a Gaussian likelihood here, the same Bayesian estimation approach can be applied to any choice of likelihood function, and thus can be easily modified to incorporate other potentially important factors, including sensor bias or asymmetric noise. Since (23) cannot be solved analytically, we must resort to sample-based approximations that rely on generating samples from the target posterior distribution.
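A minimal sketch of a Gaussian log-likelihood of the form (24), with hypothetical surrogate and noise arguments:

```python
import numpy as np

def log_likelihood(theta, data, surrogate, sigma):
    """Independent additive Gaussian noise with standard deviation sigma on
    each measurement; the nsPCE surrogate supplies the model predictions."""
    resid = (data - surrogate(theta)) / sigma
    return -0.5 * np.sum(resid ** 2) - data.size * np.log(sigma * np.sqrt(2 * np.pi))
```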

The proposed surrogate models can be used to accelerate any sampling-based method; however, we focus on SMC since this is a class of algorithms that can be fully parallelized. SMC is based on the concept of importance sampling, which can be implemented in an iterative fashion such that the posterior is updated every time a new measurement becomes available.

Resampling: resample to obtain particles with equal weights. When the algorithm stops at time k_f, the set of N_p particles targets the posterior distribution of interest. We use systematic resampling in Step 3 due to its computational simplicity and good empirical performance, though a variety of other methods are available [50]. Step 2 is usually the computational bottleneck because the model must be repeatedly solved in order to evaluate the likelihood weight factors using (24). We must then construct a total of 24 surrogate models before running the SMC algorithm.
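A minimal sketch of systematic resampling (Step 3), assuming the particle weights are already normalized:

```python
import numpy as np

def systematic_resample(weights, rng):
    """Draw one uniform, then take N_p evenly spaced pointers into the CDF
    of the normalized weights; returns the indices of surviving particles."""
    n = len(weights)
    positions = (rng.uniform() + np.arange(n)) / n
    return np.searchsorted(np.cumsum(weights), positions)

# usage: idx = systematic_resample(w, np.random.default_rng(0))
```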

The same basic strategy described in the previous section is used for constructing all 24 of the nsPCE surrogate models. Similarly to how the samples for the singularity time are used to initialize the ED in each element, we can store the list of state and time points generated when integrating the DFBA model and interpolate these points to calculate the extracellular concentrations at every time point of interest.

By keeping a working ED that is used to initialize each element at every time point, we can greatly limit the number of expensive DFBA simulations that represent the computational bottleneck in SMC. The basis-adaptive hybrid LAR method consistently estimated coefficients in less than 30 seconds, verifying that the DFBA simulations are the dominant cost in this case study.

To verify that the SMC algorithm approximately converged with this many particles, we performed 10 separate bootstrap runs that produced a set of very similar posterior densities. Note that a discussion on challenges and open issues in Bayesian estimation is provided in the Discussion section. The diagonal subplots represent marginal densities while the off-diagonal subplots represent two-dimensional projections of samples from the joint density. Blue denotes the posterior density while green denotes the prior density.

The red line represents the true parameter values used to generate synthetic data for estimation purposes. The estimated posterior density in Fig 8 provides interesting physical insights. Three of the parameters, K_g, K_z, and K_ig, are unobservable with the current data set, since their posterior (blue) and prior (green) densities are equivalent.


This observation could not easily be made before running the estimation procedure due to the nonlinear and indirect relationship between D and X. A change in the experimental conditions, such as the initial conditions, the controlled oxygen concentration, or the substrate feed profiles, could enhance the sensitivity of the data to the parameters K_g, K_z, and K_ig. Although the data are sensitive to u_g,max, u_z,max, and u_o, these parameters are highly correlated, as seen in the off-diagonal plots of their joint densities in Fig 8. Thus, the currently available data from a single batch are insufficient for accurately estimating all the parameters of interest.

The evolution of the marginal posterior densities of the observable parameters over time is shown in Fig 9. Since glucose is mostly consumed by 7. The density of u_z,max, however, is constant before 7. Each subplot shows the histogram of parameter posterior samples estimated using the sequential Monte Carlo method. The x-axis represents the range of values of the parameters and the y-axis represents frequencies. The red line represents the true parameter values. Let Y denote the vector of all model responses.

The forward UQ problem looks to characterize the uncertainty in the model predictions by propagating the uncertainty in the parameters through the model. This can involve estimating either the prior predictive distribution f(Y) (before any data have been collected) or the posterior predictive distribution f(Y|D) (after data have been obtained). The only difference between these two problems is whether the surrogate is evaluated at samples from the prior or from the posterior.

As expected, the prior predictive distributions are much wider than the posterior predictive distributions, indicating there is significant uncertainty in the predictions before incorporating data.


In addition, we see that many of these distributions have sharp changes and long tails due to the non-smooth behavior of the model responses, which can be accurately captured with the proposed nsPCE framework. It is also interesting to note that the posterior predictive distributions have low variance, even though the parameters are not perfectly estimated.

This highlights the impact that nonlinearity can have on both estimation and uncertainty propagation. Each subplot shows the histogram of samples of the model output obtained by substituting parameter samples into the surrogate. The x-axis represents the range of values of the model outputs and the y-axis represents frequencies. This case study is based on a synthetic metabolic network originally introduced in [31, Chapter 8]. The goal of this case study is to show that the proposed nsPCE method can be applied to problems with a larger number of parameters, as well as with alternative UQ approaches.

With a sufficiently good initial approximation, Newton iteration is shown to converge quadratically to an optimal solution. Finally, sufficient conditions for existence of a solution to the global problem are presented along with examples demonstrating that no solution exists when these conditions are not satisfied. The authors would like to thank George Labahn for his comments. The authors would also like to thank the two anonymous referees for their careful reading and comments.
