# Optimization Theory (SYSM6305)

## New Fitness Function Considering L2 Regularization

The performance of an EPR procedure depends mainly on its fitness function. A widely used fitness function is structural risk minimization (SRM) [3], which adds a model-complexity term (the size of the model) to the empirical error and thus penalizes a model's fitness according to its size. A further problem is that a relatively small amount of data increases the risk of overfitting, making the training error small but the testing error particularly large, which weakens the generalization ability of an EPR model. The use of regularization/penalty functions (e.g., the $L_0$, $L_1$, and $L_2$ regularizations) has therefore been suggested to avoid overfitting [4]. Among the various regularizations, the $L_2$ regularization is the one most commonly adopted [5]. Accordingly, a modified mathematical formulation of SRM incorporating the $L_2$ regularization was adopted in this study, given as:
$$\mathrm{SRM}=\frac{\mathrm{SSE}}{N}\left(1-\sqrt{\frac{n}{N}-\frac{n}{N}\log\left(\frac{n}{N}\right)+\frac{\log\left(\frac{n}{N}\right)}{2N}}\right)^{-1}+\lambda\|\boldsymbol{\omega}\|_2^2$$
with
$$\mathrm{SSE}=\sum_{i=1}^N\left(Y_{\mathrm{m},i}-Y_{\mathrm{p},i}\right)^2 \quad\text{and}\quad \|\boldsymbol{\omega}\|_2^2=\sum_{j=1}^n \omega_j^2=\boldsymbol{\omega}^{\mathrm{T}}\boldsymbol{\omega}$$
where $N$ is the number of data points on which the SRM is computed; $n$ is the number of model coefficients; $\mathbf{Y}_{\mathrm{m}}$ is the vector of measured values; $\mathbf{Y}_{\mathrm{p}}$ is the vector of predicted values; $\boldsymbol{\omega}$ is the vector of model coefficients; and $\lambda$ is the regularization parameter, which requires manual adjustment to find an appropriate value.
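The $L_2$-regularized SRM above can be sketched directly in code. The following is a minimal illustration (the function name and interface are our own, not part of the original procedure):

```python
import numpy as np

def srm_fitness(y_m, y_p, omega, lam):
    """L2-regularized structural risk minimization (SRM) fitness.

    y_m, y_p : measured and predicted values (length N)
    omega    : vector of the n model coefficients
    lam      : regularization parameter (tuned manually)
    """
    y_m, y_p = np.asarray(y_m, float), np.asarray(y_p, float)
    omega = np.asarray(omega, float)
    N, n = len(y_m), len(omega)
    sse = np.sum((y_m - y_p) ** 2)
    r = n / N  # assumes n < N, so the square-root argument stays valid
    complexity = 1.0 - np.sqrt(r - r * np.log(r) + np.log(r) / (2 * N))
    l2 = omega @ omega  # ||omega||_2^2
    return (sse / N) / complexity + lam * l2
```

Note that for a fixed data set the complexity factor depends only on the ratio $n/N$, so among models with equal SSE the fitness favors the one with fewer coefficients, while the $\lambda\|\boldsymbol{\omega}\|_2^2$ term additionally shrinks the coefficient magnitudes.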

## Adaptive Selection of Correlating Variables and Term Size

A reliable EPR model should strike a reasonable trade-off between predictive ability and generalization ability. As Wood [6] notes, simple yet adequate models are favored for practical reasons. An EPR procedure should therefore be combined with a model-selection process that keeps the model "simple" while minimizing the training error, so that the model also generalizes well (i.e., the testing error is also small). Here, model selection involves two aspects: choosing a suitable combination of correlating variables and an appropriate term size.

Figure 5.2 presents the proposed procedure, where $\boldsymbol{\theta}$ denotes the decision variables corresponding to the exponents of the EPR model, Comb is the index of the combination of correlating variables, and $m$ is the term size. Compared with the common EPR process, two additional integer variables, Comb and $m$, are appended to the vector of optimization variables. First, all variables in the initial generation are generated randomly within their domains. Next, a candidate combination of correlating variables is selected according to the value of Comb, and a candidate term size is chosen according to the value of $m$. An EPR model with unknown coefficients is then constructed according to Eq. (5.6). The vector of coefficients $\boldsymbol{a}$ is subsequently determined by regression between the measurements and predictions. Finally, the SRM fitness with $L_2$ regularization is computed to evaluate the performance of the EPR model, which determines whether the formula survives to the next generation of the DE evolution.

Once the stop criterion (e.g., the maximum number of generations) is reached, the whole process terminates; otherwise, it continues to the next generation.
As the number of generations grows, the appropriate combination of correlating variables and term size is automatically selected over numerous evaluations. Moreover, by adjusting the regularization parameter, the EPR model that best balances model complexity and generalization ability can ultimately be found.
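The adaptive-selection idea can be sketched with a simple random-search loop standing in for the DE evolution. Everything here is an illustrative assumption rather than the authors' implementation: the helper `candidate_model`, the toy data, and the simplified term construction (each term as a product of the selected variables raised to integer exponents) only mimic the structure of Eq. (5.6).

```python
import itertools
import numpy as np

rng = np.random.default_rng(0)

def candidate_model(X, y, exponents, comb, m, lam=1e-3):
    """Evaluate one EPR candidate (illustrative sketch, not Eq. (5.6) itself).

    comb selects a combination of correlating variables, m is the term size,
    and exponents (shape (m, n_vars)) plays the role of theta. Coefficients
    are fitted by least squares and scored with the L2-regularized SRM.
    """
    n_vars = X.shape[1]
    combos = [list(c) for r in range(1, n_vars + 1)
              for c in itertools.combinations(range(n_vars), r)]
    cols = combos[comb % len(combos)]
    # each term is a product of the chosen variables raised to its exponents
    terms = np.column_stack([
        np.prod(X[:, cols] ** exponents[j, :len(cols)], axis=1)
        for j in range(m)])
    A = np.column_stack([np.ones(len(y)), terms])   # bias + m terms
    a, *_ = np.linalg.lstsq(A, y, rcond=None)
    y_p = A @ a
    N, n = len(y), len(a)
    sse = np.sum((y - y_p) ** 2)
    r = n / N
    srm = (sse / N) / (1 - np.sqrt(r - r * np.log(r) + np.log(r) / (2 * N))) \
          + lam * (a @ a)
    return srm, a

# random-search loop standing in for DE: Comb and m evolve with the exponents
X = rng.uniform(1, 2, size=(40, 3))
y = 2.0 * X[:, 0] ** 2 * X[:, 1] + rng.normal(0, 0.01, 40)
best = (np.inf, None)
for _ in range(200):
    m = int(rng.integers(1, 4))                  # term size
    comb = int(rng.integers(0, 7))               # variable-combination index
    expo = rng.integers(0, 3, size=(m, 3))       # exponent decision variables
    srm, a = candidate_model(X, y, expo, comb, m)
    if srm < best[0]:
        best = (srm, (comb, m, expo))
```

A real DE implementation would mutate and recombine the candidate vectors between generations instead of resampling them independently, but the evaluation of each candidate, from combination selection through least-squares fitting to the SRM score, follows the same steps.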

