International Journal of Research and Innovation in Applied Science (IJRIAS)



Numerical Experiment with the Introduction of a Scaling Factor to the Multiplier Method to Solve Constrained Optimization Problems

Adebayo Kayode James1*, Dele-Rotimi Adejoke Olumide2, Adisa Isaac Olabisi3, Alabi Taiye John4, and Ademoroti Albert Olalekan5

1Department of Mathematics, Ekiti State University, Ado Ekiti, Ekiti State, Nigeria

2Department of Mathematics, Bamidele Olumilua University of Education, Science, and Technology, Ikere Ekiti, Ekiti State, Nigeria

3Department of Mathematics, Adeyemi University of Education, Ondo, Ondo State, Nigeria

4Department of Statistics, Kogi State Polytechnic, Lokoja, Kogi State, Nigeria

5Department of Physics, Bamidele Olumilua University of Education, Science, and Technology, Ikere Ekiti, Ekiti State, Nigeria

*Corresponding Author

DOI: https://doi.org/10.51584/IJRIAS.2025.100700128

Received: 30 May 2025; Accepted: 07 June 2025; Published: 21 August 2025

ABSTRACT

This paper discusses the use of the Conjugate Gradient Method (CGM) as a scaling factor for Newton's algorithm in solving constrained optimization problems. The constrained problem is first converted to an unconstrained problem via the multiplier method, after which a scaling based on the CGM of Hestenes and Stiefel is introduced; this forces the iterates of the resulting equation to converge quickly. Newton's algorithm, embedded in the algorithm of the multiplier method, carries out the minimization process. At each descent-direction search, a formulated dynamic is used to update the Lagrange multiplier, and this is repeated at each one-dimensional search until optimality is reached. The use of this method is not limited to optimization problems alone; it has been extended to optimal control problems of the Lagrange and Mayer forms, with considerable success recorded when the results obtained were compared with the results of other existing methods.

Keywords: Scaling Factor, Multiplier Method, Constrained Optimization, Penalty Parameter, Conjugate Gradient Method (CGM), Penalty Function.

INTRODUCTION

We study a collection of techniques for solving constrained optimization problems based on the quadratic augmented Lagrangian, in which the multiplier functions depend on the penalty parameters; this leads to a Lagrangian that is nonlinear in the multipliers. We also study methods for unconstrained multivariate problems, such as the Conjugate Gradient Method (CGM) and Newton's method, which are known for their reliability and fast convergence. The conjugate gradient method is preferred to Newton's method because it converges in fewer iterations, as supported by [1]. However, when a problem is subjected to one or more constraints, experience shows that the iterates of the conjugate gradient method may move away from the feasible region in their quest to locate an optimal path quickly, thereby wasting computer time, which is not a characteristic of a good algorithm according to [2] and [3].

On the other hand, Newton's method moves slowly in its search for the global minimum, thereby generating more iterations. To balance the two, the non-differentiable conjugate gradient method of Hestenes and Stiefel, proposed by [4] and [5] as

          (1)

was taken into consideration for the multiplier method that is of the form

       (2)

and Newton's method, which had been embedded in the multiplier algorithm (2), was considered for the minimization process of (1). Since the conjugate gradient method serves as the scaling factor for the multiplier method and the algorithm of Newton's method carries out the minimization process, the three methods are briefly discussed in the next sections.

Newton’s method

Suppose f is approximated by a quadratic function at x_k such that the equation below holds for the function:

f(x) ≈ f(x_k) + ∇f(x_k)ᵀ(x - x_k) + ½ (x - x_k)ᵀ H(x_k)(x - x_k),       (3)

where H(x_k) is the Hessian matrix of f at x_k. The method makes use of the second-order (quadratic) approximation of f at x_k and thus employs second-order information about f. The method takes into account the curvature of f at x_k in identifying better search directions than can be obtained via the gradient method, as opined by [6]. Minimizing the quadratic approximation of f in (3) with respect to each of the components of x and equating the resulting expression to zero gives:

∇f(x_k) + H(x_k)(x - x_k) = 0,          (4)

therefore

x_{k+1} = x_k - H(x_k)⁻¹ ∇f(x_k).       (5)

If f is quadratic, only one step is required to reach the minimum of f. However, for a general nonlinear objective function, the minimum of f cannot be reached in one step; therefore, Newton's method has to be modified so that (5) can be adjusted to conform to the form

   (6)

where 

if    then,

     (7)

By introducing a parameter for the step length into (5), (5) becomes

x_{k+1} = x_k - α_k H(x_k)⁻¹ ∇f(x_k),      (8)

The authors in [7] observed that the direction d_k is now given by d_k = -H(x_k)⁻¹ ∇f(x_k) and the step length by α_k (a scalar denoting the distance moved along the search direction), and (8) is applied iteratively until the terminating criteria are satisfied [8].
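
To make the iteration in (8) concrete, here is a minimal Python sketch (not the authors' implementation) of Newton's method with a step-length parameter; the test function, its gradient and Hessian, and the fixed step length are illustrative assumptions.

```python
import numpy as np

def newton_with_step_length(grad, hess, x0, alpha=1.0, tol=1e-8, max_iter=100):
    """Damped Newton iteration: x_{k+1} = x_k - alpha * H(x_k)^{-1} grad(x_k)."""
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        g = grad(x)
        if np.linalg.norm(g) < tol:      # terminating criterion on the gradient
            break
        d = np.linalg.solve(hess(x), g)  # Newton direction: solve H(x_k) d = grad(x_k)
        x = x - alpha * d                # move a distance alpha along -d
    return x

# Illustrative test function (assumed, not from the paper): f(x) = (x1 - 1)^4 + (x2 + 2)^2
grad = lambda x: np.array([4 * (x[0] - 1) ** 3, 2 * (x[1] + 2)])
hess = lambda x: np.array([[12 * (x[0] - 1) ** 2, 0.0], [0.0, 2.0]])
print(newton_with_step_length(grad, hess, x0=[3.0, 3.0]))  # expect roughly (1, -2)
```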

The Conjugate Gradient Method

The Conjugate Gradient Method (CGM) by [9] is one of the variations of the gradient method. In its simplest form, the gradient method uses the iterative scheme:

x_{i+1} = x_i - α_i g(x_i),  i = 0, 1, 2, …           (9)

to generate a sequence of vectors {x_i} that converges to the minimum of f, obtained by minimizing the function

          (10)

The parameter α_i that appears in (9) is referred to as the step length of the descent-direction sequence. In particular, if f is a function on the Hilbert space ℋ, then f admits a Taylor series expansion given as:

f(x) = a + ⟨x, b⟩ + ½ ⟨x, Ax⟩,          (11)

where a is a scalar, b ∈ ℋ, and A is a positive definite matrix that is symmetric and called the linear operator according to [10]. It can be shown, as in [11], that f possesses a unique minimum x* in ℋ and that the gradient of f vanishes at the minimum, g(x*) = b + Ax* = 0. The CGM algorithm for iteratively locating the minimum x* of f in ℋ, as described by [11], is as follows:

Step 1: The first element x_0 ∈ ℋ is guessed, while the remaining members of the sequence are computed with the aid of the formulae in Steps 2 through 6.

Step 2: The descent direction is computed using p_0 = -g_0 = -(b + Ax_0).       (12)

Step 3: Set x_{i+1} = x_i + α_i p_i, where the step length α_i = ⟨g_i, g_i⟩ / ⟨p_i, Ap_i⟩.      (13)

Step 4: Update the gradient value using the dynamic g_{i+1} = g_i + α_i Ap_i.    (14)

Step 5: Set p_{i+1} = -g_{i+1} + (⟨g_{i+1}, g_{i+1}⟩ / ⟨g_i, g_i⟩) p_i.        (15)

Step 6: If g_i = 0 for some i, terminate the sequence; else set i = i + 1 and return to Step 3.

In the iterative Steps 2 through 6 above, p_i denotes the descent direction at the i-th step of the algorithm, α_i is the step length of the descent sequence {x_i}, and g_i denotes the gradient of f at x_i. Steps 3, 4, and 5 of the algorithm reveal the crucial role of the linear operator A in determining the step length of the descent sequence and also in generating a conjugate direction of search.

The conjugate gradient method is an iterative method that enjoys quadratic convergence, ensuring that it converges in at most n iterations if no rounding-off errors are encountered. Starting with an initial estimate x_0 of the solution x*, one successively determines new estimates x_i of x*.

The estimate x_{i+1} is a value closer to x* than x_i; hence, at each step of the iteration, the residual r_i is computed. Normally, this vector can be used to assess the usability of the estimate x_i. However, this measure is not a reliable one, because it is possible to construct cases in which the squared residual increases at each step while the length of the error vector decreases, so round-off errors will occur except under very unusual circumstances.
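
For concreteness, the following is a minimal Python sketch of Steps 1 through 6 for the quadratic case f(x) = a + ⟨x, b⟩ + ½⟨x, Ax⟩; the matrix A, vector b, and starting point below are illustrative assumptions, not data from the paper.

```python
import numpy as np

def conjugate_gradient(A, b, x0, tol=1e-10, max_iter=None):
    """CGM for f(x) = a + <x, b> + 0.5 <x, A x> with A symmetric positive definite.

    g is the gradient b + A x, p the descent direction, and
    alpha the step length <g, g> / <p, A p>, as in Steps 2 through 6.
    """
    x = np.asarray(x0, dtype=float)
    n = x.size
    max_iter = max_iter or n                 # at most n iterations in exact arithmetic
    g = b + A @ x                            # gradient at the initial guess
    p = -g                                   # Step 2: initial descent direction
    for _ in range(max_iter):
        if np.linalg.norm(g) < tol:          # Step 6: terminate when the gradient vanishes
            break
        Ap = A @ p
        alpha = (g @ g) / (p @ Ap)           # Step 3: step length
        x = x + alpha * p                    # Step 3: new iterate
        g_new = g + alpha * Ap               # Step 4: gradient update
        beta = (g_new @ g_new) / (g @ g)     # Step 5: conjugacy coefficient
        p = -g_new + beta * p                # Step 5: new conjugate direction
        g = g_new
    return x

# Illustrative 3x3 example (assumed, not from the paper)
A = np.array([[4.0, 1.0, 0.0], [1.0, 3.0, 0.0], [0.0, 0.0, 2.0]])
b = np.array([-1.0, -2.0, -3.0])
x_star = conjugate_gradient(A, b, x0=np.zeros(3))
print(x_star, A @ x_star + b)                # the gradient should be ~0 after at most 3 steps
```

In exact arithmetic the loop terminates in at most n iterations, which reflects the at-most-n-iterations property referred to above.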

The Multiplier Method 

The method of multipliers, or the augmented Lagrangian method, reduces the possibility of ill-conditioning by introducing explicit Lagrange multiplier estimates into the function to be optimized. In contrast to the penalty function method, the multiplier method largely preserves smoothness and guarantees optimality within a short time without necessarily dragging the penalty parameter to infinity [12].

For clarity, let us consider the equality constraint problem below:

Minimize f(x),

subject to c_i(x) = 0, i = 1, 2, …, m.      (16)

The quadratic penalty function Q(x; μ) penalizes the constraint violations by squaring the infeasibilities and scaling them by μ/2; however, the approximate minimizers x_k of Q(x; μ_k) do not quite satisfy the feasibility conditions c_i(x) = 0 for i = 1, …, m. Instead, they are perturbed so that

c_i(x_k) ≈ -λ_i*/μ_k  for all i = 1, …, m,        (17)

just to be sure that c_i(x_k) → 0 as μ_k → ∞. To avoid this systematic perturbation, [13] observed that the multiplier function L_A(x, λ; μ) achieves this goal by including an explicit estimate of the Lagrange multipliers λ_i, based on the modified objective function

L_A(x, λ; μ) = f(x) - Σ_i λ_i c_i(x) + (μ/2) Σ_i c_i²(x),      (18)

∇_x L_A(x_k, λ^k; μ_k) = ∇f(x_k) - Σ_i [λ_i^k - μ_k c_i(x_k)] ∇c_i(x_k) ≈ 0.     (19)

By comparing (17) with

∇f(x*) - Σ_i λ_i* ∇c_i(x*) = 0,        (20)

We have the updated formula

λ_i* ≈ λ_i^k - μ_k c_i(x_k)  for all i = 1, …, m.    (21)

By rearranging this expression, we have c_i(x_k) ≈ -(1/μ_k)(λ_i* - λ_i^k), so we conclude that if λ^k is close to the optimal multiplier vector λ* [14], the infeasibility in x_k will be much smaller than 1/μ_k, rather than being proportional to 1/μ_k. The relation (21) immediately suggests a formula for improving our current estimate λ^k of the multiplier vector using the approximate minimizer x_k [15]; one can then set λ_i^{k+1} = λ_i^k - μ_k c_i(x_k) for all i.
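
The classical multiplier framework described above can be sketched in Python as follows; the inner unconstrained minimizer (SciPy's BFGS), the test problem, and the parameter values are illustrative assumptions, not the authors' code.

```python
import numpy as np
from scipy.optimize import minimize   # generic inner minimizer, used only for illustration

def method_of_multipliers(f, c, x0, lam0, mu=10.0, mu_growth=2.0, tol=1e-6, outer_iter=50):
    """Classical multiplier method for: minimize f(x) subject to c(x) = 0 (vector-valued c)."""
    x, lam = np.asarray(x0, float), np.asarray(lam0, float)
    for _ in range(outer_iter):
        # Augmented Lagrangian L_A(x, lam; mu) = f(x) - lam.c(x) + (mu/2) ||c(x)||^2, as in (18)
        L_A = lambda z: f(z) - lam @ c(z) + 0.5 * mu * np.sum(c(z) ** 2)
        x = minimize(L_A, x, method="BFGS").x   # approximate minimizer x_k
        if np.linalg.norm(c(x)) < tol:          # feasibility reached
            break
        lam = lam - mu * c(x)                   # multiplier update suggested by (21)
        mu *= mu_growth                         # modest increase of the penalty parameter
    return x, lam

# Illustrative problem (assumed): minimize x1^2 + x2^2 subject to x1 + x2 - 1 = 0
f = lambda x: x[0] ** 2 + x[1] ** 2
c = lambda x: np.array([x[0] + x[1] - 1.0])
x_opt, lam_opt = method_of_multipliers(f, c, x0=[0.0, 0.0], lam0=[0.0])
print(x_opt, lam_opt)    # expect x ~ (0.5, 0.5) and lam ~ 1.0
```

Note that the penalty parameter is increased only modestly here, in keeping with the point above that the multiplier updates, rather than a divergent penalty parameter, drive the infeasibility to zero.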

Theorem 1: 

Let x* be a local solution of (16) at which the LICQ is satisfied (i.e., the gradients ∇c_i(x*), i = 1, …, m, are linearly independent vectors) and the second-order sufficient conditions are satisfied for λ = λ*. Then there is a threshold value μ̄ such that, for all μ ≥ μ̄, x* is a strict local minimizer of L_A(x, λ*; μ), as opined by [16].

Proof

We prove the result by showing that x* satisfies the second-order sufficient conditions to be a strict local minimum of L_A(x, λ*; μ) for all μ that are sufficiently large, i.e.,

  1. ∇_x L_A(x*, λ*; μ) = 0,                       (22)
  2. ∇²_xx L_A(x*, λ*; μ) is positive definite.                     (23)

Since x* is a local solution of (16) at which the LICQ is satisfied, the first-order optimality conditions can be applied to deduce that c_i(x*) = 0 and ∇f(x*) = Σ_i λ_i* ∇c_i(x*), so that

∇_x L_A(x*, λ*; μ) = ∇f(x*) - Σ_i [λ_i* - μ c_i(x*)] ∇c_i(x*) = ∇f(x*) - Σ_i λ_i* ∇c_i(x*) = 0.      (24)

For the second part of (23), we write

∇²_xx L_A(x*, λ*; μ) = ∇²_xx L(x*, λ*) + μ AᵀA,      (25)

where A is the constraint gradient matrix evaluated at x*. If the claim in (25) is not true, then for each integer k ≥ 1, one could choose a vector w_k with ‖w_k‖ = 1 such that

w_kᵀ ∇²_xx L_A(x*, λ*; k) w_k = w_kᵀ ∇²_xx L(x*, λ*) w_k + k ‖A w_k‖² ≤ 0.    (26)

Therefore, as k → ∞, (26) gives rise to

‖A w_k‖² ≤ -(1/k) w_kᵀ ∇²_xx L(x*, λ*) w_k → 0.        (27)

Since the vectors w_k lie within a compact set (the unit sphere), they have an accumulation point w, and the limit of (27) implies that A w = 0. Moreover, by rearranging (26), one arrives at

w_kᵀ ∇²_xx L(x*, λ*) w_k ≤ -k ‖A w_k‖² ≤ 0.      (28)

So, taking the limit of both sides of (28) gives wᵀ ∇²_xx L(x*, λ*) w ≤ 0. However, this inequality contradicts the second-order sufficient conditions, which, when applied to (16), state that one must have wᵀ ∇²_xx L(x*, λ*) w > 0 for all non-zero vectors w with A w = 0. Hence, the second part of (23) holds for all μ sufficiently large.
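
As a small numerical illustration of the theorem (an assumed example, not from the paper), the snippet below checks that the Hessian of the augmented Lagrangian, ∇²_xx L(x*, λ*) + μ AᵀA, becomes positive definite once μ exceeds a threshold, even though ∇²_xx L itself is indefinite, provided it is positive on the null space of A.

```python
import numpy as np

# Assumed illustrative data: Lagrangian Hessian at (x*, lam*) and constraint gradient matrix A.
H_L = np.array([[1.0, 0.0], [0.0, -2.0]])   # indefinite, but positive on {w : A w = 0}
A = np.array([[0.0, 1.0]])                   # one constraint gradient; its null space is the x1-axis

for mu in [0.5, 1.0, 2.0, 5.0]:
    H_aug = H_L + mu * A.T @ A               # Hessian of the augmented Lagrangian, as in (25)
    eigs = np.linalg.eigvalsh(H_aug)
    print(f"mu = {mu:4.1f}  eigenvalues = {eigs}  positive definite: {eigs.min() > 0}")
# Expected: indefinite for mu <= 2, positive definite once mu > 2 (the threshold for this example)
```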

SCALED MULTIPLIER METHOD 

The scaled multiplier method (SMM) preserves the smoothness of Newton's method [17], the fast convergence of the conjugate gradient method [18], and the robustness of the multiplier method [5], to form an algorithm that is free of ill-conditioning and does not necessarily drag the penalty parameter to infinity before optimality is guaranteed. Unlike the scaling of the entire process in [19], the SMM introduces the scaling factor into the classical multiplier method itself.

The method transforms a constrained problem of the type (16) into an unconstrained one of the form (2), using the function (1) of Hestenes and Stiefel [20] as a scaling factor for the function (2). Newton's method for solving the system of necessary optimality conditions, i.e., Newton's method for unconstrained optimization problems, was considered for the minimization, and the updating of λ was embedded into each iteration of Newton's method until the indicated conditions for optimality were satisfied.

The technique of changing λ at each one-dimensional search was tried on many problems with great success. The complete iterative procedure of the method, after transformation of the constrained problem to an unconstrained one, can be summarized as follows:

(i) Scale the multiplier method to conform with the function , where  is the function value of  at ,  is a positive definite, symmetric linear operator, and  lies in the Hilbert space.

(ii) Knowing the initial guess, choose  as the gradient, where  is a function of  and .

(iii) Compute .

(iv) Update .

(v) Use the formula  to update the value of , where the constraint equation(s) are given as c.

Steps (i) through (v) are repeated until one of the specified stopping conditions is met; a structural sketch of this loop in code is given below.
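
Since the exact scaling function (1) is not reproduced here, the following Python sketch only illustrates the overall structure of steps (i) through (v): an augmented Lagrangian minimized by Newton steps, with the multiplier updated by λ ← λ - μ c(x) after each one-dimensional move. The placeholder scaling factor, the test problem, and the parameter values are assumptions for illustration, not the authors' formulation.

```python
import numpy as np

def scaled_multiplier_sketch(grad_f, hess_f, c, jac_c, x0, lam0,
                             mu=10.0, scale=1.0, tol=1e-8, max_iter=100):
    """Structural sketch of steps (i)-(v): Newton steps on the augmented Lagrangian,
    with the Lagrange multiplier updated after every descent-direction search.

    `scale` stands in for the scaling factor derived from (1); here it simply
    scales the Newton direction, which is not the paper's exact construction.
    """
    x, lam = np.asarray(x0, float), np.asarray(lam0, float)
    for _ in range(max_iter):
        cx, J = c(x), jac_c(x)
        # Gradient and (Gauss-Newton style) Hessian of L_A(x, lam; mu) = f - lam.c + (mu/2)||c||^2
        g = grad_f(x) - J.T @ lam + mu * J.T @ cx
        H = hess_f(x) + mu * J.T @ J
        if np.linalg.norm(g) < tol and np.linalg.norm(cx) < tol:
            break                                # optimality and feasibility reached
        d = np.linalg.solve(H, g)                # Newton descent direction
        x = x - scale * d                        # scaled one-dimensional move
        lam = lam - mu * c(x)                    # step (v): multiplier update
    return x, lam

# Same illustrative problem as before: minimize x1^2 + x2^2 subject to x1 + x2 - 1 = 0
grad_f = lambda x: 2.0 * x
hess_f = lambda x: 2.0 * np.eye(2)
c = lambda x: np.array([x[0] + x[1] - 1.0])
jac_c = lambda x: np.array([[1.0, 1.0]])
print(scaled_multiplier_sketch(grad_f, hess_f, c, jac_c, x0=[0.0, 0.0], lam0=[0.0]))
```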

Data and Analysis

Having itemized the algorithm of the Scaled Multiplier method, we now apply the method to the following constrained optimization problems:

Problem 1:

Minimize 

subject to 

Problem 2:

Minimize 

subject to 

Problem 3:

Minimize 

subject to 

Comparison of the Numerical Results

To ensure a wide range of comparison, the Scaled Multiplier Method (SMM) is compared with other methods, namely the Penalty Function Method (PFM), the Lagrange Multiplier (LM) method, and the Extended Runge-Kutta Method (ERKM) of [21]. The ERKM is the closest to the SMM because of its modification, but the SMM performs credibly well due to the scaling factor. The comparison reflects only the iteration at which the results of the methods coincide and the corresponding values at that iteration.

Table 1: Table of results for Problem 1

Methods | Iterations

Table 2: Table of results for Problem 2

Methods | Iterations

Table 3: Table of results for Problem 3

Methods | Iterations

CONCLUSION

Unlike the ERKM, PFM, and LM, the SMM behaves relatively well in terms of fewer iterations to reach the convergence point; a relatively very low penalty parameter gives the desired result, as against situations where large penalty parameter values have to be used in the PFM. Moreover, the PFM has to go through many iterations before its result coincides with, or comes close to, the existing results.

This paper clearly shows the derivation and numerical implementation of the scaling factor in the multiplier method, which makes the new technique easy to apply to constrained optimization problems. In the near future, we hope to devote more attention to the method's application to optimization problems with both equality and inequality constraints.

REFERENCES

  1. Nash, S. G. and Sofer, A., (1995), Linear and Nonlinear Programming, McGraw-Hill, New York.
  2. Boggs, P. T., Kearsley, A. J., and Tolle, J. W., (1999), A practical algorithm for general large-scale nonlinear optimization problems, SIAM J. Opt. 9(3), 755-778.
  3. Nocedal, J. and Wright, S. J., (1999), Numerical Optimization, Springer, New York (U.S.A).
  4. Brent, R. P., (1973), Algorithms for Minimization without Derivatives, Prentice-Hall, Englewood Cliffs, New Jersey.
  5. Hestenes, M. R., (1969), Multiplier and gradient methods, Journal of Optimization Theory and Applications, Vol. 4, No. 5, 303-320.
  6. Rockafellar, R. T., (2005), Multiplier method of Hestenes and Powell applied to convex programming, Journal of Optimization Theory and Applications, Vol. 4, No. 4.
  7. Aderibigbe, F. M., Dele-Rotimi, A. O., and Kayode James Adebayo, (2015), On Application of a New Quasi-Newton Algorithm for Solving Optimal Control Problems. Pure and Applied Mathematics Journal. Vol. 4, No. 2, pp. 52-56. doi: 10.11648/j.pamj.20150402.14.
  8. Schuidy, S. B., (1992), A method of multiplier for Mathematical programming problems with equality and Inequality   constraints, Journal of optimization theory and applications, Vol. 4, No 5.
  9. Oke, M. O., Oyelade, T. A., Adebayo, K. J., and Adenipekun, E. A., (2021), On a Modified Higher Order Form of Conjugate Gradient Method for Solving Some Optimal Control Problem, Global Science Journal, Volume 9, Issue 6, pp 244-251.
  10. Adebayo, K. J., Aderibigbe, F. M., Ayinde, S. O., Olaosebikan, T. E., Adisa, I. O., Akinmuyise, F. M., Gbenro, S. O., Obayomi, A. A., and Dele-Rotimi, A. O., (2024), On the Introduction of a Constructed Operator to an Extended Conjugate Gradient Method (ECGM) Algorithm, Journal of Xi’an Shiyou University, Natural Science Edition, Vol. 20 (04), pp. 562-570.
  11. Hasdorff, L., (1976), Gradient Optimization and Nonlinear Control, J. Wiley and Sons, New York.
  12. Tripathi, S. S. and Narendra, K. S., (1972), Constrained optimization using multiplier method, Journal of optimization theory and applications, vol. 9, No1, 59-63.
  13. Olorunsola, S. A., Olaosebikan, T. E., and Adebayo, K. J., (2014), Numerical Experiments with the Lagrange Multiplier and Conjugate Gradient Methods (ILMCGM). American Journal of Applied Mathematics. Vol. 2, No. 6, pp. 221-226. doi: 10.11648/j.ajam.20140206.15
  14. Conn, A. R., Gould, N. I. M., and Toint, Ph. L., (1991), A globally convergent augmented Lagrangian algorithm for optimization with general constraints and simple bounds, SIAM J. Numer. Anal., Vol. 28, No. 2, 545-572.
  15. Snyman, J. A., (2005), Practical Mathematical Optimization, Prentice-Hall, Englewood cliffs, New Jersey.
  16. Yamamoto, T., (2000), Optimization and Nonlinear equations, Journal of computational and applied Mathematics 124, 9-19.
  17. Edgar, T. F., Himmelblau, D. M., and Lasdon, L. S., (2001), Optimization of Chemical Processes, McGraw-Hill, New York.
  18. Powell, M. J. D., (1969), A method for nonlinear constraints in minimization problems, in Optimization (R. Fletcher, ed.), Academic Press, New York.
  19. Adebayo, K. J., Ademoroti, A. O., Ayinde, S. O., Akinmuyise, M. F., and Olaosebikan, T. E., (2025), Solution to Bolza Form Continuous-Time Linear Regulator Problems (CLRP) Using Scaled Extended Conjugate Gradient Method (SECGM) Algorithm, Journal of Emerging Technologies and Innovative Research (JETIR), Volume 12, Issue 5, 493-501.
  20. Hestenes, M. R. and Stiefel, E., (1952), Methods of conjugate gradients for solving linear systems, J. Res. Natl. Bur. Stand. 49, 409-436.
  21. Akinmuyise, Mathew Folorunsho, Adebayo, Kayode James, Adisa, Olabisi, Olaosebikan, Emmanuel Temitayo, Gbenro, Sunday Oluwaseun, Dele-Rotimi, Adejoke Olumide, Obayomi, Abraham Adesoji, Ayinde, Samuel Olukayode, (2024), Extended Runge-Kutta Method (ERKM) Algorithm for the Solution of Optimal Control Problems (OCP), Vol. 19, Issue 2, pp. 471-478, https://doi.org/10.5281/zenodo.10049652#180.
