
International Journal of Research and Scientific Innovation (IJRSI) | Volume V, Issue IV, April 2018 | ISSN 2321–2705

A Gradient-Based Optimization Algorithm for Ridge Regression by Using R

Mayooran, Thevaraja


  Department of Mathematics and Statistics, Minnesota State University, Mankato, USA

Abstract: Ridge regression, or Tikhonov regularization, is a useful shrinkage method for regression problems. The main idea of ridge regression is to impose an L2 constraint on the coefficients in the regularization step. It has been applied to several models in regression analysis, such as kernel machines, smoothing splines, copula theory, and multiclass logistic models. In this study, we discuss simple linear regression and ridge regression parameter estimation via a gradient-based optimization algorithm implemented in R, and we validate the results with a worked example, which suggests that a gradient-based optimization algorithm is an effective method for fitting ridge regression.

Keywords: Ridge regression, regularization, gradient-based optimization algorithm

I. INTRODUCTION

Ridge regression is a form of regularized regression. Such methods seek to alleviate the consequences of multicollinearity: when predictor variables are highly correlated, a large coefficient estimate on one variable may be offset by a large estimate on another. Regularization imposes an upper threshold on the values taken by the coefficients, thereby producing a more parsimonious solution and a set of coefficients with smaller variance. Much of the published work on L1 regularization has focused exclusively on publicly available benchmark datasets. Among the more ambitious and diverse applications, Sardy and Bruce applied the method to the detection of incoming radar signatures, Hao Zhang, Grace Wahba et al. applied Basis Pursuit to epidemiological studies, and Zheng, Michael et al. applied logistic regression with an L1 penalty to identify features associated with program crashes. More recently, Yongdai Kim and Jinseog Kim presented a highly efficient but suboptimal approach to the L1 regularization problem, using a gradient descent-based approach related to L1-regularized regression boosting.
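To make this concrete, ridge regression estimates the coefficients beta by minimizing the penalized least-squares objective (1/n)*||y - X beta||^2 + lambda*||beta||^2, whose gradient with respect to beta is -(2/n)*X'(y - X beta) + 2*lambda*beta. The R snippet below is a minimal sketch of a gradient-descent solver for this objective; the function name ridge_gd, the step size eta, the iteration count, and the simulated data are illustrative assumptions rather than the paper's exact code.

# Minimal sketch: gradient descent for the ridge objective
# (1/n)*||y - X %*% beta||^2 + lambda*||beta||^2.
ridge_gd <- function(X, y, lambda = 1, eta = 0.01, n_iter = 1000) {
  n <- nrow(X)
  beta <- rep(0, ncol(X))                                # start from the zero vector
  for (i in seq_len(n_iter)) {
    resid <- y - X %*% beta                              # current residuals
    grad  <- -2 * t(X) %*% resid / n + 2 * lambda * beta # gradient of the ridge loss
    beta  <- beta - eta * grad                           # descent step
  }
  drop(beta)                                             # return a plain numeric vector
}

# Hypothetical usage on simulated data:
set.seed(1)
X <- matrix(rnorm(300), nrow = 100, ncol = 3)
y <- X %*% c(2, -1, 0.5) + rnorm(100)
ridge_gd(X, y, lambda = 0.1)

For small lambda the iterates approach the ordinary least-squares fit, while larger values shrink the coefficients toward zero, which is exactly the behaviour the L2 penalty is designed to produce.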




