Gradient descent minimizes a cost function (e.g., Mean Squared Error) by iteratively updating the model parameters (m and c in linear regression) using the gradient (slope) of the cost function.
Initialize m and c randomly, then repeat for n iterations or until convergence:
- Slope_m = ∂Cost/∂m
- Slope_c = ∂Cost/∂c
- m = m - α * Slope_m
- c = c - α * Slope_c

(α is the learning rate.)

❗ The final m and c are approximations, not the exact global minimum, but the algorithm converges quickly to a useful solution.
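For concreteness, here is what those slopes work out to if the cost is Mean Squared Error, as in the example above; the notation N, x_i, y_i for the training points is introduced here for illustration:

$$
\text{Cost}(m, c) = \frac{1}{N} \sum_{i=1}^{N} \bigl(y_i - (m x_i + c)\bigr)^2
$$

$$
\frac{\partial \text{Cost}}{\partial m} = -\frac{2}{N} \sum_{i=1}^{N} x_i \bigl(y_i - (m x_i + c)\bigr),
\qquad
\frac{\partial \text{Cost}}{\partial c} = -\frac{2}{N} \sum_{i=1}^{N} \bigl(y_i - (m x_i + c)\bigr)
$$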
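A minimal runnable sketch of the loop in Python, assuming the MSE cost above; the function name, the synthetic data, and the hyperparameters (α = 0.01, 1000 iterations) are illustrative choices, not from the notes:

```python
import numpy as np

def gradient_descent(x, y, alpha=0.01, n_iterations=1000):
    """Fit y ≈ m*x + c by gradient descent on the MSE cost."""
    m, c = np.random.randn(), np.random.randn()  # initialize m and c randomly
    n = len(x)
    for _ in range(n_iterations):
        y_pred = m * x + c
        # Slopes: partial derivatives of MSE with respect to m and c
        slope_m = (-2 / n) * np.sum(x * (y - y_pred))
        slope_c = (-2 / n) * np.sum(y - y_pred)
        # Step against the gradient, scaled by the learning rate α
        m -= alpha * slope_m
        c -= alpha * slope_c
    return m, c

# Illustrative usage on noisy synthetic data with true m = 3, c = 2
rng = np.random.default_rng(0)
x = rng.uniform(0, 10, 100)
y = 3 * x + 2 + rng.normal(0, 1, 100)
m, c = gradient_descent(x, y)
print(f"m ≈ {m:.3f}, c ≈ {c:.3f}")  # approximations near 3 and 2, not exact
```

Note that if α is set too large here, the updates overshoot and diverge; too small, and 1000 iterations may not be enough to get close, which is why the learning rate matters.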