Training on datasets - Difference between two optimization techniques

From what I have learnt so far, there are two optimization techniques used for training models on datasets:

a) Gradient Descent (esp. SGD, i.e. Stochastic Gradient Descent) &
b) QP (Quadratic Programming)

Both of these algorithms are used for optimization, and both run behind the scenes during training.

May I know what the differences between the SGD and QP techniques are?

Hi @ss7dec,

Here is a link to a paper that clearly explains the two optimization techniques along with the mathematical details.

Optimization Techniques

I hope it helps.
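To make the distinction concrete: SGD improves the parameters iteratively, taking a small gradient step per training example, while QP solves a quadratic objective directly (e.g. the dual problem in SVMs). Here is a toy sketch of both on the same least-squares problem, which is itself an unconstrained quadratic program; the data and hyperparameters are made-up for illustration, not from the linked paper.

```python
import numpy as np

# Toy data (assumed example): y ≈ 1 + 2*x with a little noise
rng = np.random.default_rng(0)
X = np.column_stack([np.ones(100), rng.uniform(-1, 1, 100)])  # bias + feature
y = X @ np.array([1.0, 2.0]) + rng.normal(0, 0.01, 100)

# --- SGD: iterative, one randomly chosen sample per update ---
def sgd(X, y, lr=0.1, epochs=200):
    w = np.zeros(X.shape[1])
    for _ in range(epochs):
        for i in rng.permutation(len(y)):
            # gradient of the per-sample loss 0.5 * (x_i·w - y_i)^2
            grad = (X[i] @ w - y[i]) * X[i]
            w -= lr * grad
    return w

w_sgd = sgd(X, y)

# --- QP view: minimize 0.5 w^T Q w - c^T w with Q = X^T X, c = X^T y ---
# With no constraints, the optimum is the solution of the linear system Q w = c.
Q = X.T @ X
c = X.T @ y
w_qp = np.linalg.solve(Q, c)

print(w_sgd, w_qp)  # both land near [1, 2]
```

The takeaway: SGD only ever needs gradients and scales to huge datasets, whereas a QP solver exploits the quadratic structure (and handles constraints, which is why it appears in classical SVM training) but is more expensive per problem size.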

Thanks a lot, Ankur, for sharing the link! :star_struck: :+1: :grinning: :pray:t6:


Hi @ss7dec,

No problem, mate. Happy to help.