Frank–Wolfe algorithm

In mathematical optimization, the reduced gradient method of Frank and Wolfe is an iterative method for nonlinear programming. Also known as the Frank–Wolfe algorithm and the convex combination algorithm, it was proposed by Marguerite Frank and Philip Wolfe in 1956 as an algorithm for quadratic programming. In phase one, the method finds a feasible solution to the linear constraints, if one exists. Thereafter, at each iteration, it takes a descent step in the negative gradient direction, thereby reducing the objective function; this gradient-descent step is "reduced" so that the iterate remains in the polyhedral feasible region of the linear constraints. Because quadratic programming generalizes linear programming, the reduced gradient method generalizes Dantzig's simplex algorithm for linear programming.
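The descend-but-stay-feasible idea above can be sketched in a few lines of code. The sketch below is illustrative rather than a transcription of Frank and Wolfe's original procedure: it applies the iteration to a quadratic objective over the probability simplex, a feasible region assumed here because the linear subproblem then has a closed-form solution; the names `frank_wolfe_simplex`, `grad_f`, and `b` are introduced for this example.

```python
import numpy as np

def frank_wolfe_simplex(grad, x0, n_iters=2000):
    """Frank-Wolfe iteration specialized to the probability simplex.
    The linear subproblem (minimize a linear function over the polytope)
    is solved exactly by picking the vertex with the smallest gradient
    coordinate, so no LP solver is needed for this feasible region."""
    x = x0.copy()
    for k in range(n_iters):
        g = grad(x)
        # Linear minimization step: the minimizer of <g, s> over the
        # simplex is the standard basis vector at the smallest g_i.
        s = np.zeros_like(x)
        s[np.argmin(g)] = 1.0
        # Classic diminishing step size 2/(k+2); the update is a convex
        # combination of two feasible points, so x stays feasible.
        gamma = 2.0 / (k + 2.0)
        x = (1.0 - gamma) * x + gamma * s
    return x

# Toy quadratic program: f(x) = ||x - b||^2 with b inside the simplex,
# so the constrained minimizer is b itself.
b = np.array([0.2, 0.5, 0.3])
grad_f = lambda x: 2.0 * (x - b)
x0 = np.array([1.0, 0.0, 0.0])
x_star = frank_wolfe_simplex(grad_f, x0)
```

Over a general polyhedron the subproblem is a linear program and would be handed to an LP solver, which is the connection to Dantzig's simplex algorithm mentioned above.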

The reduced gradient method is an iterative method for nonlinear programming that need not be restricted to quadratic programming. Although it converges more slowly than competing methods and has been abandoned as a general-purpose method of nonlinear programming, it remains widely used for specially structured problems of large-scale optimization. In particular, the reduced gradient method remains popular and effective for finding approximate minimum-cost flows in transportation networks, which often have enormous size.

Problem statement

Minimize <math>f(\mathbf{x})</math> subject to <math>\mathbf{x} \in \mathcal{D}</math>, where the objective function <math>f</math> is convex and continuously differentiable and the feasible region <math>\mathcal{D}</math> is the compact convex polyhedron defined by the problem's linear constraints.
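Each iteration solves a linearized subproblem over the same feasible region and then moves part of the way toward its solution. Written out (a standard statement of the method, supplied here because the source formula is truncated):

<math>\mathbf{s}_k \in \arg\min_{\mathbf{s} \in \mathcal{D}} \nabla f(\mathbf{x}_k)^{\top} \mathbf{s}</math>

<math>\mathbf{x}_{k+1} = \mathbf{x}_k + \gamma_k (\mathbf{s}_k - \mathbf{x}_k), \qquad \gamma_k \in [0, 1]</math>

Since <math>\gamma_k \in [0,1]</math>, the new iterate is a convex combination of two feasible points and therefore remains feasible, which is the origin of the name "convex combination algorithm".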
