Figure 1: Particle swarm optimization on a convex function
We saw different gradient-based optimizations in the previous examples. Particle swarm optimization is a population method. The algorithm starts by initializing multiple particles in the search space. Each particle keeps track of its own best known position as well as the best position found across the whole swarm. Each particle's position and velocity are updated iteratively so that the particle moves toward the best solutions found so far. Since multiple particles search the space simultaneously, there is a higher chance of finding the global optimum.
The steps of the algorithm are as follows:
Initialize each particle's position (vector $ x_i $), best known position (vector $ p_i $), and initial velocity (vector $ v_i $)
At each iteration, update the velocity first and then the position (a minimal sketch of these updates follows the definitions below): $$ v_i = w * v_i + c_1 * r_1 * (p_i - x_i) + c_2 * r_2 * (p_{best} - x_i) $$ $$ x_i = x_i + v_i $$
Where
$ w $ is the inertia weight
$ c_1 $ is the cognitive (personal) coefficient and $ c_2 $ is the social coefficient
$ r_1 $ and $ r_2 $ are random numbers drawn uniformly from $ U(0, 1) $
$ p_i $ is the best position found so far by the current particle
$ p_{best} $ is the best position across all the particles
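To make the update rules concrete, here is a minimal NumPy sketch of the algorithm above. The function name `pso`, the bounds, and all hyperparameter values are illustrative choices, not part of the original example:

```python
import numpy as np

def pso(f, bounds, n_particles=30, n_iters=100, w=0.7, c1=1.5, c2=1.5):
    """Minimize f over a box using a basic particle swarm (illustrative sketch)."""
    rng = np.random.default_rng(0)
    lo, hi = bounds
    dim = lo.shape[0]
    # Initialize positions uniformly inside the box; velocities start at zero
    x = rng.uniform(lo, hi, size=(n_particles, dim))
    v = np.zeros_like(x)
    p = x.copy()                       # each particle's best known position p_i
    p_cost = np.apply_along_axis(f, 1, x)
    g = p[np.argmin(p_cost)].copy()    # best position across all particles (p_best)
    for _ in range(n_iters):
        r1 = rng.random((n_particles, dim))
        r2 = rng.random((n_particles, dim))
        # Velocity update: inertia + cognitive pull toward p_i + social pull toward p_best
        v = w * v + c1 * r1 * (p - x) + c2 * r2 * (g - x)
        x = x + v
        # Update each particle's best known position, then the global best
        cost = np.apply_along_axis(f, 1, x)
        improved = cost < p_cost
        p[improved], p_cost[improved] = x[improved], cost[improved]
        g = p[np.argmin(p_cost)].copy()
    return g, p_cost.min()

# Minimize the convex sphere function f(x) = sum(x**2)
best_pos, best_cost = pso(lambda x: np.sum(x**2),
                          (np.array([-5.0, -5.0]), np.array([5.0, 5.0])))
print(best_pos, best_cost)
```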
The example below uses the pyswarms package to optimize a convex function.
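A minimal sketch of such a call (assuming pyswarms >= 1.0, where the built-in sphere objective is exposed as `fx.sphere`; the hyperparameter values here are illustrative):

```python
import pyswarms as ps
from pyswarms.utils.functions import single_obj as fx

# PSO hyperparameters: cognitive (c1), social (c2), and inertia (w) weights
options = {'c1': 0.5, 'c2': 0.3, 'w': 0.9}

# 10 particles searching a 2-dimensional space
optimizer = ps.single.GlobalBestPSO(n_particles=10, dimensions=2,
                                    options=options)

# Minimize the convex sphere function f(x) = sum(x**2)
best_cost, best_pos = optimizer.optimize(fx.sphere, iters=100)
print(best_cost, best_pos)
```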
Better applications of the particle swarm approach to non-convex functions can be seen in Particle Swarm Optimization on Non-Convex Function and Particle Swarm Optimization on Six-Hump Camel Back Function.