First of all, let’s define our hyperparameters. As in many other metaheuristic algorithms, these variables should be tuned along the way, and there is no universal set of values. But let’s stick with these:
POP_SIZE = 10 #population size
MAX_ITER = 30 #the number of optimization iterations
w = 0.2 #inertia weight
c1 = 1 #personal acceleration factor
c2 = 2 #social acceleration factor
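A quick note if you’re jumping in at this section: the snippets below assume NumPy and Matplotlib are imported, that rnd is an alias for numpy.random (as the calls suggest), and that a test function plus a plotting grid (x, y) were defined earlier in the article. Here is a minimal stand-in setup; the actual function used in the original article may differ, but any multimodal curve on [-10, 3] will do:

import numpy as np
import matplotlib.pyplot as plt
from numpy import random as rnd  #alias used in the snippets below

def function(x):
    #stand-in test function (an assumption, not necessarily the article's original);
    #it is multimodal on [-10, 3], with a local minimum near x ≈ -2.3
    #and the global minimum near x ≈ -8.1
    return x**2 * np.sin(x)

x = np.linspace(-10, 3, 500) #grid used to draw the function curve
y = function(x)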
Now let’s create a function that generates a random population:
def populate(size):
    x1, x2 = -10, 3 #x1, x2 = left and right boundaries of our X axis
    pop = rnd.uniform(x1, x2, size) #size = number of particles in the population
    return pop
If we visualize it, we’ll get something like this:
x1 = populate(50)
y1 = function(x1)

plt.plot(x, y, lw=3, label='Func to optimize')
plt.plot(x1, y1, marker='o', ls='', label='Particles')
plt.xlabel('x')
plt.ylabel('y')
plt.legend()
plt.grid(True)
plt.show()
Here you can see that I randomly initialized a population of 50 particles, some of which are already close to the solution.
Now let’s implement the PSO algorithm itself. I’ve commented every line of the code, but if you have any questions, feel free to ask in the comments below.
"""Particle Swarm Optimization (PSO)"""
particles = populate(POP_SIZE) #generating a set of particles
velocities = np.zeros(np.shape(particles)) #velocities of the particles
gains = -np.array(function(particles)) #calculating function values for the populationbest_positions = np.copy(particles) #it's our first iteration, so all positions are the best
swarm_best_position = particles[np.argmax(gains)] #x with with the highest gain
swarm_best_gain = np.max(gains) #highest gain
l = np.empty((MAX_ITER, POP_SIZE)) #array to collect all pops to visualize afterwards
for i in range(MAX_ITER):
l[i] = np.array(np.copy(particles)) #collecting a pop to visualize
r1 = rnd.uniform(0, 1, POP_SIZE) #defining a random coefficient for personal behavior
r2 = rnd.uniform(0, 1, POP_SIZE) #defining a random coefficient for social behavior
velocities = np.array(w * velocities + c1 * r1 * (best_positions - particles) + c2 * r2 * (swarm_best_position - particles)) #calculating velocities
particles+=velocities #updating position by adding the velocity
new_gains = -np.array(function(particles)) #calculating new gains
idx = np.where(new_gains > gains) #getting index of Xs, which have a greater gain now
best_positions[idx] = particles[idx] #updating the best positions with the new particles
gains[idx] = new_gains[idx] #updating gains
if np.max(new_gains) > swarm_best_gain: #if current maxima is greateer than across all previous iters, than assign
swarm_best_position = particles[np.argmax(new_gains)] #assigning the best candidate solution
swarm_best_gain = np.max(new_gains) #assigning the best gain
print(f'Iteration {i+1} \tGain: {swarm_best_gain}')
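For reference, the velocity line above implements the classic PSO update rule: each particle blends its previous velocity (scaled by the inertia weight w), a pull toward its personal best (scaled by c₁·r₁), and a pull toward the swarm’s best (scaled by c₂·r₂):

vᵢ ← w·vᵢ + c₁·r₁·(pᵢ − xᵢ) + c₂·r₂·(g − xᵢ)
xᵢ ← xᵢ + vᵢ

Here xᵢ and vᵢ are the position and velocity of particle i, pᵢ is its personal best position, and g is the swarm’s best. Also note that we negate the function values to turn them into “gains”: maximizing −f(x) is the same as minimizing f(x).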
After 30 iterations we’ve got this:
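(The plot itself lives in the original post. If you want to recreate something like it from the positions collected in l, here is a minimal sketch; which snapshots to show is my choice, not the article’s:)

#plotting the swarm at a few snapshots of the run
for it in [0, 4, 14, MAX_ITER - 1]: #arbitrary iterations to display
    plt.plot(l[it], function(l[it]), marker='o', ls='', label=f'Iteration {it + 1}')

plt.plot(x, y, lw=1, color='k', label='Func to optimize')
plt.xlabel('x')
plt.ylabel('y')
plt.legend()
plt.grid(True)
plt.show()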
As you can see, the algorithm fell into a local minimum, which is not what we wanted. That’s why we need to tune our hyperparameters and start again. This time I decided to set the inertia weight w=0.8, so the previous velocity now has a greater impact on the current state.
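To avoid re-running the cell by hand for every experiment, one option (my refactor, not part of the original code, but it behaves the same way) is to wrap the loop in a function and pass the hyperparameters in:

def run_pso(pop_size=POP_SIZE, max_iter=MAX_ITER, w=0.8, c1=1, c2=2):
    """Run PSO and return (best position, best gain, position history)."""
    particles = populate(pop_size)
    velocities = np.zeros(pop_size)
    gains = -np.array(function(particles))
    best_positions = np.copy(particles)
    swarm_best_position = particles[np.argmax(gains)]
    swarm_best_gain = np.max(gains)
    history = np.empty((max_iter, pop_size))

    for i in range(max_iter):
        history[i] = particles
        r1 = rnd.uniform(0, 1, pop_size)
        r2 = rnd.uniform(0, 1, pop_size)
        velocities = (w * velocities
                      + c1 * r1 * (best_positions - particles)
                      + c2 * r2 * (swarm_best_position - particles))
        particles = particles + velocities
        new_gains = -np.array(function(particles))
        idx = new_gains > gains
        best_positions[idx] = particles[idx]
        gains[idx] = new_gains[idx]
        if np.max(new_gains) > swarm_best_gain:
            swarm_best_position = particles[np.argmax(new_gains)]
            swarm_best_gain = np.max(new_gains)

    return swarm_best_position, swarm_best_gain, history

best_x, best_gain, l = run_pso(w=0.8) #heavier inertia this time
print(f'Best x: {best_x:.4f}, best gain: {best_gain:.4f}')

With w=0.8 the particles keep more momentum, so they can fly past a local minimum instead of settling into it.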
And voilà, we’ve reached the global minimum of the function. I strongly encourage you to play around with POP_SIZE, c₁ and c₂. It will give you a better understanding of the code and the idea behind PSO. If you’re interested, you can make the task harder: optimize some 3D function and build a nice visualization.
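If you take up the 3D challenge, the main change is that each particle’s position becomes a 2D point, so every array gets an extra axis. Here is a minimal sketch; I picked the Rastrigin function as the surface to minimize (an assumption, any 2D test function works) and reused the tuned hyperparameters:

def rastrigin(p):
    #p has shape (n_particles, 2); the global minimum is 0 at (0, 0)
    return 10 * p.shape[1] + np.sum(p**2 - 10 * np.cos(2 * np.pi * p), axis=1)

n_particles = 30
particles = rnd.uniform(-5.12, 5.12, (n_particles, 2)) #2D positions now
velocities = np.zeros((n_particles, 2))
gains = -rastrigin(particles)
best_positions = np.copy(particles)
swarm_best_position = particles[np.argmax(gains)]
swarm_best_gain = np.max(gains)

for i in range(100):
    r1 = rnd.uniform(0, 1, (n_particles, 1)) #one random factor per particle,
    r2 = rnd.uniform(0, 1, (n_particles, 1)) #broadcast across both coordinates
    velocities = (0.8 * velocities
                  + 1 * r1 * (best_positions - particles)
                  + 2 * r2 * (swarm_best_position - particles))
    particles = particles + velocities
    new_gains = -rastrigin(particles)
    idx = new_gains > gains
    best_positions[idx] = particles[idx]
    gains[idx] = new_gains[idx]
    if np.max(new_gains) > swarm_best_gain:
        swarm_best_position = particles[np.argmax(new_gains)]
        swarm_best_gain = np.max(new_gains)

print('Best point:', swarm_best_position, 'value:', -swarm_best_gain)

For the visualization, you can draw the surface with Matplotlib’s plot_surface (or a contour plot) and scatter the particles on top of it, iteration by iteration.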
All my articles on Medium are free and open-access, that’s why I’d really appreciate it if you followed me here!
P.S. I’m extremely passionate about (Geo)Data Science, ML/AI and Climate Change, so if you want to work together on some project, please reach out to me on LinkedIn.
🛰️Follow for more🛰️