A peek at genetic algorithms

In computer science, genetic algorithms are metaheuristics that help solve optimization problems. These evolutionary algorithms are inspired by the Darwinian evolution process and rely on iteratively breeding generations to get from a (mostly random) initial population of individuals to one that contains many individuals “well-fit” for the task at hand.

While I was looking at AI models and machine learning stuff, I stumbled upon this concept we had glossed over during one of my mathematical optimization courses this year: genetic algorithms.

I decided to write a short piece of Python code that implements a GeneticModel class meant to provide an easy-to-use interface to create, breed and optimize populations with a genetic algorithm. You can inherit from this class to make your own model – just wait and see below how! The code is available right here.

Most of it is based on my university course, however some ideas were inspired by this article from Louis Nicolle.

Quick note: the project uses the NumPy scientific Python library, so make sure you have it installed!

About genetic algorithms

Genetic algorithms implement common concepts of genetics: mostly natural selection, breeding through DNA crossover, and random mutations.

Schematic representation of the 3 evolutionary processes simulated by a genetic algorithm
  • to mimic the natural selection mechanism, genetic models usually consider that the current individuals with the highest performance are more likely to reproduce and therefore become breeders for the next generation – this is called “elitism”; however, to maintain variability, our GeneticModel also picks a given number of “lucky” individuals at random (this reduces the risk that the algorithm gets stuck in a local optimum and never tries better solutions)
  • the crossover equivalent is a function that takes in two parent individuals and “mixes” them in some way; this is often an element-wise multiplication, or a dot product
  • the mutations are random modifications of an individual, for example replacing one of its bits (or lists of bits) with another one

As a rule of thumb, individuals are considered to be made of 0 or 1 integers – or lists of 0s and 1s, or lists of lists of 0s and 1s… – because genetic algorithms are often used for integer optimization problems; these bits can also be the bitwise representation of the real data.
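To make these concepts more concrete, here is a minimal, standalone sketch (independent from the GeneticModel class; the function names and the mutation rate are purely illustrative) of what crossover and mutation can look like on lists of 0s and 1s:

import random

def crossover(parent_a, parent_b):
    # Mix two parents: each bit of the child comes from one parent or the other
    return [random.choice(pair) for pair in zip(parent_a, parent_b)]

def mutate(individual, rate=0.1):
    # Flip each bit with a small probability to introduce variability
    return [1 - bit if random.random() < rate else bit for bit in individual]

parent_a = [1, 0, 1, 1, 0, 0]
parent_b = [0, 1, 1, 0, 0, 1]
print(mutate(crossover(parent_a, parent_b)))  # e.g. [1, 1, 1, 0, 0, 0]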

The GeneticModel I propose here will try to maximize the score of the individuals in each generation. So if you are actually trying to minimize a function, you should return the negative of the value!

Depending on the particular case you are working on, you need to give the model specific breeding, scoring (or fitness) and mutating functions. Some default ones are hardcoded, but they only have basic behaviors and probably won’t apply to your specific model. Although it is not mandatory, you can also define encoding and decoding functions to prepare data or output it in a more readable way.

Your model can either know the best score to reach, and thus stop as soon as one individual matches or beats that value; or it can work blindfolded for a given number of generations and eventually output the best individuals after all this breeding. This means that if you know what the best score is, you should give it to the model so it can stop the search early…

A first example: the Adventurer’s Knapsack problem

Genetic algorithms have proven to give great results for many optimization problems, especially the ones in combinatorial optimization like the Knapsack problem. The question here is: how to best pick items from a set to fill a load-limited bag?

Let’s imagine an adventurer has finally slain the great dragon in the gloomy cave. He is now in the treasure room and proudly loots mountains of gold. He eventually finds himself in front of a chest that contains 6 gems. Each gem has a specific value and a specific weight. The problem is: the adventurer only has a tiny pouch left (it can contain 16 grams maximum… it’s really tiny!). So he has to decide which gems to take back home and cannot simply bag them all.

We can sum up the optimization problem in the following form:

  • there are N = 6 different gems
  • each gem i has a value v_i and a weight w_i:

    Gem               Value   Weight (g)
    Green Triangle      3        1
    Yellow Square       4        4
    Gray Pear           6        5
    Blue Pentagon       8        6
    Red Hexagon        15        8
    Purple Octagon     20       25

  • the adventurer’s bag can only contain up to maxW = 16g

So, what gems should he choose to get the highest total value V in his bag (defined as the sum of each v_i in the bag), without the total weight W (defined as the sum of each w_i in the bag) exceeding the maximum weight maxW?

Schematic representation of the Adventurer’s Knapsack problem

We are searching for solutions that are N-long arrays of 0s and 1s: the i-th integer in the array represents whether the adventurer picked the i-th gem or not.

For example, if the adventurer decides to (stupidly) take only the Green Triangle and the Yellow Square, then the matching “individual” (or solution) in our genetic model would be: [1, 1, 0, 0, 0, 0]. This solution is said to be “feasible” (it obeys the problem constraints) since the two gems sum up to a total weight of w_0 + w_1 = 1 + 4 = 5g, which is less than the maxW = 16g maximum load.

But, of course, this solution is bad (actually, the only worse solution is taking only the Green Triangle). Let’s create a model that inherits from the GeneticModel class to help us solve this problem.

First, we will import the necessary libraries and define the problem data (the gem values and weights, and the maximum weight the pouch can carry):
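The snippet below is a sketch of what this setup can look like; the variable names values, weights, maxW and N are my own choices for this walkthrough, not necessarily the ones used in the actual project:

import numpy as np

# Problem data: one entry per gem, in the same order as the table above
values = np.array([3, 4, 6, 8, 15, 20])    # value of each gem
weights = np.array([1, 4, 5, 6, 8, 25])    # weight of each gem, in grams
maxW = 16                                  # maximum load of the pouch, in grams
N = len(values)                            # number of gems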

Next, we create our encoding function. This is the method that is called by a GeneticModel instance to initialize the population. Here, we take the easy road and consider that, at first, the adventurer only picks the Green Triangle. So every individual is a copy of the array: [1, 0, 0, 0, 0, 0].
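It could look like this (continuing the previous snippet; the exact signature GeneticModel expects for its encoder is an assumption on my part):

def encode(population_size, individual_size):
    # Worst possible start: every individual only picks the Green Triangle
    individual = np.zeros(individual_size, dtype=int)
    individual[0] = 1
    return np.array([individual.copy() for _ in range(population_size)])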

The fitness function returns the current value V of the bag, i.e. the sum of the values of all the gems that are currently in it:
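Continuing the same script, a minimal version of this scorer could be:

def score(individual):
    # Fitness = total value V of the gems currently in the bag
    return np.sum(individual * values)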

For the breeding function (that mixes up two parents to make a new child), we will once again do something very simple and pick, at random, one or the other parent. So we are basically duplicating one of the elite individuals.
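A sketch of such a breeder (the two-parent signature is an assumption):

def breed(parent_a, parent_b):
    # The child is simply a copy of one of the two parents, chosen at random
    return np.copy(parent_a if np.random.rand() < 0.5 else parent_b)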

Of course, in order for our population to change from one generation to the other, we now need to add mutations. Here is our mutator:
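A possible version of it (again, the single-individual signature is an assumption on my part):

def mutate(individual):
    # Flip one randomly chosen spot, and retry until the mutated solution
    # is feasible (its total weight does not exceed maxW)
    mutated = np.copy(individual)
    spot = np.random.randint(len(mutated))
    mutated[spot] = 1 - mutated[spot]
    while np.sum(mutated * weights) > maxW:
        mutated = np.copy(individual)
        spot = np.random.randint(len(mutated))
        mutated[spot] = 1 - mutated[spot]
    return mutated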

It randomly selects one spot and switches it on or off. The thing to notice is that we have to check for the feasibility of the solution: the while loop ensures that the mutated solution is feasible by checking that the sum of the weights doesn’t exceed maxW.

To run our brand new model, simply use the run method and pass it the number of individuals in the population (identical at each generation), the size of one individual (meaning the number of bits in its bitwise representation), the maximum number of generations to evolve for and the model-specific functions:
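For instance, something along these lines – the import path and the keyword names of run() are illustrative, the real signature may differ:

from genetic_model import GeneticModel  # hypothetical import path for the class

model = GeneticModel()
model.run(
    population_size=20,    # number of individuals in each generation
    individual_size=N,     # number of bits per individual (one per gem)
    max_generations=10,    # maximum number of generations to evolve for
    encoder=encode,
    scorer=score,
    breeder=breed,
    mutator=mutate,
)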

This problem is not hard to solve for a computer: even if we intentionally forced the initial population to be the worst possible, it only takes the genetic algorithm a few iterations to converge towards the optimum:

Starting evolution.
===================

 Iteration 0/10 (0 %):
 ---------------------
 Current best individual (score = 3.000):
 "[1 0 0 0 0 0]"

 Iteration 1/10 (10 %):
 ----------------------
 Current best individual (score = 18.000):
 "[1 0 0 0 1 0]"

 Iteration 2/10 (20 %):
 ----------------------
 Current best individual (score = 26.000):
 "[1 0 0 1 1 0]"

 Iteration 3/10 (30 %):
 ----------------------
 Current best individual (score = 26.000):
 "[1 0 0 1 1 0]"

 Iteration 4/10 (40 %):
 ----------------------
 Current best individual (score = 26.000):
 "[1 0 0 1 1 0]"

 Iteration 5/10 (50 %):
 ----------------------
 Current best individual (score = 26.000):
 "[1 0 0 1 1 0]"

 Iteration 6/10 (60 %):
 ----------------------
 Current best individual (score = 26.000):
 "[1 0 0 1 1 0]"

 Iteration 7/10 (70 %):
 ----------------------
 Current best individual (score = 26.000):
 "[1 0 0 1 1 0]"

 Iteration 8/10 (80 %):
 ----------------------
 Current best individual (score = 26.000):
 "[1 0 0 1 1 0]"

 Iteration 9/10 (90 %):
 ----------------------
 Current best individual (score = 26.000):
 "[1 0 0 1 1 0]"

Finished evolution.
===================

Final population:
5 best individuals:
. score = 26.0: "[1 0 0 1 1 0]"
. score = 26.0: "[1 0 0 1 1 0]"
. score = 24.0: "[1 0 1 0 1 0]"
. score = 23.0: "[1 1 0 0 1 0]"
. score = 23.0: "[1 1 0 0 1 0]"

(Evolution time: 0:00:00.006541)

By plotting the total value V of all the feasible combinations, we can check that this is, in fact, the best solution for our problem:
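For a problem this small we can also brute-force the check: the sketch below (continuing the same script, with the values, weights and maxW defined earlier) enumerates all 2^N combinations, keeps the feasible ones and looks up the best total value directly instead of plotting it:

from itertools import product

best_value, best_pick = 0, None
for pick in product([0, 1], repeat=N):
    pick = np.array(pick)
    if np.sum(pick * weights) <= maxW:    # keep only feasible solutions
        value = np.sum(pick * values)
        if value > best_value:
            best_value, best_pick = value, pick
print(best_pick, best_value)  # [1 0 0 1 1 0] 26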

Plot of the total value V of each feasible solution: we see that [1,0,0,1,1,0] is indeed the best possible pick

A second example: the “string reconstructor”

To showcase a bit more how the GeneticModel class can be used, let’s now create a model that reconstructs a reference string while starting off with only a set of random characters.

We only allow strings that contain characters with an ASCII value between 32 and 122; this corresponds to alphanumeric characters, spaces, punctuation or one of these special characters: #$%&*+-_=(), {}[]@<>/, \`^.

Let’s first define our reference string (this is the result the model will try to match) and the corresponding encoding and decoding functions:
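A possible version of these three elements (the names REFERENCE, encode_string and decode_string are my own, illustrative choices):

import numpy as np

REFERENCE = "Hello world!"

def encode_string(text):
    # Each character becomes the 8-bit binary representation of its ASCII value
    return np.array([[int(b) for b in format(ord(c), '08b')] for c in text])

def decode_string(bits):
    # Each 8-bit block is converted back into its character
    return "".join(chr(int("".join(str(b) for b in row), 2)) for row in bits)

assert decode_string(encode_string(REFERENCE)) == REFERENCE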

Here, we set our reference and then write two functions to go from a string to its 8-bit bitwise representation (through the ASCII values of the characters) or vice versa. The transformation acts in the following way:

Transformation of a character into its 8-bit bitwise representation

Thus, a set of characters is converted into an array of lists (or a multi-dimensional NumPy array):

Transformation of a set of characters into their array representation

In the end, our population is an array of individuals like this, so it is an array of arrays of arrays of 0s and 1s!

Now, our score is the relative number of correct letters (i.e. the percentage of letters that are identical in a given spot between an individual and the reference string). So, our scorer can be defined as:
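Continuing the same script, it could be written as:

REFERENCE_BITS = encode_string(REFERENCE)

def score(individual):
    # Fraction of characters that exactly match the reference at the same spot
    matches = [np.array_equal(c, r) for c, r in zip(individual, REFERENCE_BITS)]
    return sum(matches) / len(REFERENCE_BITS)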

Therefore, we know that the best possible score is 1. We will provide it to our model afterwards to hopefully escape unnecessary generations.

To breed a new individual from two parents, we will randomly pick one letter from one or the other parent for each available spot. Here is our breeder function:
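A sketch of it (the two-parent signature is assumed):

def breed(parent_a, parent_b):
    # For each character spot, take the letter of one parent or the other at random
    child = np.copy(parent_a)
    for i in range(len(child)):
        if np.random.rand() < 0.5:
            child[i] = parent_b[i]
    return child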

And finally, we want to simulate the mutation process. To do so, we will choose a spot at random and replace whatever character the individual currently has in this location by another random one. This corresponds to the following mutator:
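It could look like the sketch below, which produces a whole batch of mutated individuals at once; the char_to_bits helper is a hypothetical convenience function, and the (base, size) signature is an assumption based on how the function is described just after:

def char_to_bits(c):
    # 8-bit binary representation of a single character
    return np.array([int(b) for b in format(ord(c), '08b')])

def mutate(base, size):
    # Produce `size` mutated individuals: each one is a copy of a randomly chosen
    # base individual where one character spot is replaced by a random allowed
    # character (ASCII value between 32 and 122)
    mutated = []
    for _ in range(size):
        individual = np.copy(base[np.random.randint(len(base))])
        spot = np.random.randint(len(individual))
        individual[spot] = char_to_bits(chr(np.random.randint(32, 123)))
        mutated.append(individual)
    return np.array(mutated)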

This function produces “size” mutated individuals from an array of possible bases (“base”).

Now, we just have to run a GeneticModel instance directly using the run() method and give it all the useful data:
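For instance, with illustrative keyword names again (the actual run() signature may differ):

from genetic_model import GeneticModel  # hypothetical import path for the class

model = GeneticModel()
model.run(
    population_size=100,              # number of individuals in each generation
    individual_size=len(REFERENCE),   # number of characters in one individual
    max_generations=1000,             # upper bound on the number of generations
    best_score=1.0,                   # known optimum: stop as soon as it is reached
    encoder=encode_string,
    decoder=decode_string,
    scorer=score,
    breeder=breed,
    mutator=mutate,
)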

If we run this script, we can get an output with information like this (the random initialization and evolution aside):

Current best individual (score = 0.250):
"]FlQoiw@G221"
Current best individual (score = 0.667):
"+ello5worlE@"
Current best individual (score = 0.833):
"Hello7world@"
Reached best score (1.0) with individual:
"Hello world!".

We see that our model selected individuals that were closer and closer to the reference string until it found one with the highest possible score (all letters match the reference) and stopped its evolution.

We can observe how the model progresses on a sentence a bit longer – here we plot the score of the best individual at each iteration and the corresponding string:

Evolution of the genetic model searching for the string: “Here is a longer sentence to discover :)”

Isn’t it great how the algorithm gradually discovers on its own the characters to re-create the reference string?

Known Issues & Limitations of Genetic Algorithms

This GeneticModel-derived model works better on short reference strings, since there are fewer characters to find. When the reference is long, you have to allow for more generations and/or use a larger population. Still, if it is really long, you may not find an exact match but instead reach the maximum generations count and end up with a list of best results that are (hopefully) pretty good but not perfect.

For example, if our reference string is: “This is a great sentence, with lots of words and all. But it is a really long reference to analyze.”, then we might get a result such as this:

 Current best individual (score = 0.051):
 "s(qIS^&-aT7uU4Gas@kNGZ8jeLCiGe*tc"\iW6a1I-U-0lHc*g5Z\vq7TlI nX,mh_@531erWqF33;fecS:p!RKQ#nQakk:75m'"
 Current best individual (score = 0.263):
 "\h;AK8sna27XXavEsnnte]co,L6iGLNta=piJg;wIrUKiDhb \(8\ brt.itOXRVa_@$9pe3 Y,S  reZ&:fU2.F*#;a5Dle5v."
 Current best individual (score = 0.343):
 "sh=AS8s>a2;!aavEsnnte"co,_Cit#NlaWsiJgUwIrUKiDnb I58\ Nut.itOXRVao@5'seq "oSg reZ&:eU2.)Tn;a5Dle5v."
 Current best individual (score = 0.414):
 "sh;AS<sja,;uaaqEsnnte'co,LCit#NlaWsi^gUw@rUK0Ynb '5M\ but6it XRMa3r$aPlq "ong reZ9\eUc.FTo;a5Dle5'."
 Current best individual (score = 0.495):
 "Th;[Sisja,guYat0snnte'ce,LCit# laWsi^gUw@rUK0Ynb '5M\ but.it XcMa3reaRl, "ong reB9\eUc.FTo a5Dle5N."
 Current best individual (score = 0.576):
 "Thi[Sisja,guYatEsnntevce,LCit# laWsu^gUworUF0Ynb '5I\ but.it $sMa3reallq "ong ref9Be1ce To anDlekN."
 Current best individual (score = 0.616):
 "Thi[ isja,gH%atRsnnteKce,LCith laWs;^gUworUF0[nb '5M\ but[it isMaPreall, "ong ref9Be1ce To anDleke."
 Current best individual (score = 0.646):
 "Thi[ isja,guYatRsnnteKce,0Cith lots;^gUworUF0#nU '5V\ but[it isMaPreall, "ong ref9Be1ce To anDlyke."
 Current best individual (score = 0.717):
 "Thi[ isja,gueat snnteKce,fwith lots(^fUworUFKanU '5+. vut[it israPreall, ]ong ref9Be1ce To analyhe."
 Current best individual (score = 0.758):
 "Thi[ iska,g8eat snnte_ce,Gwith lots ^fUworUF anu a5&. vut[it isAaPreall, ]ong ref9Bence ;o analyhe."
 Current best individual (score = 0.778):
 "Thi[ iska*g8eat snnte_ce,bwith lots [fUworH) anU a5&. vut it isAaPreall, ]ong ref9rence ;o analyhe."
 Current best individual (score = 0.798):
 "Thi[ iska,great snnte_ce,Gwith lots [fUworHs anu a5&. vut it isAaPreall, ]ong ref9rence 1o analyhe."
 Current best individual (score = 0.808):
 "Thi[ iska,great sente_ce,Xwith lots [fUworHs anu a5&. vut it isAaPreall, ]ong ref9rence 1o analyhe."
 Current best individual (score = 0.838):
 "Thi[ iska,great sente_ce,Xwith lots ofUwords anu a5l. vut it isRaPreall, Yong ref9rence 1o analyhe."
 Current best individual (score = 0.869):
 "This iska,great sentence,Xwith lots ofUwords anu a5l. vut it is aYreall, Yong ref9rence 1o analyhe."
 Current best individual (score = 0.879):
 "This iska,great sentence,Xwith lots ofUwords anu a5l. vut it is a?reall, Yong reference 1o analyhe."
 Current best individual (score = 0.879):
 "This iska,great sentence,Xwith lots ofUwords anu a5l. vut it is a?reall, Yong reference 1o analyhe."
 Current best individual (score = 0.889):
 "This is a,great sentence,Xwith lots ofUwords anu a5l. vut it is a?reall, "ong reference 1o analyhe."
 Current best individual (score = 0.889):
 "This is a,great sentence,Xwith lots ofUwords anu a5l. vut it is a?reall, "ong reference 1o analyhe."
 Current best individual (score = 0.889):
 "This is a,great sentence,Xwith lots ofUwords anu a5l. vut it is a?reall, "ong reference 1o analyhe."

Best individuals:
['This is a,great sentence,Xwith lots ofUwords anu a5l. vut it is a?reall, "ong reference 1o analyhe.',
 'This is a,great sentence,Xwith lots ofUwords anu a5l. \\ut it is a?reall, "ong reference 1o analyhe.',
 'This is a,great sentence,Xwith lots ofUwords anu a5l. vut it is a?reall, "ong reference 1o analyie.']

Besides the fact that it takes way longer to compute, we see that the genetic algorithm did not manage to get an exact match. At the end, it displays the three best candidates: those strings are close to the reference but they are not completely right.

We can also note two things:

  • the best candidates show us that the population might contain duplicates: we don’t check for uniqueness when breeding or mutating our individuals
  • despite introducing variability with the “lucky picks” and the mutations, the algorithm might stay stuck on an optimum for a while (for example, it got a good individual with a 0.889 score and kept it during several iterations)

So, as we have seen, genetic algorithms are fit for combinatorial optimization problems like the Knapsack problem.

Moreover, in machine learning, genetic algorithms can be a good alternative to classic gradient descent when trying to fine-tune a neural network – they sometimes converge faster and give better results. If you want to learn more about this, you can take a look at this great YouTube video by Siraj Raval.

(A quick sidenote: I discovered Siraj’s YouTube channel recently and I really encourage you to take a look at it if you’re into AI – he does great stuff on lots of topics and goes from the theory to the real implementation, which is very cool I think!)

However, there are limitations. In particular (as detailed on the Wikipedia page):

  • the fitness function can be complex and require some computing time
  • genetic algorithms don’t scale well: the larger your population, the more time and power running the algorithm will take
  • they tend to converge and stick with local optima, ignoring the big picture that may present better candidates

Also, if your problem has a true/false kind of evaluation for the individuals (meaning the fitness function gives a binary score and not a continuous result), a genetic algorithm is not the best way to go: you will get the same results with a random search.

Variants, improvements, future developments…

Our examples show something about genetic algorithms: they are usually great at finding solutions that are globally okay, but they have trouble going from those to the absolute optimum. Other optimization methods like hill climbing can be combined with genetic algorithms to improve this local search. In other words, alternating between a genetic algorithm and a local search algorithm is sometimes an efficient way to solve an optimization problem.

Adaptive genetic algorithms are another possible improvement. The idea is to adjust the rate of crossover and mutation during the evolution, instead of using fixed values. This may give a better convergence to the algorithm, while maintaining variability in the population.
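As a tiny illustration of the idea (the schedule and the constants below are purely illustrative), the mutation rate could simply decrease as the generations go by, so that the search explores a lot at first and refines its candidates at the end:

def mutation_rate(generation, max_generations, start=0.5, end=0.01):
    # Linearly decrease the mutation rate over the course of the evolution
    progress = generation / max_generations
    return start + (end - start) * progress

With such a schedule, the model would mutate a large share of the population in the first generations and only a few individuals towards the end.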

Writing a better mutation function could also be nice. Here, we replace one character at random with another that matches the condition (an ASCII value between 32 and 122). It would be much more clever to only try and switch the characters that are wrong, and perhaps even look at their neighbors to guess what character could be put in this spot… For example, we could write up the following improved mutator:
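Here is a sketch of such a mutator for the string model; it reuses the REFERENCE_BITS array and the char_to_bits helper assumed earlier, and it peeks at the reference to find the wrong spots, which is only acceptable because the scorer already relies on it:

def smart_mutate(base, size):
    # Only re-draw characters that do not match the reference yet
    mutated = []
    for _ in range(size):
        individual = np.copy(base[np.random.randint(len(base))])
        wrong_spots = [i for i in range(len(individual))
                       if not np.array_equal(individual[i], REFERENCE_BITS[i])]
        if wrong_spots:
            spot = wrong_spots[np.random.randint(len(wrong_spots))]
            individual[spot] = char_to_bits(chr(np.random.randint(32, 123)))
        mutated.append(individual)
    return np.array(mutated)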

This new mutator allows a model to discover the (quite long) sentence: “Lorem ipsum dolor sit amet, consectetur adipiscing elit. Sed non risus. Suspendisse lectus tortor, dignissim sit amet, adipiscing nec, ultricies sed, dolor.” in only 800 generations and 1’42” or so!

We notice that this mutator is less prone to “local optimum sticking” – compared to the previous mutator, this one gives a model that doesn’t keep the same elite for too long.


The implementation I offer here is far from perfect and quite limited. Even though your model – and therefore the problem it solves – is truly defined by your functions (so you have the power here!), the evolution rules of the GeneticModel are simple and only rely on breeding and mutation.

It would be possible to give the user better control over the crossover and mutation rates, the population characteristics and the data formatting.

Also, an optimization problem is often defined by the function to optimize (here, the fitness function) and the solutions domain (the constraints on the possible individuals). In its current form, the GeneticModel doesn’t provide an easy constraints-definition procedure. You sort of have to do it in the encoding, the breeding and the mutating functions, which is not ideal.

References
  1. Wikimedia Foundation, “Genetic algorithm” (https://en.wikipedia.org/wiki/Genetic_algorithm), August 2018. [Online; last access 11-September-2018].
  2. Wikimedia Foundation, “Knapsack problem” (https://en.wikipedia.org/wiki/Knapsack_problem), August 2018. [Online; last access 11-September-2018].
  3. L. Nicolle, “Was Darwin a Great Computer Scientist?” (https://blog.sicara.com/getting-started-genetic-algorithms-python-tutorial-81ffa1dd72f9), August 2017. [Online; last access 11-September-2018].
  4. Wikimedia Foundation, “Hill climbing” (https://en.wikipedia.org/wiki/Hill_climbing), August 2018. [Online; last access 11-September-2018].
     
