Generating random names using maths?

Wanna learn how to use probabilities and randomness to procedurally generate nice-sounding names? 🙂

This article is also available on Medium.

In a recent series of posts, I talked about finite state machines (FSMs) and how they can model entity behaviour. But state machines can also be used to generate data.

Recently, I used state machines to create a little name generator called rangen-name. (Ultimately, it will be part of a suite of random generators and procedural generation tools: rangen – even though for now it’s the only one 😉 ) It’s written in vanilla JavaScript and it’s available either on NPM or as a GitHub repository.

It produces really nice names that I think are quite credible! For example, it generated these:

Barbara Ovie
Jefferson Kerlyn
Torr Farl
Missy Brea
Albatros Riddym
Eunice Harmanlew
Paddie Crid
Sollie Peltmar
Quint Wort
Dawson Haddye

The tool can produce either a firstname (picked at random from a list of common firstnames), a lastname (produced thanks to a Markov chain, a specific kind of FSM) or a combination of the two to give you a full name, like in the examples above.

Note: you can pass it either a “male” or “female” parameter to specify the gender of the name you want to generate! 😉

TL;DR: how to get and use the rangen-name package!

If you want to try it out without installing anything, you can load the CDN version of the package and use it directly in an HTML page:
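
For instance, something along these lines should work (a minimal sketch: the unpkg URL and the rangenName global variable are assumptions of mine – check the GitHub page for the actual CDN snippet):

<!-- Load the lib from a CDN (assumed unpkg URL) and call the generateFullName() function -->
<script src="https://unpkg.com/@mpecheux/rangen-name"></script>
<script>
  // "rangenName" is an assumed global name exposed by the bundle
  console.log(rangenName.generateFullName());
</script>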

Or else if you’re working with Node, just:

  • download the package using npm or yarn:
npm install @mpecheux/rangen-name
# or: yarn add @mpecheux/rangen-name
  • import it in your project using the ES5 or ES6 import syntax:
ES5 (require syntax)
const rangenName = require("@mpecheux/rangen-name");

const name = rangenName.generateFullName();
console.log(name);
// Karleen Newe
ES6 (import syntax)
import rangenName from "@mpecheux/rangen-name";

const name = rangenName.generateFullName();
console.log(name);
// Vincent Bows

For more info on how to use the lib and the various parameters you can pass in, make sure to check the GitHub page!

About this lib…

Now – for those who are interested in learning more about how this package works, here is a little summary of the mathematical/modelling concepts it uses under the hood 🙂

Step 1: Getting a random firstname

That’s the easy part! The lib simply loads up a list of male and female firstnames and then picks one at random depending on the gender parameter you gave it (or, by default, it has a 50/50 chance of returning a male or female firstname).
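
To give a rough idea, this step boils down to something like the following sketch (the actual lists and option names in the lib differ):

// Simplified sketch of step 1 – not the lib’s actual code
const MALE_FIRSTNAMES = ["Jefferson", "Quint", "Dawson" /* … */];
const FEMALE_FIRSTNAMES = ["Barbara", "Missy", "Eunice" /* … */];

function randomFirstname(gender) {
  // no gender given: 50/50 chance of picking from either list
  if (!gender) gender = Math.random() < 0.5 ? "male" : "female";
  const list = gender === "male" ? MALE_FIRSTNAMES : FEMALE_FIRSTNAMES;
  return list[Math.floor(Math.random() * list.length)];
}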

Step 2: Generating a lastname using Markov chains

Markov chains are a specific kind of finite state machines. So, before we dive into Markov chains, let’s first recall how finite state machines work.

A quick reminder on FSMs

FSMs are a particular kind of task-switching structure that relies on the following concepts:

  • we have a given set of states interconnected by transitions
  • there is one current (or active) state
  • the transitions are triggered either regularly or by specific inputs
  • and they force the machine to switch from one state to another

Note: if you want to learn more about FSMs, make sure to check the linked articles – they’re available in text or video format 🙂

In my FSM tutorials, we were talking about deterministic FSMs because they were about video games and, in video games, you usually want to control the behaviour of your entities in a predictable way. There is, however, another type of FSM: stochastic state machines.

Suppose a state is connected to multiple other states and all those states are linked by a transition with the same trigger condition; then what happens when the trigger is run? Which state should you transition to?

In a deterministic FSM, that situation can’t happen. It’s forbidden. But in a stochastic FSM, you simply choose one of the connected states at random.

And such a stochastic system can model real-life events or “replay” real-life scenarios… especially if you control the randomness! Enter the Markov chain…

So, what are Markov chains?

Let’s say we want to model how Julia goes from one social network to another throughout her day. She likes to go on 3 networks: Facebook, Twitter and Instagram. But depending on the network she’s currently browsing, she’s more or less likely to switch to the others.

We can model this thanks to a stochastic state machine with 3 states (one for each network) and various transitions. But here, not all connections are as likely to happen: we need to define a specific probability distribution that represents how likely Julia is to go from the network she’s on (the current active state) to another (a connected state).

We can write down our specific state machine as a table of transition probabilities:

             Facebook   Twitter   Instagram
Facebook     0.8        0.2       0
Twitter      0.3        0.1       0.6
Instagram    0.3        0.4       0.3

In this table, the rows represent the current state and the columns represent the possible next states. The float in each cell is the probability Julia has of going from the current state to the given new state. You can see that the probabilities out of a given state always sum to 1: you necessarily go somewhere, even though that new state is picked at random.
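
In code, this table and the “pick the next state at random according to its probabilities” rule could look like the following toy sketch (illustrative only, not part of the lib):

// Transition probabilities for Julia’s browsing habits (each row sums to 1)
const TRANSITIONS = {
  Facebook:  { Facebook: 0.8, Twitter: 0.2, Instagram: 0.0 },
  Twitter:   { Facebook: 0.3, Twitter: 0.1, Instagram: 0.6 },
  Instagram: { Facebook: 0.3, Twitter: 0.4, Instagram: 0.3 },
};

// Draw a number in [0, 1) and walk the cumulative distribution to pick the next state
function nextState(current) {
  const r = Math.random();
  let cumulative = 0;
  for (const [state, probability] of Object.entries(TRANSITIONS[current])) {
    cumulative += probability;
    if (r < cumulative) return state;
  }
  return current; // fallback, only reachable because of floating-point rounding
}

// Simulate a few steps of Julia’s day
let network = "Facebook";
for (let i = 0; i < 5; i++) {
  network = nextState(network);
  console.log(network);
}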

But Markov chains are usually more readable as graphs:

Here, the label on each arrow indicates the probability for this transition to occur. We can see several things on this diagram:

  • some transitions are highly probable, like Julia staying on Facebook when she’s already there (because 0.8 is quite close to 1)
  • some transitions are equally probable: when on Instagram, Julia is just as likely to stay as she is to switch to Facebook
  • some transitions are impossible: Julia can’t go from Facebook to Instagram directly

For a more formal definition of Markov chains, we can take a look at the wiki page:

A Markov chain is a stochastic model describing a sequence of possible events in which the probability of each event depends only on the state attained in the previous event.

So a Markov chain basically depicts the evolution of a finite state machine where state transitions are not deterministic but rather depend on probabilities (they are stochastic), and where those probabilities follow a specific distribution.

They were originally conceptualised by the Russian mathematician Andrey Markov and they always verify the Markov property: this property roughly states that the system is “memoryless”, meaning you can always derive the next step of the chain just by looking at its current state (you don’t need the full history of the chain to know its future behaviour).

But sometimes, it’s interesting to have “a bit more memory” and implement an n-step Markov chain. In this case, the probability of going to a given state depends on your current state but also on the one you had one iteration before, two iterations before… and so on, up to n steps back.

Now – see how, before, we represented our 1-step Markov chain by a 2D table? We can extend this idea to n-step chains as well! An n-step Markov chain can be represented by an (n + 1)-D matrix where the first n dimensions represent the last n states of the chain, and the last dimension represents the transition to the next state.

Using a 2-step Markov chain for name generation

In the rangen-name package, I am using a 2-step Markov chain to generate names. I basically have 26+ states (one per letter, plus some special characters) and they are all linked by transitions with various probabilities. Those probabilities are stored in a 3-D matrix.
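
To make this more concrete, here is roughly what such a structure can look like as nested objects (a sketch with made-up numbers, not the lib’s actual data):

// 2-step chain: probabilities[prev2][prev1][next] = P(next letter | two previous letters)
// (illustrative values only)
const probabilities = {
  a: {
    n: { d: 0.3, n: 0.2, e: 0.5 }, // after "an", the next letter is d, n or e
    r: { t: 0.6, a: 0.4 },         // after "ar", the next letter is t or a
  },
  // … one entry per possible pair of letters
};

Given the two previous letters, the next one is then drawn from the matching row with the same cumulative-probability trick as in the Julia example above.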

But the question is: how did I populate this matrix?

I actually used a webcrawler to fetch a list of English surnames from the net and then read those to fill in my matrix cell by cell. It’s similar to the frequency analysis used in cryptography to break shift ciphers, where you study how often letter sequences appear to infer what a sequence could be: if you are working on French texts, for example, the sequence “an” is very frequent, while the sequence “gz” is pretty much impossible!

By going through the list and upping a counter for each sequence every time you encounter it, you end up with a probability matrix for letter sequences based on your reference. It’s important to understand that this reference will have a huge impact on the outputs, since it determines the possible transitions in your state machine.
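
Here is a small sketch of that counting-and-normalising pass (simplified, and not the lib’s actual code):

// Count how often each letter follows each pair of letters in the reference names
function buildCounts(referenceNames) {
  const counts = {};
  for (const name of referenceNames) {
    const word = name.toLowerCase();
    for (let i = 2; i < word.length; i++) {
      const prev2 = word[i - 2], prev1 = word[i - 1], next = word[i];
      counts[prev2] = counts[prev2] || {};
      counts[prev2][prev1] = counts[prev2][prev1] || {};
      counts[prev2][prev1][next] = (counts[prev2][prev1][next] || 0) + 1;
    }
  }
  return counts;
}

// Turn the raw counts into probabilities (each [prev2][prev1] row then sums to 1)
function normalize(counts) {
  for (const prev2 of Object.keys(counts)) {
    for (const prev1 of Object.keys(counts[prev2])) {
      const row = counts[prev2][prev1];
      const total = Object.values(row).reduce((a, b) => a + b, 0);
      for (const next of Object.keys(row)) row[next] /= total;
    }
  }
  return counts;
}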

The next trick is to also remember all the possible word starts so that you have a little “starter” for your Markov chain. Remember that we have a 2-step chain, so we need a history of two letters to infer the next letter! This means that when you read through your reference list, you also remember the 3-letter start of each item.

Then, your generator can simply pick one of those “starters” at random and use it to initialise your Markov chain.

Note: you could go even further and also have a specific distribution for those starters but, here, I just went for a plain old uniform distribution 😉

Finally, you should also store the possible word ends so that, when your chain “guesses” a new probable letter, you can check if the name you’ve generated up to that point could possibly stop there.
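
Put together, the generation loop roughly goes like this (a simplified sketch of the idea – the stop rule, lengths and helper names are assumptions, not the lib’s exact implementation):

// Sample a key from a { value: probability } row, as in the Julia example above
function sampleFrom(row) {
  const r = Math.random();
  let cumulative = 0;
  for (const [value, probability] of Object.entries(row)) {
    cumulative += probability;
    if (r < cumulative) return value;
  }
  return Object.keys(row)[0]; // fallback for floating-point rounding
}

// probabilities, starters and endings are built from the reference list as described above
function generateLastname(probabilities, starters, endings, maxLength = 10) {
  // pick a 3-letter “starter” uniformly at random
  let name = starters[Math.floor(Math.random() * starters.length)];
  while (name.length < maxLength) {
    const prev2 = name[name.length - 2], prev1 = name[name.length - 1];
    const row = (probabilities[prev2] || {})[prev1];
    if (!row) break; // no known transition from these two letters: stop here
    name += sampleFrom(row);
    // if the current ending is a plausible word end, maybe stop (arbitrary 50% chance)
    if (endings.includes(name.slice(-2)) && Math.random() < 0.5) break;
  }
  return name[0].toUpperCase() + name.slice(1);
}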

Conclusion

The rangen-name NPM package is just a simple name generator, but it already works quite well! It relies on basic probabilities that are formalised as a Markov chain: a specific type of stochastic finite state machine where transitions are picked according to a specific probability distribution.

This distribution can be determined by hand; in my case, I constructed my probability matrix thanks to a list of reference names that I crawled on the net and that I injected into my lib to get the most probable word starts, word ends and letter sequences.

This tool is yet another example of how mathematics can help generate data procedurally (for another case scenario, you can check out a recent article I wrote about using math noises for generating 2d heightmaps and 3d landscapes): by defining a basic system and a set of rules (here: our probabilities), you can “control the randomness” and create really credible data.

Feel free to test this tool and give me some feedback! Also, don’t hesitate to post comments with ideas of other tools you’d like to see in the rangen suite 🙂
