Tree in C

I recently came back to C after a long time (about 12+ years).

For one thing, pointers seem a lot less complicated than I remember; in fact, they’re pretty easy to understand. Besides this, I’ve found I like the simplicity of the language and its closeness to the machine.

I think what changed from my college days regarding pointers is a better understanding of the stack and exposure to languages like C# with reference and value types.

I’ve written a short program that creates a tree data structure and allows you to traverse it in three different ways:

Depth first pre-order
Depth first post-order
Breadth first

Depth first just means starting from one node and going as deep as possible until you find a leaf node (a node with no children), then backtracking. You can do this in an iterative way, but for me this is much simpler to understand using recursion.

The difference between pre-order and post-order is when you apply your operation to the tree node: before or after continuing the traversal of the tree. You can look at the example below.

Breadth first, on the other hand, is easier done in an iterative way. The key here is keeping a second data structure, a queue, to store the nodes. After you visit a node, printing its value, you enqueue its children; each of them is then visited and its own children enqueued in turn.

It’s all pretty simple stuff but I needed a refresher on this topic and also something simple to start writing C code again.

The code is on GitHub.

Here is the code of the main file, tree.c:

#include <stdio.h>
#include <stdlib.h>
#include "queue.c"

struct tree_node {
  int value;
  struct tree_node *parent;
  struct tree_node *left, *right;
};

typedef struct tree_node node;
typedef void (*node_func)(node *);

node * create_node(int, node *);
node * build_tree();
void breadth_first(node *);
void create_or_enqueue(q_node **, node *);
void depth_first_pre(node *, node_func);
void depth_first_post(node *, node_func);
void print_node(node *);
void free_node(node *);

int main() {
  node *root = build_tree();

  // depth_first_pre(root, print_node);
  depth_first_post(root, free_node);

  return 0;
}

node * create_node(int value, node *parent) {
  node *new = malloc(sizeof(node));
  new->value = value;
  new->parent = parent;
  new->left = NULL;
  new->right = NULL;

  return new;
}

/* builds a tree with a preset of data */
node * build_tree() {
  node *root = create_node(2, NULL);
    root->left = create_node(7, root);
      root->left->left = create_node(2, root->left);
      root->left->right = create_node(6, root->left);
        root->left->right->left = create_node(5, root->left->right);
        root->left->right->right = create_node(11, root->left->right);
    root->right = create_node(5, root);
      root->right->right = create_node(9, root->right);
        root->right->right->left = create_node(4, root->right->right);

  return root;
}

/* Pre-order Depth-first traversal */
void depth_first_pre(node *current, node_func func) {
  if (current == NULL) {
    return;
  }

  func(current);
  depth_first_pre(current->left, func);
  depth_first_pre(current->right, func);
}

/* Post-order Depth-first traversal */
void depth_first_post(node *current, node_func func) {
  if (current == NULL) {
    return;
  }

  depth_first_post(current->left, func);
  depth_first_post(current->right, func);
  func(current);
}


/* breadth-first traversal */
void breadth_first(node *current) {
  if (current == NULL) {
    return;
  }

  q_node *head = create_qnode(current);

  while (head != NULL) {
    q_node *q_node_ptr = dequeue(&head);
    node *node_ptr = q_node_ptr->value;
    printf("%d, ", node_ptr->value);
    q_node **head_ptr = &head;

    if (node_ptr->left != NULL) {
      create_or_enqueue(head_ptr, node_ptr->left);
    }

    if (node_ptr->right != NULL) {
      create_or_enqueue(head_ptr, node_ptr->right);
    }

    free(q_node_ptr);
  }
}

void create_or_enqueue(q_node **head, node *next_node) {
  if (*head == NULL) {
    *head = create_qnode(next_node);
  } else {
    enqueue(*head, create_qnode(next_node));
  }
}

void print_node(node *current) {
  printf("%d, ", current->value);
}

void free_node(node *current) {
  free(current);
}

Here is my simple implementation of a queue (keep in mind that I haven’t done any C programming for more than a decade) which I use for the breadth first algorithm:

#include <stdlib.h>

struct queue_node {
  void * value;
  struct queue_node *next;
};

typedef struct queue_node q_node;

q_node * create_qnode(void *);
void enqueue(q_node *, q_node *);
q_node * dequeue(q_node **);

q_node * create_qnode(void * value) {
  q_node *new = malloc(sizeof(q_node));
  new->value = value;
  new->next = NULL;

  return new;
}

void enqueue(q_node *head, q_node *new) {
  if (new == NULL || head == NULL) {
    return;
  }

  q_node *node_ptr = head;
  while (node_ptr->next != NULL) {
    node_ptr = node_ptr->next;
  }

  node_ptr->next = new;
}

q_node * dequeue(q_node **head) {
  if (*head == NULL) {
    return NULL;
  }

  q_node *old_head = *head;
  *head = (*head)->next;

  return old_head;
}

The function that builds the data uses the tree from the Wikipedia article on trees.

Public domain Tree diagram.

Here is the output from traversing the tree depth first pre-order:
2, 7, 2, 6, 5, 11, 5, 9, 4

And here is the output from traversing the tree breadth first:
2, 7, 5, 2, 6, 9, 5, 11, 4

If you have any suggestions to improve the code or see any mistakes don’t hesitate to point them out in the comments.

I will probably look into doing some basic compression in C next, and as a further project I might end up converting these programs to Rust for the experience of comparing the two languages. Hopefully I’ll get some insights into Rust at the same time.

A better algorithm to generate random names

In October I published a post about an Algorithm to generate random names. If you came here looking for a way to generate random names, I suggest you go read that post and skim over the source code, then come back here for the much better version.

The program worked in two parts: a script to generate probability tables and a class called RandomNameGenerator. The script was quickly thrown together; I wanted to save time since this was a small part of a larger project.

When it came time, months later, to add more features and correct some bugs I quickly regretted my decision to “quickly throw together” a script to generate the probability tables. The whole thing was hard to understand and extend. The version I was working with was more involved than what I had originally posted, and using a simple script wasn’t cutting it any more. I had fallen into the proverbial “working fast but taking much more time in the end” trap.

So I started the whole thing over using test-first BDD, OO principles and applying care to my work. In not much more time than it took me to write the first version I had a working second version with the features I wanted and I got rid of the bugs that were bugging (pun intended) me.

I put the whole thing on GitHub so you can go there for the source code and to get a working copy.

I have included some sample data to use to generate names. It’s under the /media directory and it uses names from Greek mythology (all taken from Wikipedia).

There is a small program called fix_sample.rb that will try to convert a sample file to the desired format. The sample is expected to be in the following format:

firstName secondName thirdName

That is, a space-delimited plain text file that starts and ends with a space.

The tests are found in the /spec directory.

A sample program, random_name_generator_test.rb provides a sample usage of the program for those who want to know how to call it.

Since this test program is very small, I’ll reproduce it here in its entirety:

require_relative 'random_name_generator'

generator ="media/greek_myth_sample")

puts "Generating 40 names"
40.times {puts generator.generate}

This test program will generate 40 random names and output them to the console; that’s all there is to it.

Here is a sample output:


Take care.

Algorithm to generate random names

I recently wanted to create an algorithm that could generate random fantasy names. The goal was to have the algorithm produce a new random name each time it was called.

Returning a random “normal” name is pretty easy (ie: John, Robert, Stacy). You just take a big list of names (like from the US census) and draw one out at random.

If you would rather create a made-up name, like one you could find in a fantasy novel, video game or sci-fi movie, and you want it to be random, then you need to use an algorithm.

Here is a sample of random male fantasy names generated with the technique I will describe:



Stringing random letters together would give you something like: “dkwidfjwz”, which isn’t very convincing.

The proper technique consists of parsing a text file to create probability tables. These indicate the probability of a letter following another letter: something that says the letter A has a 50% chance of being followed by E and a 50% chance of being followed by F.

While you could take existing letter probabilities for the English language, it is very important that you make your own probability tables from your own sample data; otherwise you won’t get good names in your output. This is because standard probability tables are built from common words rather than names.

Building your probability tables

This simple script was rapidly thrown together and won’t win me any awards for coding style, but it will show you how to generate a two-letter pair probability table.

# simple script to generate probability tables from a file

# this script takes two parameters, the first is the input file
# and the second is the output file

require "yaml"

input_file = ARGV[0]
output_file = ARGV[1]

# treat all letters as well as spaces
chars = ('a'..'z').to_a.push(' ')

last_char_read = " "
frequencies = Hash.new(0)

# parse the file to read letter pair frequencies, "r") do |file|
	while char = file.getc
		if ('a'..'z').to_a.include?(char.downcase)
			if chars.include?(last_char_read.downcase)
				frequencies[last_char_read.downcase + char.downcase] += 1
			end
		end
		last_char_read = char
	end
end

# get the total count of each single letter
letter_total_count = Hash.new(0)
frequencies.each {|key, value| letter_total_count[key[0]] += value}
letter_total_count[frequencies.keys.last[1]] += 1

# the final hash will contain our, ahem, final result
final = {}
frequencies.sort.each {|key, value| final[key] = (value.to_f / letter_total_count[key[0]])}

# make a running total
chars.each do |first_letter|
	running_total = 0.0

	('a'..'z').each do |second_letter|
		if final.key? first_letter + second_letter
			original_value = final[first_letter + second_letter]
			final[first_letter + second_letter] += running_total
			running_total += original_value
		end
	end
end

# output to file for later use, "w") {|file| file.puts YAML::dump(final)}

Here is a partial sample of what is being generated in the output file:

ba: 0.21774193548387097
be: 0.5403225806451613
bi: 0.5887096774193548
bl: 0.6129032258064515
bo: 0.814516129032258
br: 0.9274193548387096
bu: 0.9919354838709677
by: 1.0

As we will see in the Improvements section, this basic script can be improved upon to get better results.

Generating the random name

To generate a random name from such a file, you basically generate a random number from 0.0 to 1.0 and get the corresponding letter.

Here is how you could generate a random name:

require "yaml"

class RandomNameGenerator
	def initialize
		@random_number_generator =
		@probability_tables = YAML.load_file('your_data_file_name_here')
	end

	def generate
		name = get_next_letter(' ')
		@random_number_generator.rand(4...10).times do
			next_letter = get_next_letter(name[name.length - 1])
			while next_letter == name[name.length - 1] && next_letter == name[name.length - 2]
				next_letter = get_next_letter(name[name.length - 1])
			end
			name += next_letter
		end

		return name
	end

	def get_next_letter(current_letter)
		random_number = @random_number_generator.rand(0.0..1.0)
		@probability_tables.select {|k, v| k[0] == current_letter &&
											v >= random_number}.first[0][1]
	end
end

If you are using Ruby 1.9.2 or above, use the new Random class and reuse an instantiated object rather than creating a new one for each “roll”. This way your results will be much more random. (Detailed explanation here)

Sample data

To get better results it is important to have good sample data. I can’t stress that enough. You need to give some thought about what to put in the file you will parse.

Consider the following points:

  • The source of your data: If you create your sample data using names from Lord of the Rings, expect to get names similar to those in Lord of the Rings. Your data should have a single or a very limited number of themes.
  • Sample size: A good sample size is important to get variability.
  • Sample representativeness: Your data sample should be representative. Your data should include rare letter combinations, but these should be infrequent. Think about how frequently you want certain letter combinations to come up and reflect this in your sample data.
  • Many samples: Be sure to use different samples for men, women, places or different themes.


My current program is more complex than what I have shown you here, but I have a feeling this post is already far too long. I should really keep my posts shorter if I want people to read them. For brevity I didn’t go into all the gory details, but hopefully what you have here will get you started on the correct path.

Here’s what to do next:

  • Generate both two-letter pairs and three-letter triplets in your probability tables.
  • Favor using triplets in your random generator but have pairs as a fallback when needed.
  • Instead of using a random value for name length, use a normal, Cauchy or Gaussian distribution to get name length. You could also build an average name length from your sample data.
  • Prevent long repetition of a single letter in output results (ie: Maeeeel). This is much less of a problem if you are using triplets.

Hope you have fun with all of this!

UPDATE: I have created a better version of this algorithm. You can find it here on GitHub and I also made another post, A better algorithm to generate random names that contains explanations. This post still contains useful complementary information that should be read first.

Using Gaussian blurring on heightmaps

In my previous blog post, I talked about a personal project where I am generating heightmaps. I ended that post writing about how blurring has been important to me in generating the maps. Here is more information on the blurring.

First, I found out that rather than generating a lot of points, I could generate far fewer points and apply heavier blurring to get a satisfactory result much more rapidly.

For my blurring, I ended up using a Gaussian blur (sometimes called a Gaussian filter). This type of filter is often used with heightmaps as it gives good, natural-looking results, something very similar to erosion.

A Gaussian filter does this by giving more weight to the pixel being blurred, rather than giving equal weight to the center pixel and each of its neighbors as is the case with the mean filter.

The problem

The problem I was facing was that my heightmaps needed to be blurred. Rather than create a blurring class that lets me specify various degrees of blurring, work on images or even choose between different types of blur filters (ie: Gaussian, mean, etc.), I wrote the simplest thing that could solve my problem.

Applying YAGNI here allowed me to complete this quickly and get on with the rest of the project. Years earlier I would have started a cool blurring library which I wouldn’t have had the time to complete.

The solution

The Gaussian filter is a filter where the values of the kernel are calculated using the Gaussian function, producing values that fall in a normal distribution. Like other filters (ie: the mean filter), the Gaussian filter works with a kernel, which is a matrix.

At its simplest, a non-Gaussian kernel could look something like this:

0.1, 0.1, 0.1,
0.1, 0.1, 0.1,
0.1, 0.1, 0.1,

We use the kernel to recalculate the value of each pixel of an image (or more simply of an array) by multiplying its current value with the center position in the kernel, then multiplying the value of the pixel above it with the value above the center in the kernel, and so on.

After we have multiplied all nine values (for our 3 x 3 kernel), we add them up and store the sum in a single cell in a second array. We must do this for all points in our image or array.

When doing this, don’t forget to work with a secondary array so as not to modify the original. This is because you will be reading the original again when calculating the other pixels surrounding the first one you calculated.
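As a minimal sketch of this idea (using a uniform mean kernel rather than a Gaussian one, to keep the numbers simple), blurring the interior cells of a small array looks like this:

```ruby
# Sketch: apply a 3 x 3 mean kernel to the interior cells of a small
# square array stored as a flat 1d array. Results go into a copy so the
# original values are still available while neighbors are being read.
def blur_interior(array, size)
  weight = 1.0 / 9  # each of the nine kernel cells gets equal weight
  result = array.dup

  (1...size - 1).each do |y|
    (1...size - 1).each do |x|
      sum = 0.0
      (-1..1).each do |ky|
        (-1..1).each do |kx|
          sum += weight * array[(x + kx) + (y + ky) * size]
        end
      end
      result[x + y * size] = sum.round
    end
  end

  result
end

spike = [0, 0, 0,
         0, 9, 0,
         0, 0, 0]
puts blur_interior(spike, 3).inspect  # => [0, 0, 0, 0, 1, 0, 0, 0, 0]
```

The lone spike of 9 gets averaged with its eight zero-valued neighbors, which is exactly the smoothing effect we want.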

For a Gaussian filter we would calculate the values of the kernel using a Gaussian distribution formula. I will not go into the details of the mathematical formula. You can find all the nitty gritty on the links at the beginning of the post or the references at the end.


Rather than create a 2d kernel, it is also possible to use a modified formula to create a 1d kernel. Running this 1d kernel twice, once horizontally and once vertically, produces a result equivalent to running the 2d kernel.

The difference is that running the 1d kernel twice is faster than running the 2d kernel once, as it requires fewer operations. A neat trick.

The degree of blurring (how heavily to blur) is controlled by changing the standard deviation in our Gaussian distribution formula.

Rather than calculate a new kernel every time with differing values, another trick is to apply the same kernel many times in succession to obtain heavier blurring. If this is parallelized, it can also lead to speed optimizations.

My implementation

For my implementation, I have used a precomputed 1d kernel I found here and run it both horizontally and vertically.

Precomputed 1d kernel: [0.006, 0.061, 0.242, 0.383, 0.242, 0.061, 0.006].

If I need light blurring I blur once, and if I need heavier blurring (my most common scenario) I blur twice in a row. I also applied the blurring on an array of ints and only created the image once the blurring was done, which is much simpler than working with an image file.

Fast and simple to implement.

An issue I had to solve while implementing the blur was what to do when the filter/kernel is working with values on the edge of the array.

In these cases there are many possible solutions, but I chose to use the value of the center pixel to fill in for the values of the non-existing pixels.

The other choice would have been to wrap around the edge of the array, or to use a predefined value like 0 or 255. Using the center pixel gave pretty good results.


For the following code note that:

1- I am using a 1d array to represent a 2d map, which is not necessary. If you prefer to use a 2d array, you should go with that.

2- The FilterOutOfBoundsSpecification which is in another file has been appended to the end of this code sample.

3- The code could be refactored as both compute_x_filtered_value and compute_y_filtered_value are almost identical and could be one function.

4- Even though the kernel is symmetrical, I store it in full:

@filter_kernel = [[-3, 0.006], [-2, 0.061], [-1, 0.242],
                  [0, 0.383], [1, 0.242], [2, 0.061], [3, 0.006]]

But I could save space by storing it this way:

@filter_kernel = [[0, 0.383], [1, 0.242], [2, 0.061], [3, 0.006]]

This would necessitate a small change to the code. I decided to keep the longer version of the kernel because I feel it’s more readable and easier to understand. Saving a few characters isn’t worth it if it complicates things.

5- For the sake of brevity I did not include the test code.

Enough preamble here is the code:

require_relative './filter_out_of_bounds_specification'

class GaussianFilter
	def initialize
		# the filter kernel value pairs. The first element in each pair
		# represents the distance from the central pixel and the second
		# element the weighted filter value
		@filter_kernel = [[-3, 0.006], [-2, 0.061], [-1, 0.242],
						[0, 0.383], [1, 0.242], [2, 0.061], [3, 0.006]]
	end

	# this function assumes the array represents a square 2d matrix
	def filter(array)
		@size = Math.sqrt(array.length).to_i

		intermediate_filtered_array =, 0)
		filtered_array =, 0)

		# filter horizontally
		(0...@size).each do |y|
			# @size is used since we assume the array is a square 2d matrix
			(0...@size).each do |x|
				intermediate_filtered_array[x + y * @size] = compute_x_filtered_value(array, x, y)
			end
		end

		# filter vertically
		(0...@size).each do |x|
			# @size is used since we assume the array is a square 2d matrix
			(0...@size).each do |y|
				filtered_array[x + y * @size] = compute_y_filtered_value(intermediate_filtered_array, x, y)
			end
		end

		return filtered_array
	end

	def compute_x_filtered_value(array, x, y)
		computed_value = 0.0

		@filter_kernel.each do |filter_pair|
			if FilterOutOfBoundsSpecification.satisfied_by?(x, @size, filter_pair[0])
				offset = 0
			else
				offset = filter_pair[0]
			end

			computed_value += filter_pair[1] * array[x + offset + y * @size]
		end

		return computed_value.round
	end

	def compute_y_filtered_value(array, x, y)
		computed_value = 0.0

		@filter_kernel.each do |filter_pair|
			if FilterOutOfBoundsSpecification.satisfied_by?(y, @size, filter_pair[0])
				offset = 0
			else
				offset = filter_pair[0]
			end

			computed_value += filter_pair[1] * array[x + (offset + y) * @size]
		end

		return computed_value.round
	end
end

class FilterOutOfBoundsSpecification
	def self.satisfied_by?(z, size_z, offset)
		return true if z + offset >= size_z
		return true if z + offset < 0

		return false
	end
end


Here are the sources that I have used to write my Gaussian filter and this blog post, in order of relevance to my project and this article.

My post does not talk about the math and science side of the Gaussian distribution or about image processing, as neither subject is a strength of mine. So I encourage you to read the following links for more info on these subjects.

Gaussian Smoothing

CS 384G: Image processing

Gaussian filter, or Gaussian blur

Wikipedia Gaussian filter

Wikipedia Gaussian blur