Image Transformations: The Sequel

A while back I wrote an article about a simple scripting language I created called ImageQuery. The project was a small proof-of-concept, and while it worked, it had numerous issues. The first issue was that it made some weird assumptions about how a scripting language should work – I won’t go into too much detail about this. The second issue was that it was written in C# – now, I love C# and the .NET framework, but it isn’t exactly the most ideal language to implement another scripting language in (and creating a language on top of .NET is a bit overkill for what I’m doing).

A rewrite in C++ was in order, and imquery was born.

imquery is a dynamically-typed interpreted language focused on image manipulation. Here’s a quick sample that implements the recursive Fibonacci algorithm:

func fib(n) {
  if n < 3 {
    return 1;
  }

  return fib(n - 1) + fib(n - 2);
}

# Calculate the 6th Fibonacci number and print it.

The syntax is similar to C albeit without types. This changes a bit when we start throwing images, inputs, and outputs into the equation:

in myInput = image();
out myOutput = image(myInput.w, myInput.h);
myOutput: {(color.r + color.g + color.b) / 3.} from myInput;

This bit of code can be run through iqc, the imquery command-line frontend, like so (assuming the script is called greyscale.imq):

iqc -f "greyscale.imq" -i "myInput=image_load('flowers.png')" -o "myOutput=result.png"

Assuming flowers.png is this image:

Then after a bit, iqc will give us this image in result.png:

This is a very simple greyscale filter. Let’s go through this line-by-line:

in myInput = image();

imquery scripts support the concept of inputs and outputs. These are (mostly) normal variables that can be used by whatever program is calling the script. In the case of iqc, we can specify inputs via the -i flag and specify destinations for images (outputs) with the -o flag.

Inputs must be set to something on declaration, and their value cannot be changed by the script itself once set. Additionally, the type of the input matters – it’ll be used to check whether or not we’ve set a valid input from outside the script. If we don’t specify the value of an input outside of the script, the input will simply keep the default value it was given at declaration.

Here, we’re telling imquery that myInput is an image.

out myOutput = image(myInput.w, myInput.h);

Outputs work similarly to inputs, except that they can be set as many times as you want (though you always have to include the out keyword) and don’t need to be set when they are declared (they default to nil). iqc takes any outputs we specify and saves them to image files (other types of outputs are not currently supported).

For myOutput, we are storing a new image of the same dimensions as myInput.

myOutput: {(color.r + color.g + color.b) / 3.} from myInput;

This is called a selection. It has a few different forms, but the most basic has an expression, a source, and a destination. For each element in the source, an expression is applied and then sent to the destination – in the case of images, this runs an expression over each pixel from the source and writes the result to the destination. This is a simple way of making filters.

You might recognize selections as a limited form of a for-each loop (which imquery has too!) – this is true. That said, selections are set apart from for-each loops by my future plans for them – eventually, I’d like to let selection calculations run in parallel, and one day selections may even be able to run on the GPU (via OpenCL or similar). Additionally, they have a fairly simple syntax and should be easier to optimize than a for-each loop, as certain assumptions can be made about how selections work. I may write a followup post about this in the future.
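To make the comparison concrete, here’s a rough C++ sketch (my own illustration, not imquery’s actual implementation) of what the greyscale selection above boils down to. Each destination element depends only on the matching source element, which is exactly the property that makes parallel or GPU execution feasible:

```cpp
#include <cstddef>
#include <vector>

// Hypothetical pixel type standing in for imquery's color values.
struct Pixel { float r, g, b; };

// The greyscale selection desugars to roughly this: apply the expression
// to each source element and write the result to the matching destination
// element. No iteration reads another's output, so the loop body could be
// handed to a thread pool or an OpenCL kernel as-is.
std::vector<float> runSelection(const std::vector<Pixel>& source)
{
    std::vector<float> dest(source.size());
    for (std::size_t i = 0; i < source.size(); ++i)
        dest[i] = (source[i].r + source[i].g + source[i].b) / 3.f;
    return dest;
}
```

A for-each loop can do arbitrary work per iteration, which is why a selection (which cannot) is easier to optimize.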

After the code has finished running, iqc will look through the list of outputs passed on the command line and write those to the specified files.


Alright, that’s it for now. I’ll leave a quick bit of neat code you can run on your own, along with another link to the project (if you can’t build it yourself, try downloading a prebuilt release).

in input = image();
out output = image(input.w, input.h);
output: ((color.each((v) => sin(v * 2 * pi)) + 1) / 2 + {0.,1.}).clamp() from input;

A small header-only C++ ECS library

“I haven’t made a post in a while. I should work on that,” I thought to myself. “Oh, I know! I’ll talk about the little library I made a few months ago!”

Well, I made a small entity-component-system library based on the wonderful “Evolve Your Hierarchy” article. It’s a single-header library under the MIT license and is built with modern C++ (minimum support is C++11, ideal is C++14). It follows the style described in that article: components are data containers, while systems act on entities with specific components. Any data structure can be used as a component (though ideally components should be plain old structs), and you have the option of either using RTTI or a custom type system. There’s even a simple event system for inter-system communication.

The goals of the ECS are to be small, easy to use, and expressive. For example, let’s say we have this component:

Note: The code in this post assumes RTTI is enabled. If it isn’t, some additional macros must be used for each component type. See here for more information.

struct Position
{
    Position(float x, float y) : x(x), y(y) {}
    Position() : x(0.f), y(0.f) {} // components must have a default constructor

    float x;
    float y;
};

Next, we need a system to act upon this component:

class GravitySystem : public EntitySystem
{
public:
    GravitySystem(float amount = -9.8f)
        : gravityAmount(amount)
    {
    }

    virtual ~GravitySystem() {}

    virtual void tick(World* world, float deltaTime) override
    {
        // here's the meat - for each entity with the Position component, we can run a function (or lambda)
        // in C++14, we can even use 'auto' instead of writing out each type name for parameters!
        world->each<Position>([&](Entity* ent, ComponentHandle<Position> position) {
            position->y += gravityAmount * deltaTime;
        });
    }

    float gravityAmount;
};

Now we just create the world, set up the system, and make an entity!

World* world = World::createWorld();
world->registerSystem(new GravitySystem());

// Let's make an entity!
Entity* ent = world->create();

// Give it a component - note that each entity can only have one of each type of component
ent->assign<Position>(0.f, 0.f); // you can pass arguments to the constructor of the component!

Now we can tick the world, for example in the main loop of a game.
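In the library itself this is a single call each frame (along the lines of world->tick(deltaTime)). As a toy illustration of what that call does, here is a self-contained sketch, not the library’s code, of a world forwarding the frame’s delta time to every registered system:

```cpp
#include <functional>
#include <utility>
#include <vector>

// Toy stand-in for the real World: systems are just callables here.
struct ToyWorld
{
    std::vector<std::function<void(float)>> systems;

    void registerSystem(std::function<void(float)> system)
    {
        systems.push_back(std::move(system));
    }

    // tick() runs every registered system once, passing the frame time along.
    void tick(float deltaTime)
    {
        for (auto& system : systems)
            system(deltaTime);
    }
};
```

In the real library the world also hands each system access to its entities, but the control flow is the same: call tick once per frame from your main loop.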


And finally, once we’re done with the world, we clean it up – World::createWorld has a matching destroy call.


Here’s a short list of other features the library supports:

  • Range-based for loops for iterating over entities (including only iterating over entities with specific components)
  • Full event system
  • Custom allocators
  • More lambdas!

The library isn’t particularly built for speed, but I’ll be doing an optimization pass over the code for both performance and memory handling soon. You can find the library here.

Wake Engine Update #1

I’ve been putting in some more work recently on Wake engine. Some of the new notable features are support for loading models, a custom model format optimized for the engine (plus tools to work with the format), keyboard/mouse input, and a whole bunch of other stuff. The engine has come a long way since last time, though it still has a long way to go.


The first thing I’d like to talk about is the new model format, WMDL. While Wake supports loading most common model formats (via assimp), this generally involves copying a lot of data around. My solution is to create a format that mimics Wake’s Model class, so that the loader can read the model directly into an object the engine can use without doing any additional transformations or copies.

The format is very simple due to it mimicking the in-memory layout of the Model class. It also supports compression (via snappy) which in many cases actually speeds up model loading as performance is mostly contingent on disk IO, so smaller files = less time spent waiting for the disk. Each WMDL model can contain a list of materials, defined by a base material and list of parameters (see below), and a list of meshes, defined by a list of vertices, normals, and indices.


There is now a basic material system present in the engine. Materials are objects of the Material class. A material contains a single shader, which instructs the graphics card how to render it, along with a list of default parameters and textures. Materials can be copied in order to modify parameters without affecting the base material. Additionally, materials generally have a type name which is used in order to reference the material from WMDL files.

A material can be created on the fly in code, but generally you want to define a Lua file which sets up a material and returns it. Here’s the source for the materials.demo_lighting material as an example.

-- First we need to create the shader
local shader =[[
#version 330 core
layout (location = 0) in vec3 position;
layout (location = 1) in vec3 normal;
layout (location = 2) in vec2 texCoords;

uniform mat4 projection;
uniform mat4 view;
uniform mat4 transform;

out vec3 outNormal;
out vec2 outTexCoords;

void main()
{
    gl_Position = projection * view * transform * vec4(position, 1.0);
    outNormal = normal;
    outTexCoords = texCoords;
}
]], [[
#version 330 core
in vec3 outNormal;
in vec2 outTexCoords;

out vec4 outColor;

uniform sampler2D tex1;
uniform vec3 lightColor;
uniform vec3 lightDirection;
uniform float lightAmbience;
uniform float minBrightness;

void main()
{
    vec4 texColor = texture(tex1, outTexCoords);

    float diffuseIntensity = max(minBrightness, dot(normalize(outNormal), -normalize(lightDirection)));
    outColor = vec4(lightColor, 1.0) * vec4(lightColor * (lightAmbience * diffuseIntensity) * texColor.rgb, 1.0);
}
]])

-- Next, we create and set default parameters on the material
local material =
material:setVec3('lightColor', {1, 1, 1})
material:setVec3('lightDirection', {1, -1, 0.6})
material:setFloat('lightAmbience', 0.8)
material:setFloat('minBrightness', 0.15)

-- Finally, return the material from the file
return material

By making each material its own file, the material can be retrieved by simply require-ing the file. A utility function, assets.loadMaterials, is provided which will actually do just that. You pass in a model that was loaded from disk and it will automatically load the materials referenced by it and copy them into the model.

A Simple Demo

There’s actually a lot of other new stuff in Wake (a simple Camera class implemented in lua as an example, global materials, a lua-based event system to complement native events, etc…), so here’s a simple demo that shows off some of the new features.

-- Import some files
local Camera = require('camera')
local config = require('config.cfg') -- this is used for setting mouse sensitivity later

-- Load the Sponza Atrium model, in wmdl format. The WMDL was created by running the wmdl tool on sponza.obj: "wake -x wmdl assets/models/sponza.obj assets/models/sponza.wmdl"
-- The WMDL was further modified to use the demo_lighting material with the modify-model tool.
local model = assets.loadModel('assets/models/sponza.wmdl')
if model == nil then
  print('Unable to load model.')
  return
end

-- Load materials into the model. Models that have been loaded with assets.loadModel will have placeholder materials that don't do anything,
-- so we need to tell the engine to load the actual materials.
assets.loadMaterials(model)

-- Set the background color of the window to white
engine.setClearColor(1, 1, 1, 1)

-- here are the variables for our camera
local cam =, 0, 0)) -- we pass the initial position for the camera
local speed = 1 -- the normal movement speed for the camera
local fastSpeed = 2 -- the fast movement speed for the camera

-- every "tick" of the main loop, this event is called
engine.event.tick:bind(function(dt)
  local moveSpeed = speed
  -- check if we want to move fast
  if input.getKey(input.key.LeftShift) == input.action.Press then
    moveSpeed = fastSpeed
  end

  -- Movement for WASD plus down/up with Q/E
  if input.getKey(input.key.W) == input.action.Press then
    cam:moveForward(moveSpeed * dt)
  end

  if input.getKey(input.key.S) == input.action.Press then
    cam:moveForward(-moveSpeed * dt)
  end

  if input.getKey(input.key.A) == input.action.Press then
    cam:moveRight(-moveSpeed * dt)
  end

  if input.getKey(input.key.D) == input.action.Press then
    cam:moveRight(moveSpeed * dt)
  end

  if input.getKey(input.key.Q) == input.action.Press then
    cam:moveUp(-moveSpeed * dt)
  end

  if input.getKey(input.key.E) == input.action.Press then
    cam:moveUp(moveSpeed * dt)
  end

  -- The current way to pass parameters to a model to use for drawing is to create an empty material and
  -- assign parameters. These parameters will be passed to the materials that the model uses. This will likely
  -- be changed out for a separate parameter system at some point.

  -- Here we use this to pass a scaling operation as the model's transform, as the model is way too big.
  local params =
  params:setMatrix4("transform", math.scale{0.002, 0.002, 0.002}) -- note the curly brackets {} instead of parentheses () for math.scale, since we are passing a single Vector3 as opposed to 3 arguments.
  -- the sample camera class stores the view and projection matrix in a material by "using" the camera
  cam:use(params) -- (method name assumed)

  -- finally, draw the model
  model:draw(params) -- (method name assumed)
end)

-- We use the key event in order to stop the engine if the user presses escape
input.event.key:bind(function(key, action)
  if key == input.key.Escape and action == input.action.Release then
    engine.stop() -- (method name assumed)
  end
end)

-- This is the code used to implement mouse-look for the camera
local lastX = 0
local lastY = 0
local firstMouse = true -- we use this variable so we don't suddenly jump the camera view on the first frame

input.setCursorMode(input.cursorMode.Disabled) -- this hides the cursor and locks it to the center of the screen

-- this event is called whenever the cursor is moved
input.event.cursorPos:bind(function(x, y)
  if firstMouse then
    firstMouse = false
    lastX = x
    lastY = y
  end

  -- a bit of math to figure out how much to rotate, based on the mouseSensitivity variable in the engine config
  local xOffset = (x - lastX) * config.input.mouseSensitivity
  local yOffset = (y - lastY) * config.input.mouseSensitivity

  -- add the rotation to the camera's rotation
  cam:addRotation(, xOffset, yOffset))

  lastX = x
  lastY = y
end)

A game engine with Lua & C++

For a while now I’ve been working on writing a new game engine in C++. It has been a long process, and I’ve restarted it a number of times with each successive time getting closer and closer to a usable engine. Getting caught in a loop of writing and rewriting a project sucks, so I’m forcing myself to stick with the current iteration.

This is the Wake engine. Right now it is at the stage that I can create shaders and draw meshes. The engine itself is largely in C++, with scripting in Lua. Getting Lua scripts to work how I wanted was pretty time consuming: I had to bind a large amount of classes in order to allow the kind of math you need in games to be performed from within the scripts. I managed to bind all the vector and matrix classes from GLM along with the quaternion class.

I’m not going to go too much into the design of the engine just yet, as it is still in flux, but here’s the bit of code that it took to render the above video:

-- First we need to create the shader to display the cube.
-- Wake doesn't currently have any built in shaders, so we need to define our own here.
-- The first argument to is the vertex shader. The second is the fragment shader.
local shader =[[
#version 330 core
layout (location = 0) in vec3 position;

uniform mat4 view;
uniform mat4 projection;
uniform mat4 model;

void main()
{
  gl_Position = projection * view * model * vec4(position, 1.0);
}
]], [[
#version 330 core
uniform float time;

out vec4 outColor;

void main()
{
  outColor = vec4((sin(time) + 1) / 2, (cos(time) + 1) / 2, ((sin(time) + 1) / 2 + (cos(time) + 1) / 2) / 2, 1.0);
}
]])

local shaderTime = shader:getUniform("time")
local shaderView = shader:getUniform("view")
local shaderProj = shader:getUniform("projection")
local shaderModel = shader:getUniform("model")

-- There are no mesh generation functions at the moment, so we have to
-- define the cube manually for now. The first argument is the list of vertices,
-- the second is the list of indices.
local mesh ={
  {-1, -1, 1}, {1, -1, 1}, {1, 1, 1}, {-1, 1, 1},
  {-1, -1, -1}, {1, -1, -1}, {1, 1, -1}, {-1, 1, -1}
}, {
  2, 1, 0,
  0, 3, 2,

  6, 5, 1,
  1, 2, 6,

  5, 6, 7,
  7, 4, 5,

  3, 0, 4,
  4, 7, 3,

  1, 5, 4,
  4, 0, 1,

  6, 2, 3,
  3, 7, 6
})

-- White background (r, g, b, a)
engine.setClearColor(1, 1, 1, 1)

local view = math.lookAt({4, 3, 3}, {0, 0, 0}, {0, 1, 0})
local projection = math.perspective(math.radians(45), 800 / 600, 0.1, 1000)

-- Events in wake are bound using the 'bind' function
engine.event.tick:bind(function(dt)
  -- Use the shader
  shader:use() -- (method name assumed)

  local rot = engine.getTime() / 2
  local mat = math.rotate(rot, {1, 0, 0})
  mat = math.translate(mat, {math.sin(engine.getTime() / 2) * 2, 0, 0})

  -- pass the uniforms to the shader (method name assumed)
  shaderTime:set(engine.getTime())
  shaderView:set(view)
  shaderProj:set(projection)
  shaderModel:set(mat)

  -- Draw the mesh
  mesh:draw() -- (method name assumed)
end)

The Wake engine is released under the MIT license, and source code can be found here.

More Complex Dungeon Generation

Oops! The links to code in this post are dead, I’ll see about getting them back up when I have time. The rest of the post is fine though 🙂

Note that each image in this post is from a separate run of the generator.

The code written and released with this article is under the MIT license.


This is going to be a quick article about procedural dungeon generation in Unreal Engine. I recently came across this article on Gamasutra, which describes a really cool dungeon generation algorithm used by TinyKeep. I decided I wanted to try implementing it in Unreal Engine 4, so here are some notes on my attempt.


This is the first step of the generation process: Create a large number of randomly sized regions within a circle. Note that they are green in this image as green represents “default” regions, which don’t yet have a purpose.


Next comes separation. I don’t want any of the regions overlapping, so I used a simple separation steering behavior. The Gamasutra article uses a physics engine to do this part, but I see that as unnecessary and over the top for what could be a very simple algorithm. Using a physics engine also means you don’t get nearly as much control over the results, so I went with the original way TinyKeep used.

Here are the basics of the separation algorithm I used. The main thing to note is that I only apply separation to a region based on which other regions are overlapping it. In my actual code, there’s also a bit that keeps track of how many “ticks” separation has occurred for (how many times it has gone through the while loop), and if that number gets too large then it breaks out of the loop so that we don’t end up in an endless loop situation. There’s some inefficiency in the code, as the number of separation checks grows quadratically with the number of regions, and it could easily benefit from some sort of spatial partitioning, but it still runs fairly fast and doesn’t need to be too efficient since it is meant to run during a loading screen as opposed to while the game is being played.
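The original code link is dead, so here is a rough sketch of the idea; the axis-aligned box representation, the fixed push step, and the tick cap are stand-ins rather than the original values:

```cpp
#include <cmath>
#include <cstddef>
#include <vector>

// Regions as axis-aligned boxes: center (x, y) plus full width/height.
struct Region { float x, y, w, h; };

bool overlaps(const Region& a, const Region& b)
{
    return std::abs(a.x - b.x) * 2.f < (a.w + b.w)
        && std::abs(a.y - b.y) * 2.f < (a.h + b.h);
}

// Separation steering: each region accumulates a push away from the regions
// currently overlapping it, then moves by a fixed step. The tick cap keeps a
// pathological layout from looping forever.
void separate(std::vector<Region>& regions, int maxTicks = 1000)
{
    for (int tick = 0; tick < maxTicks; ++tick)
    {
        bool anyOverlap = false;
        for (std::size_t i = 0; i < regions.size(); ++i)
        {
            float pushX = 0.f, pushY = 0.f;
            for (std::size_t j = 0; j < regions.size(); ++j)
            {
                if (i == j || !overlaps(regions[i], regions[j]))
                    continue;
                anyOverlap = true;
                float dx = regions[i].x - regions[j].x;
                float dy = regions[i].y - regions[j].y;
                float len = std::sqrt(dx * dx + dy * dy);
                if (len < 0.0001f) // coincident centers: pick a direction
                {
                    dx = (i < j) ? -1.f : 1.f;
                    dy = 0.f;
                    len = 1.f;
                }
                pushX += dx / len;
                pushY += dy / len;
            }
            regions[i].x += pushX * 0.5f; // fixed step size (an assumption)
            regions[i].y += pushY * 0.5f;
        }
        if (!anyOverlap)
            return;
    }
}
```

Note how the inner pair loop is what makes the cost quadratic in the number of regions.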


Here’s where I start assigning purposes to different regions. I find the average size of the regions, then find the set of all regions above that size * a multiplier. I mark all regions in that set as “main” regions, which are highlighted in red. These will be used to figure out how the hallways of the dungeon should be generated.
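In code, that step is just a mean-and-threshold pass. Here is a sketch with the multiplier left as a tuning parameter:

```cpp
#include <cstddef>
#include <vector>

// Mark regions as "main" when their area exceeds the mean area times a
// tuning multiplier. Returns one flag per region.
std::vector<bool> pickMainRegions(const std::vector<float>& areas, float multiplier)
{
    std::vector<bool> isMain(areas.size(), false);
    if (areas.empty())
        return isMain;

    float mean = 0.f;
    for (float a : areas)
        mean += a;
    mean /= static_cast<float>(areas.size());

    for (std::size_t i = 0; i < areas.size(); ++i)
        isMain[i] = areas[i] > mean * multiplier;
    return isMain;
}
```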


It may be a bit hard to see in the above image, but there are blue lines creating a graph of all main regions. This graph is the Delaunay triangulation of the centers of each main region. This will be used in the next few steps to create hallways. To actually generate the Delaunay triangulation, I used this code, modified to use Unreal Engine’s constructs (such as FVector2D).


Now it is really hard to see, but the blue lines are still there. There are far fewer of them, because this is a minimum spanning tree of the previous graph. This guarantees that all main rooms are reachable in the graph, while also not duplicating any paths. I used Prim’s algorithm since it was simple to implement. Here is my implementation; again, it would probably benefit from some better data structures, but it is fast enough for my purposes.
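For reference, here is a compact O(V²) version of Prim’s over 2D points, a stand-in sketch rather than the Unreal implementation linked above (which works on the triangulation edges and FVector2D):

```cpp
#include <cmath>
#include <cstddef>
#include <limits>
#include <utility>
#include <vector>

struct Point { float x, y; };

static float dist(const Point& a, const Point& b)
{
    return std::sqrt((a.x - b.x) * (a.x - b.x) + (a.y - b.y) * (a.y - b.y));
}

// Prim's algorithm over a complete graph of room centers.
// Returns the MST edges as (parent, child) index pairs.
std::vector<std::pair<int, int>> primMST(const std::vector<Point>& pts)
{
    std::vector<std::pair<int, int>> edges;
    if (pts.empty()) return edges;

    std::vector<bool> inTree(pts.size(), false);
    std::vector<float> best(pts.size(), std::numeric_limits<float>::max());
    std::vector<int> parent(pts.size(), -1);

    best[0] = 0.f;
    for (std::size_t n = 0; n < pts.size(); ++n)
    {
        // take the cheapest vertex not yet in the tree
        int u = -1;
        for (std::size_t v = 0; v < pts.size(); ++v)
            if (!inTree[v] && (u == -1 || best[v] < best[u]))
                u = static_cast<int>(v);

        inTree[u] = true;
        if (parent[u] != -1)
            edges.emplace_back(parent[u], u);

        // relax the edges out of u
        for (std::size_t v = 0; v < pts.size(); ++v)
            if (!inTree[v] && dist(pts[u], pts[v]) < best[v])
            {
                best[v] = dist(pts[u], pts[v]);
                parent[v] = u;
            }
    }
    return edges;
}
```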


The problem with the MST is that it makes a fairly boring dungeon where every path leads to a dead end. It works as a starting point, but there should be some cycles in the main region graph. As such, I randomly choose some edges from the original triangulation graph that aren’t a part of the MST and add them to the final graph.


Here you will notice that all of the green regions are now black. This is because I’ve set their type to “none”. Regions of type “none” will be deleted completely from the dungeon when the algorithm completes, so I set everything that isn’t a main region to “none”. You’ll also notice that the blue lines now create corners and always follow the X or Y axis. This is because they will be used to create the final hallways that connect the main rooms. The algorithm to figure out how to lay out the lines for hallways is pretty simple and is explained in the Gamasutra article, but here’s my implementation.
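The gist is tiny: keep a straight segment when the two centers already share an axis, and otherwise route through one corner. A simplified sketch follows; the corner choice here is arbitrary, while the full algorithm also considers which leg keeps the midpoint inside both rooms:

```cpp
#include <vector>

struct Point2 { float x, y; };

// Connect two room centers with axis-aligned segments: one segment if they
// already line up, otherwise an L-shape through the corner (a.x, b.y).
std::vector<Point2> hallwayLine(Point2 a, Point2 b)
{
    if (a.x == b.x || a.y == b.y)
        return { a, b };
    return { a, Point2{ a.x, b.y }, b };
}
```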


Now I find the intersection between each of the blue lines and the unused regions, then add them back in to the dungeon. They are highlighted in pink here because they are marked as hallways. You will notice that doing this still doesn’t connect all regions together, which is why the next step happens…


Look at all those tiny pink squares! I do a check around each line to find any space that isn’t taken up by any region, then fill that in with a small hallway region. Ideally this would have another step which optimized regions of the same type that are adjacent into a single region, but that’s a topic for another day.


The final two steps are shown here: the first is to remove any unused regions, and the second is to “tag” regions as needed. Regions can have tags, which are used to define what a region is used for. Right now, there are only three tags defined: None, Start, and Exit. None is applied to everything by default, while Start and Exit are applied to main regions. Right now the generator chooses a random main region for the Start tag, then finds the furthest main region from that one and gives it the Exit tag.

That’s it for the dungeon generation algorithm. All that is left is to actually spawn geometry, which is something I am still working on.

I have plans to release the generation code as open source in the near future, but not until I’ve finished tuning it a bit. Thanks for reading!

A new version of hvzsite

This week a new version of hvzsite went live. I’ve been running RIT’s Humans vs Zombies website since the Fall of 2014, and this is the second major upgrade to the website that I’ve made.

A humans vs zombies website isn’t just there to market the game; it actually plays a part in the game itself. Players have to register when they are tagged (or tag someone) by entering their own unique id along with that of whoever tagged them into the “infect” page on the website. This allows us to keep track of who is still alive and who is a zombie. The website also handles distributing mission information to each team, manages antiviruses, and tracks the location of each infection.

The newest incarnation of the website is written in NodeJS using Sails for the server, which provides a simple REST API, and Ember for the client. The entire website is open source under the MIT license (check it out at GitHub).

The new version of the website is actually a bit more generic than the last one: Any references to RIT’s specific game have been put in configuration files so that they can easily be swapped out for other names. I’m still working on making game rules configurable (and there’s no “starve timer” yet which I know some games use). I’m always open to suggestions (and pull requests) on how to make it better!

The other major new part of the website is that absolutely everything is done over a REST API. The built-in client uses this API to communicate for everything except authentication. The old version of the website exposed almost everything as a REST endpoint, but none of the admin panel was accessible from REST. Now, everything can be done over REST.

My main plans going forward with this incarnation of the website are to A. make it generic enough to support any game run anywhere and B. clean up the UI. On that second point, I’m really not a designer. I did my best, but I’m the last person you’d ask to make a really good-looking website. That isn’t a huge deal, though, since the current website looks decent enough and is usable.

Again, the GitHub is here, and the source is under the MIT license. For reference, the old PHP version of the website is here.

Developing Zombie Wave Survival in 1 Week

For the past week, I’ve been working on a small project. I’ve been wanting to learn how to do multiplayer in Unreal Engine, but every time I tried to learn it I’d get stuck on the lack of good examples. Well, the wonderful Tom Looman released an example project for a third person zombie survival game a little while back, so I decided to try using it as a reference to create a simple game.

Zom NineRooms Editor 1

My game is a first person wave survival game with full coop multiplayer support, along with a simple lobby system. The code is heavily based on Tom Looman’s, however it has been adapted for a first person project and I avoided using his weapon system in favor of something that would allow me a bit more control from blueprints.

Zom In-Game NineRooms 1

Maps define a list of waves, with each wave containing a list of what can spawn and how many. The game then chooses random spawns for each enemy, and throws them at the player. There’s a money system which allows you to buy ammo, health, and unlock new areas, and there’s a scoreboard which shows your stats compared to your friends.

Zom In-Game NineRooms 2

Zom In-Game NineRooms 3

I’m not completely sure what I’ll be doing with this project, or how much further I’ll be going. I’ll likely post a video of it when I’ve gotten some more done.