Wake Engine Update #1

I’ve been putting in some more work recently on the Wake engine. Some of the notable new features are support for loading models, a custom model format optimized for the engine (plus tools to work with the format), and keyboard/mouse input, among a bunch of other things. The engine has come a long way since last time, though it still has a long way to go.


The first thing I’d like to talk about is the new model format, WMDL. While Wake supports loading most common model formats (via assimp), doing so generally involves copying a lot of data around. My solution is a format that mirrors Wake’s Model class, so that the loader can read a model directly into an object the engine can use, without any additional transformations or copies.

The format is very simple because it mimics the in-memory layout of the Model class. It also supports compression (via snappy), which in many cases actually speeds up model loading: performance is mostly bound by disk IO, so smaller files mean less time spent waiting on the disk. Each WMDL model can contain a list of materials, each defined by a base material and a list of parameters (see below), and a list of meshes, each defined by a list of vertices, normals, and indices.
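To make the idea concrete, here is a rough sketch of what a "read straight into memory" mesh format looks like. This is illustrative Python, not Wake's actual C++ loader: the field layout is made up for the example, and zlib stands in for snappy (which isn't in the Python standard library). The real WMDL layout is whatever the Model class keeps in memory.

```python
import struct
import zlib

def pack_mesh(vertices, indices, compress=True):
    """Serialize a mesh as [vertex count][index count][flags] + raw arrays.

    The arrays are written exactly as they sit in memory, so a loader can
    read them straight into place with no per-element transformation."""
    payload = struct.pack(f"<{len(vertices) * 3}f", *(c for v in vertices for c in v))
    payload += struct.pack(f"<{len(indices)}I", *indices)
    if compress:
        payload = zlib.compress(payload)  # stand-in for snappy
    header = struct.pack("<IIB", len(vertices), len(indices), 1 if compress else 0)
    return header + payload

def unpack_mesh(blob):
    nverts, nidx, flags = struct.unpack_from("<IIB", blob)
    payload = blob[struct.calcsize("<IIB"):]
    if flags & 1:
        payload = zlib.decompress(payload)
    floats = struct.unpack_from(f"<{nverts * 3}f", payload)
    vertices = [tuple(floats[i:i + 3]) for i in range(0, len(floats), 3)]
    indices = list(struct.unpack_from(f"<{nidx}I", payload, nverts * 12))
    return vertices, indices

# round trip a single triangle
tri = [(0.0, 0.0, 0.0), (1.0, 0.0, 0.0), (0.0, 1.0, 0.0)]
assert unpack_mesh(pack_mesh(tri, [0, 1, 2])) == (tri, [0, 1, 2])
```

The compression trade-off is the same one described above: decompressing in memory is usually cheaper than reading the extra bytes from disk.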


There is now a basic material system in the engine. Materials are objects of the Material class. A material contains a single shader, which instructs the graphics card how to render it, along with a list of default parameters and textures. Materials can be copied in order to modify parameters without affecting the base material. Additionally, materials generally have a type name, which is used to reference the material from WMDL files.

A material can be created on the fly with Material.new(), but generally you want to define a Lua file which sets up a material and returns it. Here’s the source for the materials.demo_lighting material as an example.

-- First we need to create the shader
local shader = Shader.new([[
#version 330 core
layout (location = 0) in vec3 position;
layout (location = 1) in vec3 normal;
layout (location = 2) in vec2 texCoords;

uniform mat4 projection;
uniform mat4 view;
uniform mat4 transform;

out vec3 outNormal;
out vec2 outTexCoords;

void main()
{
    gl_Position = projection * view * transform * vec4(position, 1.0);
    outNormal = normal;
    outTexCoords = texCoords;
}
]], [[
#version 330 core
in vec3 outNormal;
in vec2 outTexCoords;

out vec4 outColor;

uniform sampler2D tex1;
uniform vec3 lightColor;
uniform vec3 lightDirection;
uniform float lightAmbience;
uniform float minBrightness;

void main()
{
    vec4 texColor = texture(tex1, outTexCoords);

    float diffuseIntensity = max(minBrightness, dot(normalize(outNormal), -normalize(lightDirection)));
    outColor = vec4(lightColor * (lightAmbience * diffuseIntensity) * texColor.rgb, 1.0);
}
]])
-- Next, we create and set default parameters on the material
local material = Material.new()
material:setVec3('lightColor', {1, 1, 1})
material:setVec3('lightDirection', {1, -1, 0.6})
material:setFloat('lightAmbience', 0.8)
material:setFloat('minBrightness', 0.15)
material:setMatrix4('projection', Matrix4x4.new())
material:setMatrix4('view', Matrix4x4.new())
material:setMatrix4('transform', Matrix4x4.new())

-- Finally, return the material from the file
return material

By making each material its own file, a material can be retrieved by simply require-ing the file. A utility function, assets.loadMaterials, does just that for models: you pass in a model that was loaded from disk, and it will automatically load the materials referenced by it and copy them into the model.

A Simple Demo

There’s actually a lot of other new stuff in Wake (a simple Camera class implemented in lua as an example, global materials, a lua-based event system to complement native events, etc…), so here’s a simple demo that shows off some of the new features.

-- Import some files
local Camera = require('camera')
local config = require('config.cfg') -- this is used for setting mouse sensitivity later

-- Load the Sponza Atrium model, in wmdl format. The WMDL was created by running the wmdl tool on sponza.obj:
-- "wake -x wmdl assets/models/sponza.obj assets/models/sponza.wmdl"
-- The WMDL was further modified to use the demo_lighting material with the modify-model tool.
local model = assets.loadModel('assets/models/sponza.wmdl')
if model == nil then
  print('Unable to load model.')
  return
end

-- Load materials into the model. Models that have been loaded with assets.loadModel will have placeholder
-- materials that don't do anything, so we need to tell the engine to load the actual materials.
assets.loadMaterials(model)

-- Set the background color of the window to white
engine.setClearColor(1, 1, 1, 1)

-- here are the variables for our camera
local cam = Camera.new(Vector3.new(-2.5, 0, 0)) -- we pass the initial position for the camera
local speed = 1 -- the normal movement speed for the camera
local fastSpeed = 2 -- the fast movement speed for the camera

-- every "tick" of the main loop, this event is called
engine.event.tick:bind(function(dt)
  local moveSpeed = speed
  -- check if we want to move fast
  if input.getKey(input.key.LeftShift) == input.action.Press then
    moveSpeed = fastSpeed
  end

  -- Movement for WASD plus down/up with Q/E
  if input.getKey(input.key.W) == input.action.Press then
    cam:moveForward(moveSpeed * dt)
  end

  if input.getKey(input.key.S) == input.action.Press then
    cam:moveForward(-moveSpeed * dt)
  end

  if input.getKey(input.key.A) == input.action.Press then
    cam:moveRight(-moveSpeed * dt)
  end

  if input.getKey(input.key.D) == input.action.Press then
    cam:moveRight(moveSpeed * dt)
  end

  if input.getKey(input.key.Q) == input.action.Press then
    cam:moveUp(-moveSpeed * dt)
  end

  if input.getKey(input.key.E) == input.action.Press then
    cam:moveUp(moveSpeed * dt)
  end

  -- The current way to pass parameters to a model to use for drawing is to create an empty material and
  -- assign parameters. These parameters will be passed to the materials that the model uses. This will likely
  -- be changed out for a separate parameter system at some point.

  -- Here we use this to pass a scaling operation as the model's transform, as the model is way too big.
  local params = Material.new()
  -- note the curly brackets {} instead of parentheses () for math.scale, since we are passing a
  -- single Vector3 as opposed to 3 separate arguments
  params:setMatrix4('transform', math.scale{0.002, 0.002, 0.002})

  -- the sample camera class stores the view and projection matrix in a material by "using" the camera
  cam:use(params)

  -- finally, draw the model
  model:draw(params)
end)
-- We use the key event in order to stop the engine if the user presses escape
input.event.key:bind(function(key, action)
  if key == input.key.Escape and action == input.action.Release then
    engine.stop()
  end
end)

-- This is the code used to implement mouse-look for the camera
local lastX = 0
local lastY = 0
local firstMouse = true -- we use this variable so we don't suddenly jump the camera view on the first frame

input.setCursorMode(input.cursorMode.Disabled) -- this hides the cursor and locks it to the center of the screen

-- this event is called whenever the cursor is moved
input.event.cursorPos:bind(function(x, y)
  if firstMouse then
    firstMouse = false
    lastX = x
    lastY = y
  end

  -- a bit of math to figure out how much to rotate, based on the mouseSensitivity variable in the engine config
  local xOffset = (x - lastX) * config.input.mouseSensitivity
  local yOffset = (y - lastY) * config.input.mouseSensitivity

  -- add the rotation to the camera's rotation
  cam:addRotation(Vector3.new(0, xOffset, yOffset))

  lastX = x
  lastY = y
end)

A game engine with Lua & C++

For a while now I’ve been working on writing a new game engine in C++. It has been a long process, and I’ve restarted it a number of times with each successive time getting closer and closer to a usable engine. Getting caught in a loop of writing and rewriting a project sucks, so I’m forcing myself to stick with the current iteration.

This is the Wake engine. Right now it is at the stage where I can create shaders and draw meshes. The engine itself is largely C++, with scripting in Lua. Getting Lua scripts to work the way I wanted was pretty time consuming: I had to bind a large number of classes in order to allow the kind of math you need in games to be performed from within the scripts. I managed to bind all of the vector and matrix classes from GLM, along with the quaternion class.

I’m not going to go too much into the design of the engine just yet, as it is still in flux, but here’s the bit of code that it took to render the above video:

-- First we need to create the shader to display the cube.
-- Wake doesn't currently have any built in shaders, so we need to define our own here.
-- The first argument to Shader.new is the vertex shader. The second is the fragment shader.
local shader = Shader.new([[
#version 330 core
layout (location = 0) in vec3 position;

uniform mat4 view;
uniform mat4 projection;
uniform mat4 model;

void main()
{
  gl_Position = projection * view * model * vec4(position, 1.0);
}
]], [[
#version 330 core
uniform float time;

out vec4 outColor;

void main()
{
  outColor = vec4((sin(time) + 1) / 2, (cos(time) + 1) / 2, ((sin(time) + 1) / 2 + (cos(time) + 1) / 2) / 2, 1.0);
}
]])

local shaderTime = shader:getUniform("time")
local shaderView = shader:getUniform("view")
local shaderProj = shader:getUniform("projection")
local shaderModel = shader:getUniform("model")

-- There's no mesh generation functions at the moment, so we have to
-- define the cube manually for now. The first argument is the list of vertices,
-- the second is the list of indices.
local mesh = Mesh.new({
    Vertex.new{-1, -1, 1},
    Vertex.new{1, -1, 1},
    Vertex.new{1, 1, 1},
    Vertex.new{-1, 1, 1},
    Vertex.new{-1, -1, -1},
    Vertex.new{1, -1, -1},
    Vertex.new{1, 1, -1},
    Vertex.new{-1, 1, -1}
  }, {
    2, 1, 0,
    0, 3, 2,

    6, 5, 1,
    1, 2, 6,

    5, 6, 7,
    7, 4, 5,

    3, 0, 4,
    4, 7, 3,

    1, 5, 4,
    4, 0, 1,

    6, 2, 3,
    3, 7, 6
  })

-- White background (r, g, b, a)
engine.setClearColor(1, 1, 1, 1)

local view = math.lookAt({4, 3, 3}, {0, 0, 0}, {0, 1, 0})
local projection = math.perspective(math.radians(45), 800 / 600, 0.1, 1000)

-- Events in wake are bound using the 'bind' function
engine.event.tick:bind(function(dt)
  -- Use the shader
  shader:use()

  local rot = engine.getTime() / 2
  local mat = math.rotate(rot, {1, 0, 0})
  mat = math.translate(mat, {math.sin(engine.getTime() / 2) * 2, 0, 0})

  -- Upload the uniform values
  shaderTime:set(engine.getTime())
  shaderView:set(view)
  shaderProj:set(projection)
  shaderModel:set(mat)

  -- Draw the mesh
  mesh:draw()
end)

The Wake engine is released under the MIT license, and source code can be found here.

More Complex Dungeon Generation

Note that each image in this post is from a separate run of the generator.

The code written and released with this article is under the MIT license.


This is going to be a quick article about procedural dungeon generation in Unreal Engine 4. I recently came across this article on Gamasutra, which describes a really cool dungeon generation algorithm used by TinyKeep. I decided to try implementing it in Unreal Engine 4, so here are some notes on my attempt.


This is the first step of the generation process: Create a large number of randomly sized regions within a circle. Note that they are green in this image as green represents “default” regions, which don’t yet have a purpose.


Next comes separation. I don’t want any of the regions overlapping, so I used a simple separation steering behavior. The Gamasutra article uses a physics engine for this part, but that seems unnecessary and over the top for what can be a very simple algorithm. Using a physics engine also means you don’t get nearly as much control over the results, so I went with the approach TinyKeep originally used.

Here are the basics of the separation algorithm I used. The main thing to note is that I only apply separation to a region based on which other regions overlap it. In my actual code, there’s also a counter that tracks how many “ticks” separation has run for (how many times it has gone through the while loop); if that number gets too large, it breaks out of the loop so we don’t end up in an endless-loop situation. The code is somewhat inefficient, as the number of overlap checks grows quadratically with the number of regions, and it could easily benefit from some sort of spatial partitioning. Still, it runs fairly fast, and it doesn’t need to be terribly efficient since it is meant to run during a loading screen rather than during gameplay.
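The actual separation code is UE4 C++; here is a small Python sketch of the same idea, with regions as axis-aligned boxes (center plus half-extents) and a tick cap to avoid the endless-loop situation. The names and constants are illustrative, not from the real generator.

```python
import math

def overlaps(a, b):
    # regions are [cx, cy, half_width, half_height]; axis-aligned overlap test
    return abs(a[0] - b[0]) < a[2] + b[2] and abs(a[1] - b[1]) < a[3] + b[3]

def separate(regions, max_ticks=1000):
    """Nudge overlapping regions apart each tick; give up after max_ticks so
    a pathological layout can't spin forever. Returns True once separated."""
    for _ in range(max_ticks):
        moves = [[0.0, 0.0] for _ in regions]
        any_overlap = False
        for i, a in enumerate(regions):
            for j, b in enumerate(regions):
                if i == j or not overlaps(a, b):
                    continue  # only overlapping neighbors contribute
                any_overlap = True
                dx, dy = a[0] - b[0], a[1] - b[1]
                length = math.hypot(dx, dy)
                if length == 0:
                    # coincident centers: pick opposite directions deterministically
                    dx, length = (1.0 if i < j else -1.0), 1.0
                moves[i][0] += dx / length
                moves[i][1] += dy / length
        if not any_overlap:
            return True
        for region, move in zip(regions, moves):
            region[0] += move[0]
            region[1] += move[1]
    return False
```

The n-squared pair loop is the inefficiency mentioned above; a spatial grid would cut it down, but for a loading screen it hardly matters.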


Here’s where I start assigning purposes to different regions. I find the average size of the regions, then find the set of all regions larger than that average times a multiplier. I mark every region in that set as a “main” region, highlighted in red. These will be used to figure out how the hallways of the dungeon should be generated.
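The main-region test itself fits in a few lines; in this Python sketch the multiplier value is a made-up knob, not the one my generator actually uses.

```python
def pick_main_regions(sizes, multiplier=1.25):
    """Return the indices of regions whose size exceeds the average
    size times a multiplier; those become the 'main' rooms."""
    average = sum(sizes) / len(sizes)
    return [i for i, size in enumerate(sizes) if size > average * multiplier]

# six region areas; only the two big ones pass the threshold
print(pick_main_regions([4, 5, 6, 30, 28, 5]))  # → [3, 4]
```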


It may be a bit hard to see in the above image, but there are blue lines creating a graph of all main regions. This graph is the Delaunay triangulation of the centers of each main region. This will be used in the next few steps to create hallways. To actually generate the Delaunay triangulation, I used this code, modified to use Unreal Engine’s constructs (such as FVector2D).


Now it is really hard to see, but the blue lines are still there. There are far fewer of them, however, because this is a minimum spanning tree of the previous graph. This guarantees that all main rooms are reachable in the graph without duplicating any paths. I used Prim’s algorithm since it was simple to implement. Here is my implementation; again, it would probably benefit from better data structures, but it is fast enough for my purposes.
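For reference, here is Prim's algorithm in a few lines of Python. For simplicity this sketch runs over the complete graph of room centers rather than the Delaunay edges the generator actually uses, and a heap stands in for the "better data structures" mentioned above.

```python
import heapq
import math

def prim_mst(points):
    """Prim's algorithm over a complete graph of 2D points.
    Returns a list of (i, j) index pairs forming a minimum spanning tree."""
    dist = lambda a, b: math.hypot(a[0] - b[0], a[1] - b[1])
    n = len(points)
    in_tree = {0}
    edges = []
    heap = [(dist(points[0], points[j]), 0, j) for j in range(1, n)]
    heapq.heapify(heap)
    while len(in_tree) < n:
        _, i, j = heapq.heappop(heap)
        if j in in_tree:
            continue  # stale entry: j was reached via a shorter edge already
        in_tree.add(j)
        edges.append((i, j))
        for k in range(n):
            if k not in in_tree:
                heapq.heappush(heap, (dist(points[j], points[k]), j, k))
    return edges

centers = [(0, 0), (2, 0), (2, 2), (10, 0)]
print(prim_mst(centers))  # → [(0, 1), (1, 2), (1, 3)]
```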


The problem with the MST is that it makes a fairly boring dungeon where every path leads to a dead end. It works as a starting point, but there should be some cycles in the main region graph. As such, I randomly choose some edges from the original triangulation graph that aren’t a part of the MST and add them to the final graph.
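Re-adding cycle edges is a one-liner once you have both edge sets. In this sketch the fraction is an arbitrary knob for illustration, not the value my generator uses.

```python
import random

def add_cycle_edges(triangulation_edges, mst_edges, fraction=0.15, rng=None):
    """Randomly re-add a fraction of the triangulation edges that the MST
    dropped, so the final dungeon graph contains some loops."""
    rng = rng or random.Random()
    mst = {frozenset(e) for e in mst_edges}
    dropped = [e for e in triangulation_edges if frozenset(e) not in mst]
    chosen = [e for e in dropped if rng.random() < fraction]
    return list(mst_edges) + chosen
```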


Here you will notice that all of the green regions are now black. This is because I’ve set their type to “none”. Regions of type “none” will be deleted from the dungeon entirely when the algorithm completes, so I set everything that isn’t a main region to “none”. You’ll also notice that the blue lines now form corners and always follow the X or Y axis: they will be used to create the final hallways that connect the main rooms. The algorithm for laying out the hallway lines is pretty simple and is explained in the Gamasutra article, but here’s my implementation.
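The gist of the layout step is turning each graph edge into axis-aligned segments. This Python sketch is a simplified take: nearly aligned rooms get a straight segment, everything else gets an L shape pivoting at one corner. The threshold is an assumed parameter, and the article's version is a bit more careful about where the pivot lands.

```python
def hallway_lines(a, b, align_threshold=1.0):
    """Turn an edge between two room centers into axis-aligned hallway
    segments: one straight segment if the centers are (roughly) aligned
    on an axis, otherwise an L shape pivoting at (bx, ay)."""
    (ax, ay), (bx, by) = a, b
    if abs(ax - bx) < align_threshold:   # roughly vertically aligned
        return [((ax, ay), (ax, by))]
    if abs(ay - by) < align_threshold:   # roughly horizontally aligned
        return [((ax, ay), (bx, ay))]
    return [((ax, ay), (bx, ay)), ((bx, ay), (bx, by))]

print(hallway_lines((0, 0), (4, 3)))  # → [((0, 0), (4, 0)), ((4, 0), (4, 3))]
```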


Now I find the intersection between each of the blue lines and the unused regions, then add them back in to the dungeon. They are highlighted in pink here because they are marked as hallways. You will notice that doing this still doesn’t connect all regions together, which is why the next step happens…


Look at all those tiny pink squares! I do a check around each line to find any space that isn’t taken up by a region, then fill it in with a small hallway region. Ideally there would be another step that merges adjacent regions of the same type into a single region, but that’s a topic for another day.
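The fill step can be sketched as walking each hallway line in small steps and dropping a tiny hallway region wherever no existing region covers the point. This Python version reuses the center-plus-half-extents region shape from the separation sketch; the cell size is an assumed parameter.

```python
def fill_hallway_cells(line, regions, cell=1.0):
    """Walk along an axis-aligned hallway line; for any sample point not
    covered by an existing (cx, cy, hw, hh) region, emit a small square
    hallway region centered on that point."""
    (x0, y0), (x1, y1) = line
    new_regions = []
    steps = int(max(abs(x1 - x0), abs(y1 - y0)) / cell)
    for s in range(steps + 1):
        t = s / max(steps, 1)
        px, py = x0 + (x1 - x0) * t, y0 + (y1 - y0) * t
        covered = any(abs(px - cx) <= hw and abs(py - cy) <= hh
                      for cx, cy, hw, hh in regions)
        if not covered:
            new_regions.append((px, py, cell / 2, cell / 2))
    return new_regions
```

Merging the resulting run of tiny squares into one region would be the optimization pass mentioned above.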


The final two steps are shown here: the first is to remove any unused regions, and the second is to “tag” regions as needed. Regions can have tags, which are used to define what a region is for. Right now, there are only three tags defined: None, Start, and Exit. None is applied to everything by default, while Start and Exit are applied to main regions. The generator chooses a random main region for the Start tag, then finds the main region furthest from it and gives it the Exit tag.
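The Start/Exit choice boils down to one random pick and one furthest-point search, sketched here in Python over the main regions' centers.

```python
import math
import random

def tag_start_and_exit(main_centers, rng=None):
    """Pick a random main region as Start, then the main region
    furthest from it as Exit. Returns (start_index, exit_index)."""
    rng = rng or random.Random()
    start = rng.randrange(len(main_centers))
    sx, sy = main_centers[start]
    exit_ = max(range(len(main_centers)),
                key=lambda i: math.hypot(main_centers[i][0] - sx,
                                         main_centers[i][1] - sy))
    return start, exit_
```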

That’s it for the dungeon generation algorithm. All that is left is to actually spawn geometry, which is something I am still working on.

I have plans to release the generation code as open source in the near future, but not until I’ve finished tuning it a bit. Thanks for reading!

A new version of hvzsite

This week a new version of https://hvz.rit.edu/ went live. I’ve been running RIT’s Humans vs Zombies website since the Fall of 2014, and this is the second major upgrade to the website that I’ve made.

A Humans vs Zombies website isn’t just there to market the game; it actually plays a part in the game itself. Players register when they are tagged (or tag someone) by typing their unique id, along with the id of whoever tagged them, into the “infect” page on the website. This lets us keep track of who is still alive and who is a zombie. The website also handles dispersing mission information to each team, manages antiviruses, and tracks the location of each infection.
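The core of the infect flow is just a state transition keyed on those two ids. This is a minimal Python sketch of the rule, not hvzsite's actual Sails code; the team names and error handling are made up for illustration.

```python
def register_tag(players, tagger_id, victim_id):
    """Record an infection: given a valid (tagger, victim) id pair,
    flip the victim from 'human' to 'zombie'.

    players maps player id -> team name."""
    if players.get(tagger_id) != 'zombie':
        raise ValueError('tagger id does not belong to a zombie')
    if players.get(victim_id) != 'human':
        raise ValueError('victim id does not belong to a living human')
    players[victim_id] = 'zombie'
```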

The newest incarnation of the website is written in NodeJS using Sails for the server, which provides a simple REST API, and Ember for the client. The entire website is open source under the MIT license (check it out at GitHub).

The new version of the website is actually a bit more generic than the last one: Any references to RIT’s specific game have been put in configuration files so that they can easily be swapped out for other names. I’m still working on making game rules configurable (and there’s no “starve timer” yet which I know some games use). I’m always open to suggestions (and pull requests) on how to make it better!

The other major new part of the website is that absolutely everything is done over a REST API. The built in client uses this API to communicate for everything except authentication. The old version of the website exposed almost everything as a REST endpoint, but none of the admin panel was accessible from REST. Now, everything can be done over REST.

My main plans going forward with this incarnation of the website are to (a) make it generic enough to support any game run anywhere and (b) clean up the UI. On that second point, I’m really not a designer. I did my best, but I’m the last person you’d ask to make a really good looking website. That isn’t a huge deal, though, since the current site looks decent enough and is usable.

Again, the GitHub is here, and the source is under the MIT license. For reference, the old PHP version of the website is here.

Developing Zombie Wave Survival in 1 Week

For the past week, I’ve been working on a small project. I’ve been wanting to learn how to do multiplayer in Unreal Engine, but every time I tried I’d get stuck on the lack of good examples. Well, the wonderful Tom Looman released an example project for a third person zombie survival game a little while back, so I decided to use it as a reference to create a simple game.

Zom NineRooms Editor 1

My game is a first person wave survival game with full coop multiplayer support, along with a simple lobby system. The code is heavily based on Tom Looman’s, however it has been adapted for a first person project and I avoided using his weapon system in favor of something that would allow me a bit more control from blueprints.

Zom In-Game NineRooms 1

Maps define a list of waves, with each wave containing a list of what can spawn and how many. The game then chooses random spawns for each enemy, and throws them at the player. There’s a money system which allows you to buy ammo, health, and unlock new areas, and there’s a scoreboard which shows your stats compared to your friends.
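The wave setup can be sketched as a config expansion: each wave maps enemy types to counts, and each spawned enemy gets a random spawn point. This Python sketch only illustrates the data flow; the actual game does this in Unreal blueprints/C++, and the names here are invented.

```python
import random

def build_wave(wave_spec, spawn_points, rng=None):
    """Expand a wave definition into concrete (enemy_type, spawn_point)
    pairs. wave_spec maps enemy type -> how many to spawn this wave."""
    rng = rng or random.Random()
    spawns = []
    for enemy, count in wave_spec.items():
        for _ in range(count):
            spawns.append((enemy, rng.choice(spawn_points)))
    return spawns

wave = {'walker': 5, 'runner': 2}
print(len(build_wave(wave, ['north', 'south', 'east'])))  # → 7
```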

Zom In-Game NineRooms 2

Zom In-Game NineRooms 3

I’m not completely sure what I’ll be doing with this project, or how much further I’ll be going. I’ll likely post a video of it when I’ve gotten some more done.

HFOSS: Sugarizer

Originally this was supposed to be on doing the smoke test for the XO laptop, but due to a lack of working XOs, I had to try using Sugarizer to play around in the interface instead.

Sugarizer is a web-based version of the Sugar interface, which normally runs on the XO laptops. It’s… weird. Now, I haven’t had much experience with the real Sugar interface, but Sugarizer has some problems. It’s not really Sugar; it’s a set of small web applications that look like they are part of the Sugar interface. Unfortunately, it also feels slow: not because loading takes a long time, but because there are little to no animations, so everything seems to take a while. Additionally, the usefulness of most of the apps is questionable at best. There is no way Sugarizer has the same amount of content that normal Sugar does, and it doesn’t seem to support running the same apps (you’d have to rewrite them in JavaScript).

What does, however, make Sugarizer interesting is that you can run a server for it, which contains all of the applications, and then use a multitude of clients. There are clients for web (which is the one I’ve been playing with), Android, iOS, and Chrome Web Store (which is, effectively, web). I think this is an interesting option which would allow commodity hardware running other operating systems to gain the benefits of Sugar without needing XOs themselves (in case you need to run applications outside of the normal XO environment). My problem, however, still stands in that there doesn’t seem to be the application support that it needs to be as useful as Sugar itself (at that point, why not try using Sugar on a stick).

HFOSS: What Is Open Source and How Does It Work?

For HFOSS, I had to read a chapter (chapter 3) of Steve Weber’s The Success of Open Source. It gives a nice condensed history of the creation of Linux as an illustration of how open source software projects grow. The chapter talks about how OSS projects are structured, the upsides and downsides of letting a community govern a large project, and who participates in the projects (hint: it’s not just the developers; it’s also everyone who uses the software). Some studies are presented that try to dissect what kind of people contributed to the Linux project (by country, by what email they use, etc.). The chapter goes on to explain the goals of OSS and how the projects are run: don’t reinvent the wheel (share what you made so others can benefit), create opportunities for developers to participate in projects, and develop solutions to problems by distributing the work.

I’d say that this chapter gives a good overview of how OSS is created and why it exists, though I would say that the chapter is a bit outdated (it mentions SourceForge as a collaboration site that is commonly used, but nowadays SF is the last place you would want to host a project). I’d recommend reading over the chapter if you don’t have much knowledge about open source software, but at the same time if you already actively participate in OSS communities there is probably not much within the chapter that you don’t already know.