In my last post I showed off how my cinematic system works. The result was this nice little gif:
The caption said, "The navigation system is a little funky and needs a rewrite"... Well, now I've done a large refactor of the navigation system and it works much better.
Before the Refactor
First, a bit on how things used to work. The pathfinding algorithm is a very simple implementation of A*. When an agent queries the navigation system, the world is broken down into a grid whose cell size is based on the agent's size. From there, the navigation system can query the world as needed to check whether a given cell is blocked.
Here's the same scene as before with the grid visible:
There are four types of cells here:
- Green cells are part of the final path
- Red cells are blocked by something
- Yellow cells were evaluated as candidates for the path (not blocked, and in the closed list)
- White cells were never evaluated (still in the open list)
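For the curious, here's roughly what that kind of grid-based A* boils down to - a minimal sketch in Python, where the `is_blocked` callback stands in for the navigation system's physics queries (the real system queries the world, not a set of walls):

```python
import heapq

def astar(start, goal, is_blocked):
    """Minimal grid A*: 4-way movement, Manhattan distance heuristic.

    `is_blocked(cell)` stands in for the physics query against the
    world - everything else here is illustrative, not the engine's code.
    """
    def h(c):
        return abs(c[0] - goal[0]) + abs(c[1] - goal[1])

    open_heap = [(h(start), 0, start)]   # (f, g, cell) - the "open list"
    came_from = {}
    best_g = {start: 0}
    closed = set()                       # the "closed list"

    while open_heap:
        _, g, cell = heapq.heappop(open_heap)
        if cell in closed:
            continue
        closed.add(cell)
        if cell == goal:
            # Walk back through came_from to build the final path
            path = [cell]
            while cell in came_from:
                cell = came_from[cell]
                path.append(cell)
            return path[::-1]
        x, y = cell
        for nxt in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
            if nxt in closed or is_blocked(nxt):
                continue
            ng = g + 1
            if ng < best_g.get(nxt, float("inf")):
                best_g[nxt] = ng
                came_from[nxt] = cell
                heapq.heappush(open_heap, (ng + h(nxt), ng, nxt))
    return None  # search exhausted - no path
```

The closed set above is exactly the yellow cells in the debug view; cells sitting in the heap when the search finishes are the white ones.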
There are some fairly obvious issues with the above pathfinding system. The first is that agent movement is fairly jerky; this is because the path-following algorithm is very naive (we'll come back to this later, as it isn't directly related to the navigation system).
The bigger issue shows up towards the end of the gif, when the two agents try to path to each other. To support generating paths to moving targets (and to account for dynamic obstacles), the "navigate to" action used by the cinematic system (along with other systems) re-queries the navigation system for a new path every so often. That interval is set fairly high by default - 2 seconds - because every navigation query runs to completion immediately, and enough queries in a single frame (or one sufficiently large query) can tank performance.
Before anyone asks - no, the answer was not to make things multi-threaded (navigation relies on the physics engine which is not thread-safe).
The New Navigation System
First, here's how things look with the new navigation system:
No more going up and left over and over again - the agents walk in (mostly) straight lines (again, we'll come back to this)! No more weird dancing around when they path to each other!
Here's the same scene again with debugging on:
Probably the most important change is that navigation now updates almost every frame. And, while it might not be obvious here, there's little risk of performance problems. Why? Because pathfinding no longer needs to complete on the same frame the navigation system is queried!
Here's the new flow for how navigation works:
- Agent requests a "token" from the navigation system
- Agent updates token with pathfinding parameters (agent size, start/end, etc)
- (every frame) the navigation system loops through the list of active tokens and processes pathfinding for each until a certain "navigation budget" is reached
- (every frame) agent checks for a new path from the token
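Concretely, that token flow might be sketched like this - Python for illustration, with `NavigationToken`, `request_token`, and the budget accounting all being hypothetical names, and the actual pathfinding step stubbed out:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class NavigationToken:
    # All names here are made up for illustration
    start: Optional[tuple] = None
    goal: Optional[tuple] = None
    path: Optional[list] = None    # latest completed path, if any
    cancelled: bool = False

class NavigationSystem:
    def __init__(self, budget):
        self.budget = budget       # max units of pathfinding work per frame
        self.tokens = []

    def request_token(self):
        token = NavigationToken()
        self.tokens.append(token)
        return token

    def update(self):
        """Called once per frame: spread pathfinding work across tokens."""
        self.tokens = [t for t in self.tokens if not t.cancelled]
        spent = processed = 0
        for token in self.tokens:
            if spent >= self.budget:
                break              # out of budget - resume next frame
            spent += self._step_pathfinding(token)
            processed += 1
        # Rotate so tokens that missed out this frame go first next frame
        self.tokens = self.tokens[processed:] + self.tokens[:processed]

    def _step_pathfinding(self, token):
        # Stand-in for real pathfinding: publish a trivial "path" and
        # report one unit of work spent
        token.path = [token.start, token.goal]
        return 1
```

The rotation at the end of `update` is one (assumed) way to keep things fair: a token that ran out of budget this frame gets first pick next frame instead of starving.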
As long as an agent hasn't cancelled its navigation token, the system will continue to update it with new pathfinding results as often as possible. This lets agents react to changes in the environment in real time without overburdening the navigation system on any single frame.
As for the "navigation budget", this could theoretically be anything - ideally there'd be a fixed time slice in which the navigation system can do its thing, but right now it's just a hard limit on how many physics queries can happen in a single frame. If a pathfinding result is incomplete when the budget is reached, the navigation system pauses and picks up where it left off next frame.
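One way to get that pause-and-resume behavior is to write the search as a generator that yields once per physics query, so the caller can stop it at any point and resume next frame. This is a sketch of the idea rather than the engine's actual implementation - I've used breadth-first search to keep it short, but the structure is the same for A*:

```python
from collections import deque

def resumable_search(start, goal, is_blocked):
    """Search written as a generator: it yields once before each
    `is_blocked` call (the expensive physics query), so the caller
    controls exactly how many queries happen per frame."""
    frontier = deque([start])
    came_from = {start: None}
    while frontier:
        cell = frontier.popleft()
        if cell == goal:
            path = []
            while cell is not None:
                path.append(cell)
                cell = came_from[cell]
            return path[::-1]   # becomes StopIteration.value
        x, y = cell
        for nxt in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
            if nxt in came_from:
                continue
            yield  # one unit of budget spent on the query below
            if not is_blocked(nxt):
                came_from[nxt] = cell
                frontier.append(nxt)
    return None

def drive(search, budget_per_frame):
    """Advance `search` by at most `budget_per_frame` queries.
    Returns ('done', path) when finished, ('pending', None) otherwise."""
    for _ in range(budget_per_frame):
        try:
            next(search)
        except StopIteration as stop:
            return "done", stop.value
    return "pending", None
```

Calling `drive(search, budget)` once per frame gives exactly the behavior described above: the search simply freezes mid-expansion when the budget runs out and continues from the same spot next frame.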
There are still some improvements to be made - primarily in the pathfinding algorithm itself. There are a bunch of limitations with the current implementation:
- If an agent is in an open (effectively infinite) space but trying to path somewhere that isn't possible to reach, it will expand the pathfinding grid forever. Currently this is mitigated by limiting how far the pathfinder can search, but a better solution would be to search from both ends at the same time: if either end runs out of cells to expand, we know that end is in an enclosed space and no path is possible.
- Partial paths aren't supported - ideally a path could at least get close to a target even when the target itself can't be reached.
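For what it's worth, the two-sided idea from the first bullet can be sketched as a pair of flood fills run in lockstep - a hypothetical helper, not anything implemented yet:

```python
from collections import deque

def reachable(start, goal, is_blocked, max_steps=10_000):
    """Flood-fill from both ends in lockstep. If either frontier
    empties, that side is sealed in an enclosed space and no path can
    exist - the search never has to expand forever into the open side."""
    sides = [({start}, deque([start])), ({goal}, deque([goal]))]
    for _ in range(max_steps):
        for seen, frontier in sides:
            if not frontier:
                return False          # one side is fully enclosed
            x, y = frontier.popleft()
            for nxt in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
                if nxt not in seen and not is_blocked(nxt):
                    seen.add(nxt)
                    frontier.append(nxt)
        if sides[0][0] & sides[1][0]:
            return True               # the two searches met
    return True  # step budget exhausted - assume reachable
```

The nice property is the asymmetry: the enclosed side exhausts its tiny region almost immediately, so the open side never gets the chance to run away.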
These are problems to tackle in the future - for now I think the system works pretty well.
Better Path Following
The other big piece of this was rewriting how path following works. Originally, path following was very naive - the agent would internally keep track of the next point on the path and try to hit it exactly, even if that meant overshooting it at first. Another side effect was that diagonal movement produced near-constant horizontal-vertical direction changes.
My solution was twofold. The first part was to swap to a path-following steering behavior, which uses a "predicted" position to decide where to head next and allows a bit of leeway rather than strictly adhering to the path.
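As a rough sketch of that kind of predictive path following (the function and parameter names here are mine, not the engine's):

```python
def follow_path(position, velocity, path, lookahead, leeway):
    """Predict where the agent will be `lookahead` seconds from now,
    find the closest point on the path to that prediction, and steer
    toward a point slightly ahead of it. Only issues a correction when
    the prediction strays more than `leeway` from the path.
    Returns a desired direction, or None to keep the current course."""
    px = position[0] + velocity[0] * lookahead   # predicted position
    py = position[1] + velocity[1] * lookahead

    best_d2, best_target = float("inf"), None
    for (ax, ay), (bx, by) in zip(path, path[1:]):
        # Project the predicted position onto segment a-b
        abx, aby = bx - ax, by - ay
        denom = abx * abx + aby * aby or 1.0
        t = max(0.0, min(1.0, ((px - ax) * abx + (py - ay) * aby) / denom))
        cx, cy = ax + t * abx, ay + t * aby
        d2 = (px - cx) ** 2 + (py - cy) ** 2
        if d2 < best_d2:
            best_d2 = d2
            # Aim a little further along the segment than the closest
            # point so the agent keeps making forward progress
            tt = min(1.0, t + 0.25)
            best_target = (ax + tt * abx, ay + tt * aby)

    if best_d2 <= leeway ** 2:
        return None                    # within the leeway - don't correct
    dx = best_target[0] - position[0]
    dy = best_target[1] - position[1]
    length = (dx * dx + dy * dy) ** 0.5 or 1.0
    return (dx / length, dy / length)
```

The `None` return is the leeway in action: as long as the predicted position stays near the path, the agent is left alone instead of being yanked back onto it every frame.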
The second part focused on making movement a bit more natural - rather than allowing instant turns, agents now interpolate between their current movement direction and their desired direction. This is a halfway point between the instantaneous movement used for the player and a more classical steering agent that uses forces to control movement. Here's what that looks like:
The orange line is the "desired" movement direction as calculated by the path-following behavior. The green line is the "real" movement direction - notice that the orange line changes constantly while the green line changes slowly. The algorithm still needs some tweaking, but the result is, to my eyes, pretty good.
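The interpolation itself can be sketched as a per-frame rotation cap - again Python for illustration, with directions as angles in radians for simplicity:

```python
import math

def turn_toward(current, desired, max_turn, dt):
    """Rotate the current movement direction toward the desired one by
    at most `max_turn * dt` radians per frame. This is what lets the
    green line lag smoothly behind the jittery orange one instead of
    snapping to it."""
    # Shortest signed angular difference, wrapped into (-pi, pi]
    diff = math.atan2(math.sin(desired - current),
                      math.cos(desired - current))
    step = max_turn * dt
    if abs(diff) <= step:
        return desired                 # close enough - snap to it
    return current + math.copysign(step, diff)
```

Scaling the step by `dt` keeps the turn rate frame-rate independent, and wrapping via `atan2` makes the agent always take the shorter way around (e.g. from 350° to 10° it turns through 0°, not back through 180°).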
That's it for now - I'll leave off with a gif of a bunch of agents all trying to navigate to the player at the same time to attack them: