
Seekers - Ability System

A major aspect of ARPGs is the abilities of the playable characters and of their enemies. A warrior spinning both blades around themselves, pushing back enemies; a mage firing a series of explosions forward, damaging friend and foe alike; a boss striking the floor in rage, causing the whole environment to shake.

There are usually a lot of these abilities, their effects are wide-ranging, and they're subject to a lot of iteration and tweaking. So if you're making an ARPG, you ideally want a system that makes it easy to specify many types of effect, create new abilities quickly, and tweak existing abilities at run time, all with minimal code duplication, and that is easy to extend with new features.

The basis of this system in Seekers is a set of scriptable objects of different types which specify the various aspects of an ability. Scriptable objects fulfil the "as many as you want", "easy to create" and "easy to tweak" requirements. The create asset menu attribute allows you to create assets to represent your abilities (or their sub-elements) from scratch; being Unity asset files, you can serialize and link them from character definitions or scripts, and you can duplicate existing abilities to make similar abilities or variations. You can also change their values at run time and see the changes immediately, without a recompile.

The "minimal code duplication" and "easy to extend" requirements can only be solved by making good choices when designing this configuration data. Well designed data structures where you don't need to duplicate information naturally leads to implementation code that is self-contained and modular, and if you're careful about your semantics (i.e. making sure your data structures accurately represent the problem they're trying to solve) extensibility tends to take care of itself.

So the focus of this post is not how to code the particulars of implementation but how to structure the data by which you specify the abilities, as modular specification naturally leads to modular code. Now that's all very high level, so let's actually get into some specifics: I'm going to present some data structure classes and then explain the reasoning behind them and how they're intended to work. The implementation of the game code to support these definitions is left as an exercise for the reader ;).

Ability Definition

using UnityEngine;

[CreateAssetMenu(menuName = "Definitions/Ability")]
public class AbilityDefinition : ScriptableObject
{
  public float Duration;
  public float Cooldown;
  public Outcome[] Outcomes;
}

[System.Serializable]
public class Outcome
{
  public string Id;
  public Trigger Trigger;
  public Targeting Targeting;
  public GameAction[] Effects;
}

An ability in Seekers is essentially a timeline: a sequence of outcomes, each comprised of a trigger, targeting information and effects. I like to think of each section as answering a question we need answered in order to figure out which of our bits of self-contained code needs to run, and the context in which it needs to execute.

  • Trigger - when does this happen?
  • Targeting - who is it happening to?
  • Effects - what happens?

Triggers

This is the most straightforward one; the most basic trigger is an execution time, that is, a certain amount of time after the ability is triggered, the specified effects should happen.

[System.Serializable]
public class Trigger
{
  public float ExecutionTime;
  public bool IsNormalizedTime;
}

Allowing the time to be specified as either a normalized time or an absolute time gives the option to easily speed up or slow down the entire ability without having to re-enter the execution time of every outcome. It also makes it easy to line effects up with points in animations, presuming you start playing an animation at the start of the ability (it's easy to preview an animation in Unity, find the point you want something to happen, note the normalized playback time and enter it as the execution time).
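As a minimal sketch (an assumption on my part, not code from the post), resolving a trigger into a time in seconds might look like:

public static float GetExecutionTimeSeconds(Trigger trigger, AbilityDefinition ability)
{
  // When IsNormalizedTime is set, ExecutionTime is a fraction of the
  // ability's duration rather than an absolute time in seconds
  return trigger.IsNormalizedTime
    ? trigger.ExecutionTime * ability.Duration
    : trigger.ExecutionTime;
}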

This could be extended to allow for specifications such as "1 second after a previous outcome starts", "immediately after a previous outcome finishes", or something more conditional such as "when the ability executor takes damage". I've not felt the need to add these options in Seekers yet, but it bodes well for extensibility that you could add them without changing the concept.

Targeting

Generally, things in games happen either directly to characters or at a position relative to them, so we need to know who the effects are going to happen to, or happen relative to. For example, you might want to damage everyone within 5 units of a point 3 units in front of the character executing the ability, you might want to spawn a VFX object 3 units in front of the ability executor, or you might want to target the targets of a previous outcome.

public enum TargetingType
{
  Self,
  TargetFilter,
  TargetsOfPreviousOutcome,
}

[System.Serializable]
public class Targeting
{
  public TargetingType Type;
  public TargetFilter TargetFilter;
  public string PreviousOutcomeId;
}

This structure gives three ways to pick targets: 'self', a quick shorthand for the character executing the ability; a set of targets as specified by a 'target filter'; and the targets of a previous outcome. The last one saves us from recalculating the same target filter multiple times, and also gives us a way to target characters based on the game state at a previous point during the ability, rather than only being able to target characters based on where they are when an outcome executes.

A target filter is a way of specifying a set of characters by taking a set of potential targets and then applying a series of filters in turn to narrow down the set until you're left with who you want to target. Look at this gist if you'd like to see an example of what a data class for that might look like.
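To make the intent concrete, here's a hedged sketch of how an outcome's targeting might be resolved at run time; the executor parameter, the TargetFilter.Apply method and the previousTargets cache are all assumptions rather than Seekers' actual implementation:

public List<GameObject> ResolveTargets(Targeting targeting, GameObject executor, Dictionary<string, List<GameObject>> previousTargets)
{
  switch (targeting.Type)
  {
    case TargetingType.Self:
      // Quick shorthand for the character executing the ability
      return new List<GameObject> { executor };
    case TargetingType.TargetFilter:
      // Apply the series of filters to the set of potential targets
      return targeting.TargetFilter.Apply(executor);
    case TargetingType.TargetsOfPreviousOutcome:
      // Reuse the targets calculated by an earlier outcome
      return previousTargets[targeting.PreviousOutcomeId];
    default:
      return new List<GameObject>();
  }
}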

Game Actions

Now this is where it gets fun! Having done the boring but necessary bits of specifying when something should happen and who it should happen to, now we can start making all the different things you might want to do!

using System.Collections.Generic;
using UnityEngine;

public class GameAction : ScriptableObject
{
	// Element type is illustrative; substitute your own target/character type
	public virtual void Invoke(List<GameObject> targets) { }
}

[CreateAssetMenu(menuName = "Definitions/Game Action/Inflict Damage")]
public class InflictDamageAction : GameAction
{
	public enum DamageType
	{
		Physical,
		Fire,
		Water,
		Earth,
		Wind,
		Electric,
		Heart,
	}

	public int InflictedDamage;
	public DamageType InflictedDamageType;
}

[CreateAssetMenu(menuName = "Definitions/Game Action/Animation")]
public class AnimationAction : GameAction
{
	public string AnimationName;
	public float Speed = 1f;
}

[CreateAssetMenu(menuName = "Definitions/Game Action/Spawn VFX")]
public class SpawnVFXAction : GameAction
{
	public GameObject Prefab;
	public Vector3 Rotation;
	public Vector3 OffsetFromTarget;
	public float Scale = 1f;
	public bool ParentToTarget;
	public string NamedTransformParent;
}

[CreateAssetMenu(menuName = "Definitions/Game Action/Camera Shake")]
public class CameraShakeAction : GameAction
{
	public enum ShakeType
	{
		Offset,
		Rotation,
	}

	public ShakeType Type = ShakeType.Offset;
	public float Frequency = 24;
	public AnimationCurve DampingCurve = AnimationCurve.EaseInOut(0, 1, 1, 0);
	public Vector3 Direction = Vector3.down;
	public float Duration = 3f;
}

I've used polymorphism, inheriting from ScriptableObject, to specify the different types of game action, so we only need to specify the information pertinent to each type of action. This does mean that for each action we'll need to create a new asset in the project and link them in the inspector. Compare this to triggers and targeting, which contained all the information for all options and were displayed inline. Let's see how that looks in the editor.

Inspector view of an ability definition

We have to use scriptable objects if we want to use polymorphism, as Unity's default inspector cannot display polymorphic fields directly inline, so we need to use asset referencing as a workaround. This is fine for complex and reused data such as game actions, but we probably don't want an asset in the project for every conceivable time we might trigger some effects, so using serializable classes for those makes more sense.

A couple of things worth noting: whilst I've displayed the code as if it were in a single file, each ScriptableObject class needs to have its own file whose name matches the class name in order for duplicating assets in the editor to work. Additionally, you'll notice that the base class has an Invoke method but the child classes don't; they would need to define an override in a full implementation, but the focus of this post is the configuration classes!
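For illustration, here's a hedged sketch of what the override on InflictDamageAction might look like; the Health component and its TakeDamage method are assumptions, not part of the post:

// Inside InflictDamageAction; Health and TakeDamage are hypothetical
public override void Invoke(List<GameObject> targets)
{
	foreach (var target in targets)
	{
		var health = target.GetComponent<Health>();
		if (health != null)
		{
			health.TakeDamage(InflictedDamage, InflictedDamageType);
		}
	}
}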

Abstracting GameAction in this way gives us additional benefits: as well as exponentially increasing the power of the system each time we add a new action type, we can easily start creating more generic action types.

[CreateAssetMenu(menuName = "Definitions/Game Action/Compound Action")]
public class CompoundAction : GameAction
{
	public GameAction[] Actions;
}

[CreateAssetMenu(menuName = "Definitions/Game Action/Timeline")]
public class TimelineAction : GameAction
{
	public float Duration;
	public Outcome[] Outcomes;
}

Summary

Phew, that was quite a lot; hopefully you get the idea of how you could use scriptable objects to create the basis for an extensible skill system for an ARPG! I've actually glossed over a fair few extra things which I've implemented in Seekers, to keep this core understandable. One example: Seekers also has the concept of 'movement curve definitions'; each ability can optionally specify one so that characters can rush towards their targets, and these definitions can also be used to specify knockback on abilities and the movement of projectiles.

The next Seekers post will probably focus on how I've been working on feeding player input into abilities; the system detailed in this post is good at responding to button presses, but it doesn't allow for aiming abilities independently of character movement. There's also a design challenge in how to facilitate aimable abilities across multiple platforms. I also apologise if this post was a little dry; I promise more gifs in the next post.

Posted by Delph

Seekers - Animation System

The first thing I do in any new game is get the player character moving and animating. In Unity there are three main options for animation:

  • Using the legacy animation component

    Simple and direct playback of individual animations, but precludes humanoid animation retargeting.

  • Using the animator component with animation controllers

    Fine explicit control of animations and transitions, built using an in-editor visual state machine.

  • Using the animator component with a playable graph

    Allows you to build state machines programmatically at run time, which can be changed on the fly.

For an ARPG style game like Seekers, the primary use case is the very simple, direct "play this clip". I don't particularly enjoy using the editor to build visual state machines, and needing a different number of clips per character would mean either making many state machines or one very large machine; however, I may wish to use humanoid animation sets across multiple characters. So I decided to use a playable graph with the animator component to try to recreate the simplicity of use of the legacy animation component.

Initial Setup

The Unity manual's playable examples page provides all the information needed to set up a playable graph to blend between animation clips (scroll down to the "Blending an AnimationClip and AnimatorController" example). It's relatively easy to expand this example to support an arbitrary number of animation clips.

using UnityEngine;
using UnityEngine.Animations;
using UnityEngine.Playables;

[RequireComponent(typeof(Animator))]
public class AnimationController : MonoBehaviour
{
	const string ANIMATION = "Animation";

	PlayableGraph _playableGraph;
	PlayableOutput _playableOutput;
	AnimationMixerPlayable _mixerPlayable;
	AnimationClipPlayable[] _clipPlayables;

	void OnDestroy()
	{
		_playableGraph.Destroy();
	}

	public void Init(AnimationClip[] clips)
	{
		_playableGraph = PlayableGraph.Create();
		_playableGraph.SetTimeUpdateMode(DirectorUpdateMode.GameTime);
		_playableOutput = AnimationPlayableOutput.Create(_playableGraph, ANIMATION, GetComponent<Animator>());

		_mixerPlayable = AnimationMixerPlayable.Create(_playableGraph, clips.Length);

		_playableOutput.SetSourcePlayable(_mixerPlayable);

		_clipPlayables = new AnimationClipPlayable[clips.Length];
		for (int i = 0, l = _clipPlayables.Length; i < l; i++)
		{
			_clipPlayables[i] = AnimationClipPlayable.Create(_playableGraph, clips[i]);
			_playableGraph.Connect(_clipPlayables[i], 0, _mixerPlayable, i);
		}

		_playableGraph.Play();
	}

	public bool IsComplete(int index)
	{
		return _clipPlayables[index].GetTime() > _clipPlayables[index].GetAnimationClip().length;
	}

	public float GetNormalizedTime(int index)
	{
		return (float)_clipPlayables[index].GetTime() / _clipPlayables[index].GetAnimationClip().length;
	}

	public void SetTime(int index, float time)
	{
		_clipPlayables[index].SetTime(time);
	}

	public void SetWeight(int index, float weight)
	{
		_mixerPlayable.SetInputWeight(index, weight);
	}

	public void SetSpeed(int index, float speed)
	{
		_clipPlayables[index].SetSpeed(speed);
	}

	public void SetDuration(int index, float duration)
	{
		SetSpeed(index, _clipPlayables[index].GetAnimationClip().length / duration);
	}
}

It's worth noting this controller doesn't internally ensure that the total weight of all clips adds up to one; that's left to the controlling code. Here's what that looks like when hooked up to a test script that swaps between an idle clip and another one of our array of clips.
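For reference, a hypothetical test script along those lines (not necessarily the exact one used below) might look like:

using UnityEngine;

[RequireComponent(typeof(AnimationController))]
public class AnimationTester : MonoBehaviour
{
	[SerializeField] AnimationClip[] _clips;
	AnimationController _controller;

	void Start()
	{
		_controller = GetComponent<AnimationController>();
		_controller.Init(_clips);
		Play(0);
	}

	// Snaps the mixer to a single clip at full weight, keeping the total
	// weight of all clips at one as described above
	public void Play(int index)
	{
		for (int i = 0; i < _clips.Length; i++)
		{
			_controller.SetWeight(i, i == index ? 1f : 0f);
		}
		_controller.SetTime(index, 0f);
	}
}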

Single Animation Playback

The animations in this set all start and finish near the idle pose, so this looks pretty good even without any transition blending!

Movement

So far, so straightforward; onto the fun bit, movement animation. As Seekers is intended to be multi-platform, I'm going with the classic ARPG style where you can only run forwards and change your facing direction; it's quite difficult to control independent facing and movement directions with a gamepad or touch screen! This simplifies our problem to playing a forward-moving animation such that it looks appropriate for the speed at which a character is moving.

To start with we're going to need to know what speed of movement our animations correspond to. If you were authoring your own animations you'd hopefully already know, but as I'm using pre-bought assets (alas I'm not an animator) I need to determine this.

I found the best way to do this is by eye, with a couple of test scripts: one which can adjustably scroll a material, and another to adjust the time scale of playback. Then simply drop the model into a test scene, get the desired clip playing, and adjust the scrolling material under the character until it looks about right.

Determining Clip Travel Speed

Whilst we're gathering meta-data about our animations, let's figure out the times in the animation at which each foot hits the ground, so we have the option of playing sound effects. The easiest way to do this is probably to select the clip in the project view and use the animation preview section at the bottom of the inspector window, dragging the animation position until the foot strikes the floor and noting the normalised playback time.

As I want movement speed to be easily tweakable, and potentially changeable, I'm going to need some method of adjusting clips to match the speed a character is actually moving at. Two approaches occur to me: dynamically changing the playback speed of the current clip, or blending between two clips of different movement speeds.

Approach 1: Adjust Playback Speed

Adjusting playback speed itself isn't complex; it's pretty much just calling something like SetSpeed(movementClipIndex, currentSpeed / movementClipTravelSpeed) on the animation controller code above, and if you've only got one movement clip, clip selection is just picking the idle clip, or the movement clip if you have a non-zero speed. The interesting bit is if you want to use multiple movement clips with different speeds, e.g. walk, run, sprint.

The approach I took was to add even more meta-data to the movement clips, defining the speed range in which each clip applies, and then picking the clip whose range the current speed fell within, or the closest otherwise. I also made it prioritise the current clip if the speed was still within the current clip's range; paired with overlapping speed ranges, this prevents slight variations in input causing the movement animation to flick back and forth (see the sketch below).
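A hedged sketch of that selection logic follows; the MovementClipMetaData type and its field names are my assumptions rather than the project's actual code:

[System.Serializable]
public class MovementClipMetaData
{
	public AnimationClip Clip;
	public float TravelSpeed;	// World units per second at a playback speed of 1
	public float MinSpeed;		// Speed range in which this clip applies
	public float MaxSpeed;
}

// Pick the clip whose speed range contains the current speed, preferring the
// current clip so slight input variations don't flick between animations,
// falling back to the clip with the nearest range
int PickMovementClip(MovementClipMetaData[] clips, int currentIndex, float currentSpeed)
{
	if (InRange(clips[currentIndex], currentSpeed))
	{
		return currentIndex;
	}
	int closest = currentIndex;
	float closestDistance = float.MaxValue;
	for (int i = 0; i < clips.Length; i++)
	{
		if (InRange(clips[i], currentSpeed))
		{
			return i;
		}
		float distance = Mathf.Min(
			Mathf.Abs(currentSpeed - clips[i].MinSpeed),
			Mathf.Abs(currentSpeed - clips[i].MaxSpeed));
		if (distance < closestDistance)
		{
			closestDistance = distance;
			closest = i;
		}
	}
	return closest;
}

bool InRange(MovementClipMetaData clip, float speed)
{
	return speed >= clip.MinSpeed && speed <= clip.MaxSpeed;
}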

Motion: Scaling Playback Speed

There are a few observations to be made on this approach, having tried it both with a pair of clips for walk and run, and with a single run movement animation:

  • Walking clips between 0 and the clip's travel speed look pretty good.
  • Running clips at very low speeds look pretty bad, more like 'slow motion' than natural movement.
  • Running clips played at higher speeds look more and more comical as speed increases.
  • The transitions between different movement clips are jarring.

The last point is made worse by the fact that, in this set of animations, the walk animation is more upright than both the idle and run clips, so the character briefly stands up straighter when accelerating, which looks very unnatural.

Approach 2: Blending Movement Clips

If you're only using a single movement clip, then this is pretty straightforward; the code would look something like this:

float blendWeight = Mathf.Min(1.0f, speed / movementClipTravelSpeed);
SetWeight(movementClipIndex, blendWeight);
SetWeight(idleClipIndex, 1 - blendWeight);

As with the previous method, the interesting bit is when you have multiple movement clips. If the animations being used were made with blending in mind, the movement clips will have the same number of run cycles, the same footfall times, and the same duration. All three of these things need to be true for you to be able to simply blend between the different movement clips based on current movement speed. Alas, this is not the case with the animation sets I'm using, so we're also going to need to change the playback speeds so the clips last the same duration, and offset the playback so the footfalls happen at approximately the same time.

Having decided on the desired playback duration, and armed with the normalised footfall times meta-data, we can use something like the following to set the necessary offsets and playback speeds.

[SerializeField] MovementBlendSettings _moveBlendSettings; // Contains desired clip length and desired left footfall time
MovementBlendValues[] _movementBlendValues; // Store for calculated offsets and playback speeds

public void CalculateBlendValues()
{
  // Calculate playback speeds and offsets for clip blending
  _movementBlendValues = new MovementBlendValues[_moveClips.Length];
  for (int i = 0, l = _moveClips.Length; i < l; i++)
  {
    _movementBlendValues[i] = new MovementBlendValues();
    int cycleCount = _moveClips[i].LeftFootfallNormalizedTimes.Length;
    _movementBlendValues[i].PlaybackSpeed = _moveClips[i].Clip.length / (cycleCount * _moveBlendSettings.DesiredClipLength);
    _movementBlendValues[i].Offset = _moveClips[i].LeftFootfallNormalizedTimes[0] - _moveBlendSettings.NormalizedLeftFootfallTime;
    if (_movementBlendValues[i].Offset < 0)
    {
      // Wrap negative offsets back into the 0..1 range
      _movementBlendValues[i].Offset += 1f;
    }
  }
}

Having calculated the necessary offsets and playback speeds, we can then blend between clips as in the simple pseudocode at the start of this section, so long as we pick the correct clips to blend between and we adjust each clip's travel speed for its new playback speed.
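Concretely, for a speed that falls between the (playback-adjusted) travel speeds of two adjacent clips, the blend might look something like this sketch, with illustrative variable names:

// Hedged sketch: blend between the two clips whose adjusted travel speeds
// bracket the current speed
float t = Mathf.InverseLerp(slowerClipTravelSpeed, fasterClipTravelSpeed, speed);
SetWeight(slowerClipIndex, 1f - t);
SetWeight(fasterClipIndex, t);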

In this approach we only need to set the playback speed of the playables on initialisation; however, once in idle (i.e. not moving), the clip playables' current times should be set back to their calculated offsets, so that when the character sets off it always starts from the 'start' of the animation clip. This is necessary because even when the weight of a playable is set to 0 in a mixer, its time will still advance as the mixer plays; on the plus side, this keeps all the offset movement clips in sync whilst moving, even if not all of them have a non-zero blend weight at any given time.
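A minimal sketch of that reset, assuming the stores from the earlier snippets and a hypothetical mapping from movement clips to their playable indices:

void ResetMovementClipTimes()
{
	for (int i = 0, l = _moveClips.Length; i < l; i++)
	{
		// Offset is normalised, so convert to seconds for SetTime
		SetTime(_moveClipPlayableIndices[i], _movementBlendValues[i].Offset * _moveClips[i].Clip.length);
	}
}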

Motion: Blending Clips

Observations on this approach:

  • The character seamlessly transitions between different speeds.
  • When using only one running movement clip, setting off looks far better than with adjusted playback speeds.
  • Only allows you to 'match' movement speeds up to the travel speed (adjusted for playback speed) of your fastest travelling movement clip.
  • Running clips continue to look natural even when movement speed is far higher than the clip's travel speed.
  • The method of changing the playback speeds is only really suitable for slightly different clip durations per run cycle, where the difference in travel speed is caused primarily by stride length.

Further Improvements

As I am only going to be using an idle and a running clip for my main characters (due to the clips in the animation set I have), it makes sense for me to go with the second approach to movement as described above. However, if I were to want the comical overspeed run, or were to animate a character with suitable walk animations, I would probably want to create a hybrid approach, which could use either approach when moving between no movement and the slowest clip (combined with a more classical transition-style blend in), and when dealing with overspeed from the fastest clip, but retain the blending for transitions between clips.

You can look at the source code here. This is very much only a first pass at a character animation controller; there's plenty more I'm planning to add and polish up, but it's enough to start prototyping with, and it looks pretty good with the animation set I'm using.

Something sorely lacking from this controller is transition blending, which is pretty much a requirement for animation sets where animations don't start and end at the same idle pose, or if you need to transition mid-playback to another animation. My plan is to incorporate this by separating the movement clips into a separate sub-mixer on the graph; then I'll be able to write a relatively simple transition blender which only needs to transition channels on the source mixer of the graph between 0 and 1, whilst allowing me to continue to blend movement clips to match the character's movement speed.

The next Seekers systems post will probably be about the ability system: the configuration driven timeline that allows me to specify a large variety of character and enemy abilities without having to write bespoke code each time. I expect that post to be more conceptual, and I'll be posting even less code, as it involves a lot of code!

I hope you found this post interesting. It took a little longer to put together than I expected, but it was a nice opportunity to review some code from early in the project. I'm going to go back to focusing on creating the minimum viable version of Seekers for a while now!

Posted by Delph

Seekers

Seekers is what I've decided to call my little work-in-progress ARPG / roguelite game! The general idea is that each of the player characters is seeking something; their motivations for adventure are different. Whilst I don't expect to put much narrative into the game, I think it's helpful to have lore to guide development, whether or not you directly show it to the player.

As I alluded to in my previous posts, this is intended as something of a practice project: trying to find something that falls into the "able to make", "want to make" and "want to play" categories (reference). My intended approach is to make the minimum possible that constitutes a game, and then add features and improvements for as long as the project holds my interest and / or there is interest from others.

In order to try and generate said interest from others, to allow players to help guide development, and to figure out if anyone wants to play this kind of game, I'm going to be making work-in-progress builds publicly available as soon as I have that minimum viable game. As for what that minimum viable game entails, I'm currently thinking a single playable character and a single level that takes a few minutes to play through, ending in a boss battle.

As there has been some interest on twitter, I also plan to write blog posts outlining the major systems I've built that help me make this game as a solo developer. Thus far those are the animation system using the Playables API, the configuration driven timelines I use to create abilities for players and enemies, and the trigger / receiver system I use for 'scripting' events in the level. However, as I tackle new features, say boss AI or level creation, I'm quite likely to make more!

As text-only blog posts seem to be out of fashion, here is a gif of me testing the trigger / receiver system by spawning enemies when the player interacts with an object. And yes, the main character interacts with things by hitting them.

Posted by Delph

Mobile ARPG

In the first half of this year, in my spare time, I was working on the pixel perfect renderer and the multiplayer top down shooter, as well as doing a little pixel art. At work I was lucky enough to be leading a small team prototyping a multiplayer mobile action RPG, in the style of Nonstop Knight (but with more permanence and boss fights, amongst other changes). The original idea was that your character would move and perform basic attacks without prompting, and you would trigger special attacks. We couldn't get publishers to bite, so we moved towards no automation, more in line with classic ARPGs and mobile MOBAs (like Mobile Legends: Bang Bang). This did result in a much more compelling game, but the publisher we were in talks with already had a similar game so declined to take up full production, and sadly the project was put on ice.

During this time I played a number of mobile ARPGs as research; in addition to the above, HIT is an interesting example that sits halfway between idle and full control, which I spent a fair bit of time on once I understood the meta game. Before this point I'd always avoided trying to make an ARPG myself, because I found Torchlight and Diablo 3 on PC quite frustrating; the click to move and click to attack often caused lots of unintentional behaviour. I had assumed that in order for me to be happy when creating an ARPG, I would need to make full 8-way movement with independent aiming - Battlerite and Wizard of Legend are excellent examples of this done well. With no animation experience myself - and not knowing any animators - doing this in 3D could be prohibitively expensive in either creation time or purchase costs of animations. And if I was going to take that on, why not create a top down shooter or first person shooter, genres I enjoy more? However, having played these mobile games during my research, and more recently Diablo 3 on Switch, I've been convinced that making aiming and attacking independent inputs is enough to radically improve the gameplay experience, without needing independent facing and movement directions.

Having not yet had my fill of creating ARPGs, I decided I would make one myself. As part of the prototyping process at work I was able to work with some assets that I had been eyeing on the asset store for a while, and from that I knew they would make a good basis for the game. I figured that using these asset sets as a purposeful constraint on the game design would help keep the scope manageable. So I did a little shopping at the BitGem store.

Barbarian Girl Sonja Skeleton Swarm

I then set about reimplementing from scratch a variation of the data driven skill system we created at work. That system was engineered to allow designers to use a content management system to create and tweak character skills / abilities independently from art and animation assets. This allowed for rapid prototyping and iteration, and it also had the additional benefit of resulting in a smaller (if somewhat harder to debug) code base. It's an approach with quite a lot of benefits, and whilst I don't have my own personal CMS, it's relatively easy to use scriptable objects in Unity to create a similar configuration system.

Working on this - as yet untitled - mobile ARPG has been my main personal project in the second half of the year (along with the recode of this website). It's been fun to focus on making the most of assets I've bought, and on creating tooling and systems which suit my needs as a solo developer. Focusing on visuals early and explicitly targeting mobile has also meant I've been able to look at systems in Unity I've not used before, like the Playables API and light probes / lightmapping.

Mobile ARPG: Training Room

That concludes the "what I've been up to in 2018" posts, just in time for 2019 to start! I hope to make future posts on this project a bit more in-depth and focused on a particular aspect or feature. Now I just have to think of a name for this game...

Posted by Delph

Retrospective: Asteria Prototyping

There's a particular game design I've had in the back of my head for a few years now: a sci-fi cooperative shooter. Originally the idea was a mash-up of Left 4 Dead and Phantasy Star Online. Valve's cooperative shooter was the first time I'd seen a game so successfully engender teamwork, whilst still being a very immersive experience; the 'AI director' tech meant it took a lot longer before it became the norm to game its systems. I'd played PSO on the Dreamcast a few years earlier, and enjoyed the world - particularly the first few levels - the aesthetic, and the space opera feel of the main story, even if the mechanics were a little clunky.

Sometime after playing L4D I found Alien Swarm, a free top down shooter from Valve with a strong 'Aliens' vibe. It seemed to incorporate similar swarming logic in its enemies, along with class specific mini-games. The fact it had classes and was a softer coop made it naturally closer to PSO, and yet it still had the high skill ceiling / potential for mastery of the first person shooter which PSO had lacked. So the design shifted to being closer to a mash-up of Alien Swarm and Phantasy Star Online.

Alien Swarm Phantasy Star Online

A quick aside: you can still play both Alien Swarm and Phantasy Star Online for free, and I would recommend doing so! Replaying Phantasy Star Online has been an interesting experience for me. The story and tropes employed in the side quests are quite dated and some are kind of problematic, and whilst it suffers less than a lot of even modern shooters and MMOs, it does have some *ahem* interesting choices for some of the female characters' costume designs. That said, whilst the story hasn't aged well, I feel the mechanics and cadence of combat are far more compelling than I gave them credit for when I played as a teenager; that was probably the then Counter-Strike player in me thinking anything that wasn't twitch / reaction based wasn't really testing skill, although admittedly I still love twitch based games.

I've spent quite a lot of my spare time over the last few years making game prototypes, but not this one, as I felt it was too ambitious to achieve. I decided it was time to try anyway; perhaps part of the issue with my difficulty finishing projects was that I was focusing too much on what I thought I could achieve versus what I actually wanted to make.

After creating a placeholder player out of a pair of cubes that turned to face your cursor with a slight delay (the top cube looking at the aim point gives a surprising amount of character), I followed and adapted an FPS UNet tutorial to set up basic networking - client hosted with an authoritative host - which was surprisingly easy to get working. The tutorial did seem to want you to write almost every script as a network aware script, but I ended up moving towards having a single script per character to relay messages and commands, with individual components focusing on their own responsibilities.

Cube Player Prototypes Rigid Body Experiment

I created a test level out of Unity's prototyping geometry, and a basic system for spawning enemies which seek the nearest player and deal damage in a cone in front of them. Once all enemies in an area were killed, the doors would unlock and the players could continue. At this point I discovered that UNet's high level API does prediction and interpolation for you on network transforms that have a character controller component attached, but not for other networked transforms; it turns out you can avoid having to write your own by attaching that component on non-host clients.

Level Prototyping

I also proved out a few different weapon types (burst fire, mini-guns, grenade launchers), bought a humanoid 8-way run with independent aim animation set, and got that working with characters whose movement is not controlled by the root motion of the animations. Using the animation root motion may look good, but it generally feels unresponsive, which isn't really what you want in a top down shooter.

8 way run and independent aim

This all sounds like solid progress, right? Well, quite, but then it got to the point of needing to take another pass on the enemies, which were not intended to be classically humanoid. At that point, quite how much I had been relying on what I already knew how to do, or could buy off the asset store, became clear. So I stopped to take stock; it had been a few months of spare time at this point. I was quite happy with the prototype as a prototype, but it was quite clear that progress was going to slow down significantly, as I would have to either skill up or bring in other people to help, and if I was serious about trying to make this it was going to take years.

As you can guess from this being a retrospective, I wasn't quite ready to commit to that. I was worried that skilling up on this project that I cared about so much would compromise its quality; previous attempts to form a team to make a game had resulted in drama and bad feelings; and being in a lead role for client development at work at the time, I didn't feel like doing something that looked even more like my day job during my spare time as well.

This led me onto my current Unity project, which I'll talk a bit more about in a future post - but it embraces reliance on pre-built assets, and should allow me to practice the polish (and, you know, finishing a game) aspects of game development, as well as letting me make features I've not created before but which would be relevant to this game too, such as character customisation.

I was making all this in Unity 5.6, as I had a perpetual licence and the new versions of Unity hadn't provided any killer features I felt would help the particular games I was trying to make. Over the last few months this has changed: ProBuilder becoming integrated into the editor, the improvements to the terrain system landing in 2018.3, and the decal system in the newly released FPS sample project are all very interesting and potentially useful to this type of project. The other major development is that UNet is being phased out; however, the replacement has yet to be revealed, and whilst the forum threads state it will still be possible to run p2p client hosted games, it remains to be seen how straightforward this will be when the focus seems to be on providing infrastructure as a service.

I'm hoping to revisit this project in the future, when the new networking technology from Unity becomes available, and when I've had the opportunity to practice some of the areas of development I don't get to focus on as much in my day job. I hope to blog about the project I'm using as practice soon.

Posted by Delph

Retrospective: Hestia

At the start of this year I played around with PICO-8, and there were lots of things I loved about it! The focus on being a fantasy console meant that the retro aesthetics of the sound, music and graphics were nicely in sync. The image memory and code character limits forced you to focus on achievable games. The BBS system was very cute, worked well, and had some of the "view source" mentality for game dev that helped me get into web dev.

However, there were a few things I didn't like. I found the small caps font in the integrated IDE extremely difficult to parse, and switching to an external text editor broke the immersive beauty of the program. Whilst the focus on character limits and low memory was encouraging for small scope games, it forced you to write very imperatively, which I think is kind of bad practice (I prefer writing in an event driven style), and it meant certain game types were extremely popular on the BBS (2D platformers, I'm looking at you).

The final thing was that I came a bit late to this particular party. I've been told by early adopters that originally it was very "anyone can make a game and publish it", but later in the lifecycle the BBS was populated by exceptional works by highly practiced devs, and demo scene style proofs of concept for raytracers and other tech; very impressive but also quite intimidating.

So, inspired by PICO-8, and having previously tried making pixel perfect games in Unity and found it wanting, I spent most of my free time in January having a crack at making a similar set of tools to those provided by PICO-8, but with web tech instead. I was aware at the time this had been done before, but this was about learning and fun rather than trying to create a competitor.

The idea for my implementation was a set of tooling in which you could pick - or provide - colour palettes and input capabilities as part of the configuration. You could stick to the SNES style d-pad plus 2 buttons, or you could go full keyboard and mouse, Atari style, or anything in between, allowing you to choose the limitations you wish to work with. As I wasn't planning to actually simulate a console, I had no plans to artificially enforce any kind of image or code memory limitations.

I started with the renderer and discovered quite a few things:

  • The Canvas 2D context obeys image rendering properties, and the CSS attributes for these aren't consistent across browsers.
  • Almost all functions on the canvas 2D context are anti-aliased, even when you've set the canvas image-rendering properties to pixelated or equivalent.
  • When you draw lines you need to use a half pixel offset to get sharp lines, or the line will blur across 2 pixels.
  • Browsers do not display their window content at a 1:1 ratio; 1px in CSS is not 1px on the screen. You have to use the non-standard but prevalent window.devicePixelRatio to adjust the size of the canvas to get a precise number of screen pixels per texel.

I successfully worked out a set of basic drawing methods which could draw pixels at a specified integer scaling of screen pixels, using a small subset of the canvas rendering functions. I then discovered that the exact pixels of a canvas are accessible as a TypedArray you can directly manipulate, which this Mozilla Hacks article outlines nicely.

Faced with the fact that I should probably rewrite the renderer using this method instead, I realised I was spending all my time on tooling and that there was still significant work to do! Knowing that there was a game I wanted to try to make that would probably have been better in 3D anyway, I decided to shelve the project and revisit it when I felt like doing more pixel art.

Now that I come to write this up - knowing that my next two projects were created using Unity3D - a pattern to the projects I attempt emerges. On some projects I write things from scratch, enjoying not being tied to a commercial package and focusing on learning. On others I take advantage of my experience in Unity to create things much more quickly, allowing me to put the majority of my time into making a game rather than creating the tooling I need to make a game effectively! I think both approaches have merit and I enjoy doing both, so rather than cutting either out, I'm planning to revisit projects when I feel the desire to switch approach.

So, to allow me to easily resume work on this project, I have uploaded it as Hestia on GitHub. I'm sorry there are no pretty pictures to go with this post, but this one didn't get past the coding stage!

Posted by Delph

A Wild Blog Appeared

After keeping the same design and broadly the same content for years, I've updated the site with a cleaner design and refocused the content. This includes adding this blog system, which should allow me to post about investigations, work in progress projects, and occasionally just stuff I'd like to write about!

It also includes putting up a page for Fury, the WebGL renderer I wrote a few years back, and spending some time updating the voxel terrain demo with improved performance and the ability to generate larger environments asynchronously, as it's a fun thing to play with.

The pixel art that I posted on twitter for Pixel Dailies earlier in the year is viewable on the pixels page, and I plan to post further pixel art pieces I make there too. Hopefully I'll get better. The new format of the site will probably better represent the fact that I'm a bit of a dabbler.

I also took a little time to clean up the tutorials section, both updating a couple of the WebGL tutorials that had become out of date, and removing the Unity tutorials that were really glorified blog posts which were heavy on approach but light on detail.

I'm hoping to either polish up and post, or write a retrospective on, a few of the projects I worked on over the last year, as I've worked on quite a lot but not talked about or posted any of it! After that I plan to have a crack at actually completing a personal game, crazy I know.

Posted by Delph