A lot of ARPGs have abilities which simply happen when you press a button, executing their effect based on the way you're facing. These types of ability were easy enough to set up using the scriptable object system I described in my last post on Seekers, by having the game code implied by the definition classes use the character's facing direction.

These systems work fine for a single joystick and buttons - whether it's a virtual pad on a touch screen, a physical gamepad or controls mapped to the keyboard - but they don't really work if you're using a mouse and keyboard. Players expect your character to aim at the cursor position, regardless of which way the character is facing. Part of the challenge for Seekers is for it to be a multiplatform game, and hence to support as many input methods as possible, so aiming independently of facing direction is something worth investigating.

The Existing System

Let's start by looking at how the system worked before abilities were manually aimable. Ability definitions were given an "Initial Targets" Target Filter (example code). For attacking abilities, when the ability button was pressed, the target filter would be run at the character's position using the character's facing direction as forward. The average position of the resulting targets would then determine the direction of the ability; as part of the ability, the character would be set to face that direction and execute the specifics of the attack.

You can see that this system allows for varying degrees of auto-aim, the most extreme being a full circle, i.e. attacking anything in range (prioritising the target closest in angle to the facing direction), but by using an arc target filter it could also require you to be roughly facing your target.
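A minimal sketch of how such an arc target filter might look - the class shape, field names and use of LINQ here are assumptions for illustration, not the actual Seekers code:

```csharp
using System.Collections.Generic;
using System.Linq;
using UnityEngine;

// Hypothetical arc target filter: selects candidates within a range and arc,
// prioritising those closest in angle to the forward direction. A full-circle
// filter is just arcDegrees = 360.
[CreateAssetMenu(menuName = "Abilities/Arc Target Filter")]
public class ArcTargetFilter : ScriptableObject
{
    [SerializeField] private float range = 5f;
    [Range(0f, 360f)]
    [SerializeField] private float arcDegrees = 90f;

    public List<Transform> GetTargets(Vector3 origin, Vector3 forward, IEnumerable<Transform> candidates)
    {
        return candidates
            .Where(t => (t.position - origin).sqrMagnitude <= range * range)
            .Where(t => Vector3.Angle(forward, t.position - origin) <= arcDegrees * 0.5f)
            .OrderBy(t => Vector3.Angle(forward, t.position - origin))
            .ToList();
    }
}
```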

Aim Assist

Fairly Extreme Aim Assist

For Seekers I've generally used ranges over arcs because missing your target in top-down games is very frustrating, but I can see that you could balance the game in a different way, and in other camera modes you wouldn't want to turn 180 degrees to attack something behind you!

This system also allows you to make abilities where you jump to a point in front of a single target or into the middle of a group of targets, as long as the definition for the movement of the ability allows you to specify a desired separation from your target(s). It also allows for attacks which deal damage around a specific location.

Leap Ability

A leap ability - targeting a specific location based on enemy positions

For abilities that are not attacks or do not aim at a specific enemy - e.g. a dodge or a line attack - abilities could be configured with no "Initial Targets" target filter; any ability movement could use a set distance, and when you pressed the ability button the effect would use the character's forward direction.

Dodge Ability

A dodge ability - moving in a specific direction

Honestly I feel like this is quite a good system in and of itself, and if multiplatform wasn't part of the challenge I would be tempted to just use this, but we decided to tackle manual aiming, so let's categorise these ability types.

Types of Ability

As you can see from the existing supported abilities, we need different amounts of information for different types of ability. Not all abilities can be aimed, and of those that can, some only require a direction, whereas others require a target position (or a vector). With the existing system the target filter takes care of all of this, but if we're going to allow for manual aiming we're going to need to be explicit, so I added an enum to my ability definition.

public enum AbilityTargetingType
{
    None = -1,
    Range,
    Arc,
    Directional,
    Positional,
}
  • Range covers abilities that cannot be aimed; it corresponds to attacks that do not care about facing direction (e.g. melee the nearest enemy, or deal damage to surrounding enemies).
  • Arc covers abilities which affect enemies in front of the character within an arc (e.g. a melee attack with a degree of aim assist, or a cone attack).
  • Directional covers abilities which move you or have an effect in a direction over a set distance (e.g. a dodge or a line attack).
  • Positional covers abilities which move you to or affect a specific position (e.g. jump to targets or deal damage at a location).

By marking up our ability definition with this targeting type, we will know what type of information we need to pass to the ability execution code, as well as what type of aim feedback to show the player.
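As a sketch of what that marking-up buys us, a few extension methods over the enum can answer the questions the execution and feedback code will ask - these helper names are illustrative, not from the actual project:

```csharp
// Illustrative helpers: given an ability's targeting type, what information
// does it need at execution time, and can it be manually aimed at all?
public static class AbilityTargetingTypeExtensions
{
    // Arc and Directional abilities only need a direction to execute.
    public static bool RequiresDirection(this AbilityTargetingType type) =>
        type == AbilityTargetingType.Arc || type == AbilityTargetingType.Directional;

    // Positional abilities need an actual target point in the world.
    public static bool RequiresPosition(this AbilityTargetingType type) =>
        type == AbilityTargetingType.Positional;

    // None and Range abilities ignore manual aim entirely.
    public static bool CanBeAimed(this AbilityTargetingType type) =>
        type != AbilityTargetingType.None && type != AbilityTargetingType.Range;
}
```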

Extending the System

The system we have provides a degree of aim assist, which is desirable, so we don't want to scrap it but instead augment it. The easiest way to do this is to add a concept of Intention Direction to characters, which can be used with the existing target filters instead of facing direction when needed. This value is not normalized, so it can be used as a vector to get target positions as well as directions.

Whilst a mouse and keyboard always gives you a position and direction to aim at, implied by your cursor, other input systems may be capable of providing aim information but may not always be doing so. For example, with a touch screen it's possible to turn any button into a virtual pad on press, which can then be aimed, but if you just tap the button you probably want it to work as it used to.

When using input devices with this property, if you're not aiming - and in systems which can't independently aim at all - the input system can set Intention Direction to your movement input direction instead. This makes intuitive sense and results in directional abilities working as they should; however, positional abilities would end up moving to / targeting an arbitrary point in front of the character, which isn't really what we want.

To prevent this, we also add an Is Aiming flag to the character, passed through from the input system; if it's false, a positional ability can fall back to the target filter it already had to calculate the position for the ability.
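Put together, the aim state the input system writes to the character each frame might look like this - the component and member names here are assumptions for the sake of the sketch:

```csharp
using UnityEngine;

// Sketch of the per-character aim state written by the input system.
public class CharacterAimState : MonoBehaviour
{
    // Deliberately unnormalized: the magnitude carries distance information
    // so positional abilities can derive a target point from it.
    public Vector2 IntentionDirection { get; private set; }

    // True only when the player is actively providing aim input; positional
    // abilities fall back to their target filter when this is false.
    public bool IsAiming { get; private set; }

    public void SetAim(Vector2 intention, bool isAiming)
    {
        IntentionDirection = intention;
        IsAiming = isAiming;
    }

    // Fallback for input systems that can't aim: reuse the movement input.
    public void SetFromMovement(Vector2 moveInput)
    {
        IntentionDirection = moveInput;
        IsAiming = false;
    }
}
```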

Finally, we need to move the execution of the ability to when the ability button is released. This is because other input systems might need you to press the ability button in order to provide the ability aim (e.g. a touch screen), and we'll want to provide player feedback about the ability you're aiming with across all input systems.

Gathering the Input - Virtual Pads

As mentioned earlier, for touch screens you can simply treat buttons as if they were virtual joysticks. Which is to say: when a touch input is received, cast against the UI to see if it was over the button; if it was, convert the visual of the button to that of a virtual joystick, and whilst the touch continues measure the relative position of the touch.

The information this generates is in screen space, and to use it, it must first be normalised against some specified maximum radius. It's worth using a larger maximum radius for normalising than the visual radius of the ability button's virtual pad, because players will often drag outside the visual radius and we want to support that. Additionally, our visual radius might be constrained in order to prevent overlapping, whereas the position the player drags to is not.

As well as using a larger radius to normalise, because players rarely want to aim right next to themselves, you can also run the normalised value through an animation curve so that it requires less movement to "go through" the space near the player, leaving more screen area for aiming at the limits of your range.
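The two ideas above can be sketched together as follows - the class name, serialized fields and default values are illustrative assumptions, not the actual Seekers implementation:

```csharp
using UnityEngine;

// Sketch: normalise a touch drag against a radius larger than the pad's
// visual radius, then reshape the magnitude with an AnimationCurve so
// less drag is needed to "go through" the space near the player.
public class VirtualPadAim : MonoBehaviour
{
    // In screen pixels; deliberately larger than the pad's drawn radius so
    // drags outside the visual still register.
    [SerializeField] private float maxRadius = 200f;

    // Sensitivity remap: steep near zero, flatter near one.
    [SerializeField] private AnimationCurve sensitivity =
        AnimationCurve.EaseInOut(0f, 0f, 1f, 1f);

    public Vector2 GetAimValue(Vector2 touchPosition, Vector2 padCentre)
    {
        Vector2 offset = touchPosition - padCentre;
        float magnitude = Mathf.Clamp01(offset.magnitude / maxRadius);
        return offset.normalized * sensitivity.Evaluate(magnitude);
    }
}
```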

Sensitivity Curve

Using animation curves to modify virtual joystick sensitivity

Having normalised against screen space, our input system will need to convert this direction to world space, which is a relatively simple transform using the camera that'll look something like this:

Quaternion.Euler(0, playerCamera.transform.rotation.eulerAngles.y, 0)
    * new Vector3(direction.x, 0, direction.y);

This will result in a vector on the XZ plane. There could be a difference in height, so the final position will need some validation applied to it if it's being used for movement; however, I do this in the ability execution code and will cover it later.

Gathering the Input - Cursor Aim

The logic needed to aim using the mouse is more straightforward. As long as your level geometry has colliders attached, you can simply raycast through the camera using a layer mask which only includes the ground colliders of your level geometry.

Ray ray = playerCamera.ScreenPointToRay(Input.mousePosition);
if (Physics.Raycast(ray, out RaycastHit raycastHit, float.MaxValue, aimLayerMask))
{
    var worldAim = raycastHit.point - playerCameraOrigin.position;
    this.worldAimDirection = new Vector2(worldAim.x, worldAim.z);
}

If there is no result, we leave the aim position as the last valid cast position, and for consistency with the virtual pad implementation we set the y component to zero, so we can use the same position validation logic regardless of input type.

In my first pass I cast to an XZ plane at the character's height to avoid needing colliders on my level geometry, but this resulted in an inaccurate aim position on floors which were higher or lower. The need for colliders on my level geometry was bound to arise eventually, and having marked up all my geometry I was then able to make projectiles collide with the level too, so in retrospect I'm not sure why I ever tried to avoid it!

Player Feedback

So we've figured out how to get an intention direction, and our input systems are providing some feedback by converting buttons to virtual joysticks on touch screens and via the cursor itself when using a mouse. However, the cursor position doesn't account for the maximum range of abilities and the virtual pad only shows in the UI; for maximum clarity we really want to show the effect in the world.

If the game was entirely on a 2D plane we could just render a quad slightly above the ground, either pointing in the intention direction for directional abilities or placed at the target position for positional abilities. Alas, I decided to have different heights in my game, so we turn to a very old Unity feature: Projectors. As the name implies, these project a texture onto the world geometry using a particular type of shader; rather than using alpha to determine transparency, one provides a greyscale cookie texture, where the brightness of the texture determines opacity.

Thankfully you can still rotate and position these as you would a quad, so it's simply a case of creating a few suitable cookie textures, then writing a MonoBehaviour script to fade them in and out as you press your ability buttons, and to appropriately rotate them for directional abilities or position them for positional abilities. They need to be set to orthographic and placed far enough above the target point that they'll be above any geometry we might be targeting; remember, we made sure earlier that the target point is on the same XZ plane as the character regardless of input type.
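A stripped-down sketch of such an indicator script, using Unity's built-in Projector component; the class shape and field names are assumptions, and fading is reduced to a simple show/hide for brevity:

```csharp
using UnityEngine;

// Sketch: drives an orthographic Projector as a world-space aim indicator.
// Assumes the Projector points straight down with a suitable cookie texture.
public class AimIndicator : MonoBehaviour
{
    [SerializeField] private Projector projector;
    [SerializeField] private float heightAbove = 10f; // clear any targeted geometry

    public void Show() => projector.enabled = true;
    public void Hide() => projector.enabled = false;

    // Directional abilities: rotate the cookie around Y to match the aim.
    public void SetDirection(Vector3 characterPosition, Vector2 aimDirection)
    {
        transform.position = characterPosition + heightAbove * Vector3.up;
        float yAngle = Mathf.Atan2(aimDirection.x, aimDirection.y) * Mathf.Rad2Deg;
        transform.rotation = Quaternion.Euler(90f, yAngle, 0f);
    }

    // Positional abilities: sit directly above the target point.
    public void SetPosition(Vector3 targetPoint)
    {
        transform.position = targetPoint + heightAbove * Vector3.up;
        transform.rotation = Quaternion.Euler(90f, 0f, 0f);
    }
}
```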

Aimable Dodge Ability

The updated dodge ability with directional aiming

Something you'll find when using projectors is that they project equally onto surfaces regardless of the surface's angle to the projector, which in this case results in highly visible stretched lines across vertical geometry. To prevent this we can apply a limitation based on angle in the shader - here's a gist.

Aimable Leap Ability

The updated leap ability with positional aiming, between floors of different heights

So now we can see in the world either the direction or position our abilities are aiming at. A free bonus we got with this is that we can add a range preview to abilities which do not require a direction!

Position Validation

We've got a consistent aim direction from different input systems, we've decided how we're going to pass it into our abilities, and we've added visual feedback for the player. We've got one final thing to do: as the calculated intention direction is on the player's XZ plane, if it's being used to move the player to a point, we need to make sure it's a valid point in the level. It's possible to aim at points entirely outside the level with some input methods, and even when aiming inside the level the target position might not perfectly match up to navigable areas.

To fix this we need to check the point against the nav mesh, and if it's not valid, calculate a fallback point. The code to do this is actually pretty straightforward.

if (!NavMesh.SamplePosition(targetPosition, out NavMeshHit navMeshHit, 0.5f, NavMesh.AllAreas))
{
    // Cast downwards from above the target point to find level geometry,
    // then look for a nav mesh position near where we hit.
    if (Physics.Raycast(
            targetPosition + 10 * Vector3.up,
            Vector3.down,
            out RaycastHit raycastHit,
            20f,
            levelCastLayerMask)
        && NavMesh.SamplePosition(
            raycastHit.point,
            out navMeshHit,
            1f,
            NavMesh.AllAreas))
    {
        targetPosition = navMeshHit.position;
    }
    // Otherwise, cast along the nav mesh from the character towards the
    // target and use the furthest reachable point.
    else if (NavMesh.Raycast(
        self.Transform.position,
        targetPosition,
        out navMeshHit,
        NavMesh.AllAreas))
    {
        targetPosition = navMeshHit.position;
    }
    else
    {
        Debug.LogWarning("Unable to find point on nav mesh to move to");
    }
}

We start off by checking if the target position is on or near the nav mesh. If it's not, we raycast downwards from above the target point, and if that hits the level we check for a valid nearby nav mesh position to use. If the raycast doesn't hit anything, or there are no nearby nav mesh points, then we cast along the nav mesh from the character towards the target position and use that result instead!

Adjusting Individual Abilities for Input Methods

We've done our best to make a system that works consistently across different input systems; however, sometimes you just have to make it act differently depending on the current system's capabilities. We can do this by making certain ability settings conditional on input method using the scriptable object ability system.

One example: when you have the capability of manually aiming, you should be able to execute the leap ability even if there are no targets to attack. However, when the input system doesn't have the capability to aim, using it with no targets would just jump you forward a set distance, so for those input systems it's probably better to require targets in order to use the ability. We can create a bool provider scriptable object which returns a value based on the current input method, which can also be stored as a scriptable object so it can be easily referenced.
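A rough sketch of such a provider - the InputMethod enum, the way the current method is tracked, and all naming here are assumptions standing in for however the project actually stores its control scheme:

```csharp
using UnityEngine;

// Hypothetical input methods the game distinguishes between.
public enum InputMethod { Touch, Gamepad, MouseKeyboard }

// Base class so ability definitions can reference any bool-valued setting
// as a scriptable object asset.
public abstract class BoolProvider : ScriptableObject
{
    public abstract bool GetValue();
}

// Returns true only when the current input method is one of those listed,
// e.g. "require targets" could be true for Touch and Gamepad only.
[CreateAssetMenu(menuName = "Providers/Input Method Bool")]
public class InputMethodBoolProvider : BoolProvider
{
    [SerializeField] private InputMethod[] trueForMethods;

    // Set by the input system when the control scheme changes.
    public static InputMethod CurrentInputMethod { get; set; }

    public override bool GetValue()
    {
        foreach (var method in trueForMethods)
        {
            if (method == CurrentInputMethod)
            {
                return true;
            }
        }
        return false;
    }
}
```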

Conditional Targeting

Making require targets conditional on input device (using an Expandable attribute).

Another example: when using a virtual touch pad it makes sense to dodge in the direction you drag; however, via play-testing I discovered that with mouse aim it actually feels really wrong, and you expect to dodge in your facing direction! So we also need to update our target filters to be able to pick their forward direction based on a bool provider too.

Final Thoughts

Well, that was quite a lot - it turns out supporting independent aiming with multiple different input methods is quite complex! I've not even covered some of the related topics, such as needing to create a different HUD for touch screens compared to the other input schemes, and adjusting your menu navigation for different input schemes. This set of features has been bubbling away alongside others for several months; it's nice to get it to a point I'm relatively happy with, and until it came time to explain it I'd forgotten quite how many nuances there were!

An advantage of this holistic approach, building the support for multiple input systems into the gameplay systems themselves, is that you can support all implemented input methods in one build and switch between them on demand, or even support multiple at once.

To see how this can be useful, consider that a Microsoft Surface device is capable of using any of the input methods we've talked about, and that with a web build someone could play your game on any web-capable device with a decent browser. Throwing a quick input detection script into the main menu to select your gameplay control scheme, as well as to change the available menu navigation methods, lets us take advantage of this very property.

Comparing the versions of the game with independent aiming and without was interesting, it certainly improves the feeling of combat with a mouse and keyboard, but I'm not convinced it improves the combat on a touch screen. There is a certain joy to be found in relying purely on position and timing when you don't have a high fidelity input device to facilitate aiming, or perhaps I just need more practice at my own game.