SotA Anthology 2015-16 | Page 94
Whilst Whalen (2004) suggests that sound in games serves either "to expand the concept of a game's fictional world" or "to draw the player forward through the sequence of gameplay", procedural audio inhabits a space between the two. It works alongside visual cues and player input to draw the player through the game. For example, the generated swoosh sounds in Heavenly Sword prevent repetition and push the battles forward, while the audio also expands the game world: it lends a tangible sense of reality to a virtual world, helping the player achieve the suspension of disbelief needed to reach a secondary level of immersion.
Taylor (2002) argues that perspective and point of view are what ultimately lead to immersion, and that 'unified monocular vision' is often favoured as a way of creating immersive gameplay. In the same way that visual perspective leads a player to focus on a specific point, the way sound and music are processed also draws the player's attention to a game object or event. Adding reverb and various low-pass or high-pass filters dramatically changes a sound's sense of presence, drawing the ear to whichever sound is most present or sounds closest. This is often seen in the processing of dialogue: in Ubisoft's Assassin's Creed II, for example, when the protagonist Ezio sits next to a non-player character, Leonardo da Vinci, the dialogue is processed so that it is not only the loudest element in the mix but also feels closest to the player's ear. The music sits much further back, drawing the player's attention to what is being said in order to progress the narrative.
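The link between filtering and perceived presence can be sketched in code. This is a minimal illustration, not taken from any game mentioned here: a one-pole low-pass filter strips high frequencies from a signal, one common way audio engines make a source sound distant or occluded, while a lightly filtered "dry" signal keeps its brightness and feels close to the listener.

```python
def one_pole_lowpass(samples, alpha):
    """Simple one-pole low-pass: y[n] = y[n-1] + alpha * (x[n] - y[n-1]).
    Small alpha cuts high frequencies heavily; alpha near 1 passes them."""
    out = []
    y = 0.0
    for x in samples:
        y = y + alpha * (x - y)
        out.append(y)
    return out

# A bright test signal alternating at the fastest possible rate loses
# most of its energy under heavy filtering, mimicking a distant voice,
# while light filtering leaves it almost untouched, mimicking a close one.
signal = [1.0, -1.0] * 8
distant = one_pole_lowpass(signal, 0.1)   # heavy filtering: "far away"
close = one_pole_lowpass(signal, 0.9)     # light filtering: "close"

print(max(abs(s) for s in distant))  # much smaller peak than the input
print(max(abs(s) for s in close))    # peak close to the input's
```

In a real engine this filtering is usually combined with level and reverb send changes, but even this toy version shows why a duller, quieter signal reads as "further back" in the mix.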
However, the combination of differently processed sounds in one scene adds dimensionality. For example, in the opening scene of Ubisoft's Far Cry 4, the sound and music work in a series of layers: the opening image of mountains is accompanied by a soft pad with heavy reverb and a solo pan-flute note that is both higher in velocity and drier in signal than the pad, making it sound closer to the protagonist and giving the player a better sense of the space between themselves and the distant mountains. The softness is then cut by the sound of a car going past (see below), panning from left to right in line with the image, with very little reverb and an applied Doppler effect. The three different ways these sounds are processed create a sense of three-dimensional space for actions to unfold on a two-dimensional screen.
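The car's left-to-right sweep can be illustrated with a standard constant-power panning law. This is a generic sketch of the technique, not Ubisoft's implementation; the function and values are illustrative.

```python
import math

def constant_power_pan(sample, pan):
    """pan in [-1, 1]: -1 = hard left, 0 = centre, 1 = hard right.
    The constant-power law keeps perceived loudness roughly steady
    as a mono source sweeps across the stereo field."""
    angle = (pan + 1.0) * math.pi / 4.0   # map [-1, 1] to [0, pi/2]
    return sample * math.cos(angle), sample * math.sin(angle)

# Sweep a mono sample from left to right, as with the passing car.
for pan in (-1.0, 0.0, 1.0):
    left, right = constant_power_pan(1.0, pan)
    print(round(left, 3), round(right, 3))
# -> 1.0 0.0
#    0.707 0.707
#    0.0 1.0
```

Driving `pan` from the car's on-screen position each frame is what keeps the sound "in line with the image".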
It has created the far, the near and the in-between, as well as opening the world up beyond the edges of the screen. We hear the pitch and loudness of the car fall as it travels out of sight, rather than the sound simply cutting out when the car leaves the frame. This is something sound can do that image cannot: we can see only as far as our field of view allows, but we can hear things that happen outside that visual space.
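Why the receding car's pitch and loudness fall together can be sketched with the classic Doppler formula for a source moving directly away, combined with simple inverse-distance attenuation. The constants and function names here are illustrative assumptions, not the game's actual audio engine.

```python
SPEED_OF_SOUND = 343.0  # m/s, in air at roughly 20 degrees C

def doppler_factor(source_speed):
    """Frequency ratio heard when the source moves directly away
    from a stationary listener: f_heard = factor * f_emitted."""
    return SPEED_OF_SOUND / (SPEED_OF_SOUND + source_speed)

def distance_gain(distance, ref=1.0):
    """Inverse-distance attenuation relative to a reference distance."""
    return ref / max(distance, ref)

# A car receding at 20 m/s: the perceived pitch drops below the emitted
# frequency, and its level keeps falling as it travels out of view.
print(doppler_factor(20.0))   # < 1.0, so the heard pitch is lower
print(distance_gain(2.0))     # louder while the car is 2 m away
print(distance_gain(50.0))    # much quieter once it is 50 m away
```

Because both values change continuously with the car's position and speed, the sound fades and falls in pitch smoothly instead of disappearing with the image.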
Gaver (1993) describes this as 'everyday listening', suggesting that we do not hear sounds as entities in their own right but as experiences: the way the original sound source has been manipulated gives us a full picture not only of the object, but also of its proximity and velocity, as well as its surrounding environment.
In video games the field of view is more limited than in the real world, so the manipulation of source sounds is even more important to creating the depth that the graphics alone cannot provide.
Above: Far Cry 4. ©Ubisoft. View clip: http://bit.ly/29Ibp2e