I’ve worked on various indie projects over the past few years, each of which used a different game engine to get things done. I’m a hands-on guy who likes to control game audio as directly as possible. And I like taking part in the development process, implementing sounds myself rather than sending WAV files to developers and waiting for their implementation.
I briefly looked at UDK before our team decided to go with Unity 3D. I’ve worked a lot in Unity since then, especially on our Son of Nor project. Another project I worked on used the Hero Engine, which at that time started endorsing FMOD as its primary (and only) sound tool.
In this multi-part series I want to share the experience I’ve gained with these tools, along with some personal thoughts. It’s a comparison drawn from a personal journey, based on the time I spent with each tool (sometimes hours, sometimes a week, sometimes years), not a thorough scientific test.
This first part is dedicated to laying the foundations: stating what I am looking for in a good game development environment and why it is important to me. The following parts will then cover my experience with UDK, Unity 3D, Unity with the AudioToolkit plugin, and Unity with the Fabric plugin. Let’s go!
My Initial Position
Before I dive into the tools, I’d like to state what I, as a sound guy, am personally looking for in a game environment.
What I Want
- Be independent: I would like to implement my sounds on my own and attach sounds to game events with little to no coding. This lets me work independently without having to ask developers to do these things for me. I want a rapid turnaround (see the next item).
- Immediate feedback: It’s very important to play the game to understand how a sound comes across. I’ve spent hours developing a sound, only to realise that it doesn’t work well in the game’s context. Maybe it was too repetitive, too unspectacular, or simply didn’t sound like the thing it was supposed to sound like! You can’t develop sounds in an ivory tower (your DAW or audio editor), hand them off to a coder, and be done with it. The only way to get it right is to play the game and listen to whether your sounds work. I want to quickly import sounds, judge them in-game, then tweak settings or go back to the DAW to work on the sound some more. This back and forth between DAW and game engine has to be fluent, natural, and quick, without having to compile a build of the game or, worse, ask a developer to do that for me.
- Random sample play: Repetitive sounds like footsteps or shots must not come from a single sample, or they’ll sound like a machine gun; the repetition is immediately apparent. The engine should have the means to trigger one event and then choose a random sound from a group of similar sounds, triggering one footstep sound from a group of five, for example, to break up the obvious repetition. This is such a common function that it should be built in.
- Parameter randomisation: To break up repetition further, an engine should be able to randomise playback parameters like pitch or volume. Each time a sample is triggered, the pitch and volume are slightly altered. This makes even one sample always sound a little different from the last playback.
- Further control over sound: It’s super helpful if I can define whether a sound fades out fast or slow, or where a sound should start and stop playing back. I could, of course, do that outside the game engine. But often I can make five different sounds out of one file by slightly varying these parameters, saving disk space and RAM compared to shipping five separate files. Some control parameters are therefore much appreciated.
- Streamlined user interface: There’s nothing I hate more than a bad user interface, one where it takes 5 clicks and 3 sub-menus to do a common, repetitive task. The things a piece of software was designed for should not be complicated. That means importing sounds, quickly creating events, naming them, and setting playback parameters across multiple files at the same time should be blazingly fast and require almost no clicks.
- Terminology for sound people: We sound guys work with sound software that uses terms like decibels, compression ratio, pre-delay, and delay time. A game engine should reflect this terminology and show parameters in the correct units.
- Sound organisation features: Projects grow and change. It’s important to be able to organise and re-organise your sounds, just like developers do their rounds of refactoring from time to time. Renaming, moving, or grouping sounds into folders or categories should be a no-brainer, without the game breaking or you having to go through hundreds of assets to re-link the sounds to their events. A good engine should keep track of all those operations and make it possible to refactor your sound structure.
- Overview: It would be great if there were a list in the game engine showing which asset calls which sound event and which sound gets triggered: a master view to keep the overview. It happens so easily that you forget to add a sound somewhere, or still have a sound event attached that’s long deprecated. Such a master overview would help tremendously.
- Documentation: In a complex product, sometimes the only thing that helps is documentation. What does this strangely named parameter do, exactly? It’s what saves you hours of testing and roaming forums. Documentation had better be simple and clear.
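To make the random sample play and parameter randomisation items above more concrete, here is a minimal sketch of how such a feature behaves. This is purely illustrative pseudocode in Python; the names (SoundEvent, play, the jitter ranges) are my own assumptions and don’t correspond to any particular engine’s API.

```python
import random

class SoundEvent:
    """Picks a random sample from a pool and jitters pitch/volume per trigger."""

    def __init__(self, samples, pitch_jitter=0.1, volume_jitter=0.2):
        self.samples = list(samples)      # pool of similar WAV files
        self.pitch_jitter = pitch_jitter  # +/- range around a pitch of 1.0
        self.volume_jitter = volume_jitter
        self._last = None                 # last sample played

    def play(self):
        # Choose a random sample, avoiding an immediate repeat if possible.
        choices = [s for s in self.samples if s != self._last] or self.samples
        sample = random.choice(choices)
        self._last = sample
        # Slightly vary pitch and volume on every trigger so that even the
        # same file never sounds exactly like the last playback.
        pitch = 1.0 + random.uniform(-self.pitch_jitter, self.pitch_jitter)
        volume = 1.0 - random.uniform(0.0, self.volume_jitter)
        return sample, round(pitch, 3), round(volume, 3)

# One event, five footstep variations: the game code only ever triggers
# "footsteps", and the variety comes from the event definition.
footsteps = SoundEvent([f"footstep_{i}.wav" for i in range(1, 6)])
for _ in range(3):
    print(footsteps.play())
```

The point is that the game code triggers a single named event, while the sound designer owns the pool of samples and the randomisation ranges, exactly the kind of independence described above.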
OK, let’s have a look at the tools I’ve worked with to date and see what functionality they provide.