Mixing Between Effects With AVFoundation

AVFoundation on iOS

At WWDC 2014 Apple introduced the Audio Engine for iOS. Part of AVFoundation, the Audio Engine allows a programmer to easily create quite complex audio processing pipelines for use in applications and games. In this post I will show how to mix between two different effects using the Audio Engine.

The project

Here is a video of the final product in action. Sorry about the terrible audio quality!

Getting Started

Open Xcode and choose File -> New -> Project

Choose iOS Application -> Single View Application.

Enter a Product name (I used Audio Mixer Demo)

Select Swift as the language.

In deployment info uncheck Landscape Left and Landscape Right

Switch to Main.storyboard in the project navigator

Adding the controls

Add a button to the storyboard and change its title to Play. Ctrl drag up from the button to center it horizontally, then ctrl drag up a second time to add a constraint between the button and the top of the screen.

Add a slider to the storyboard, ctrl drag up to center it horizontally, then ctrl drag up from the slider to the button to add a vertical spacing constraint between the two.

Ctrl drag to the left of the slider to add a constraint between the slider and the left edge of the storyboard. Use the size inspector to set the constant of the constraint to about 20.

Add a second button to the storyboard and change its title to Pause. Ctrl drag up from the button to center it horizontally, then ctrl drag up from the button to the slider to fix the vertical spacing.

Open the assistant editor pane and ctrl drag from the play button to the ViewController source code to create an outlet. Name it playButton.

Ctrl drag from the pause button to the ViewController to create a second outlet. Name it pauseButton.

Ctrl drag from the playButton to the ViewController, but this time create an Action. Set the type to UIButton and name it playButtonTapped.

Do the same for the pauseButton. Name the new Action pauseButtonTapped.

Just to make sure things are all wired up correctly, add debug messages inside the two new actions using println("some message"). Run the application (the play button in Xcode's toolbar) and tap the buttons to make sure your debug messages print out. If they aren't showing up, check the connections inspector for the two buttons to see what the problem is.

Once the buttons are connected up to the actions correctly, use the attributes inspector for the pause button to set its Hidden attribute to true.
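
If you'd rather do this in code than in Interface Builder, hiding the pause button in viewDidLoad has the same effect (a sketch, not what the rest of this post assumes):

```swift
override func viewDidLoad() {
    super.viewDidLoad()
    // Equivalent to checking the Hidden attribute in the attributes inspector
    pauseButton.hidden = true
}
```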

Next add the following function to the ViewController

func togglePlayPauseHidden() {
	pauseButton.hidden = !pauseButton.hidden
	playButton.hidden = !playButton.hidden
}

Make calls to it from both your playButtonTapped action and your pauseButtonTapped action.

Run your app again. This time when you tap the play button it should disappear and the pause button should appear.

Finally ctrl drag from your slider to your ViewController to add an Action. Change the type to UISlider and name it sliderChanged.
Add a debug message using println to the sliderChanged action, run the app, move the slider left and right, and ensure that your message appears in the debugger.

At this point your ViewController's code should look something like this:

import UIKit

class ViewController: UIViewController {

    @IBOutlet weak var playButton: UIButton!
    @IBOutlet weak var pauseButton: UIButton!

    override func viewDidLoad() {
        super.viewDidLoad()
        // Do any additional setup after loading the view, typically from a nib.
    }

    override func didReceiveMemoryWarning() {
        super.didReceiveMemoryWarning()
        // Dispose of any resources that can be recreated.
    }

    func togglePlayPauseHidden() {
        pauseButton.hidden = !pauseButton.hidden
        playButton.hidden = !playButton.hidden
    }

    @IBAction func playButtonTapped(sender: UIButton) {
        println("play tapped")
        togglePlayPauseHidden()
    }

    @IBAction func pauseButtonTapped(sender: UIButton) {
        println("pause tapped")
        togglePlayPauseHidden()
    }

    @IBAction func sliderChanged(sender: UISlider) {
        println("slider changed")
    }
}

Preparing the Audio Pipeline

AVFoundation Routing Diagram

Rough diagram of the application's audio routing

AVFoundation provides you with the AVAudioEngine, within which a number of sub components can be wired together to easily create quite complex audio processing pipelines.

Sound playback is provided by the AVAudioPlayerNode, and since each of those has only a single output you are going to use two nodes, each sharing the same sound buffer, to allow you to play the same sound through two separate effects. The AVAudioEngine provides you with a default mixer node (an AVAudioMixerNode), which is automatically connected to the sound output on your iOS device. Multiple AVAudioNodes can be connected to the mixer's inputs and it will automatically blend the incoming sounds together.

The first step to playing your own sound is to choose one to play.
I chose this one from Freesound.org, firstly because the woman's voice is nice and clear, which means it will be easy to hear the way it is changed by the effects, and secondly because it is available as a .wav, which I know is one of several sound formats AVAudioEngine can play.

Make your chosen sound available to your app by dragging the file into the Supporting Files folder in Xcode's file explorer panel. When prompted, select the option to copy files if needed.

Make the AVFoundation classes available to your ViewController with an import statement:

import AVFoundation

and then add implicitly unwrapped optional member variables for an AVAudioEngine instance and two AVAudioPlayerNode instances to the ViewController class

var engine: AVAudioEngine!
var playerA: AVAudioPlayerNode!
var playerB: AVAudioPlayerNode!

It's OK to use implicitly unwrapped optionals here because you are going to instantiate those classes in viewDidLoad before any other part of your application references them. Although you are going to use other components, those three are the only ones you need to keep references to.

In the viewDidLoad function of your ViewController instantiate the AVAudioEngine and the two AVAudioPlayerNodes and set the output volume of the two nodes to 0.5.

engine = AVAudioEngine()
playerA = AVAudioPlayerNode()
playerB = AVAudioPlayerNode()
playerA.volume = 0.5
playerB.volume = 0.5

Now you need to load your selected audio file into an AVAudioPCMBuffer.

// Tsk tsk, force unwrapping optionals is bad form, but
// this is just a demo so I'll let you off.
// Here you are getting the path for the sound file you added
// to your project and converting it into a file URL.
let path = NSBundle.mainBundle().pathForResource("farah-faucet", ofType: "wav")!
let url = NSURL.fileURLWithPath(path)!

// Here you are creating an AVAudioFile from the sound file, 
// preparing a buffer of the correct format and length and loading 
// the file into the buffer.
var file = AVAudioFile(forReading: url, error: nil)
var buffer = AVAudioPCMBuffer(PCMFormat: file.processingFormat, frameCapacity: AVAudioFrameCount(file.length))
file.readIntoBuffer(buffer, error: nil)

Now that you have the sound buffer available you can create the rest of the audio components

// This is a reverb with a cathedral preset. It's nice and ethereal.
// You're also setting the wetDryMix, which controls the mix between
// the effect and the original sound.
var reverb = AVAudioUnitReverb()
reverb.loadFactoryPreset(AVAudioUnitReverbPreset.Cathedral)
reverb.wetDryMix = 50

// This is a distortion with a radio tower preset, which works well for speech.
// As distortion tends to be quite loud, you're setting the wetDryMix to only 25.
var distortion = AVAudioUnitDistortion()
distortion.loadFactoryPreset(AVAudioUnitDistortionPreset.SpeechRadioTower)
distortion.wetDryMix = 25

All the audio components need to be added to the AVAudioEngine before they can be connected together

// Attach the four nodes to the audio engine
engine.attachNode(playerA)
engine.attachNode(playerB)
engine.attachNode(reverb)
engine.attachNode(distortion)

Now, as in the diagram above, you are going to connect one player node to the reverb and the reverb to an input of the mixer automatically created by the AVAudioEngine; then the other player node to the distortion and the distortion to another input of the mixer node.

// Connect playerA to the reverb
engine.connect(playerA, to: reverb, format: buffer.format)

// Connect the reverb to the mixer
engine.connect(reverb, to: engine.mainMixerNode, format: buffer.format)

// Connect playerB to the distortion
engine.connect(playerB, to: distortion, format: buffer.format)

// Connect the distortion to the mixer
engine.connect(distortion, to: engine.mainMixerNode, format: buffer.format)

Finally, before you can play the audio, you need to ask the player nodes to play the sound buffer on a loop and then turn on the audio engine:

// Schedule playerA and playerB to play the buffer on a loop
playerA.scheduleBuffer(buffer, atTime: nil, options: AVAudioPlayerNodeBufferOptions.Loops, completionHandler: nil)
playerB.scheduleBuffer(buffer, atTime: nil, options: AVAudioPlayerNodeBufferOptions.Loops, completionHandler: nil)

// Start the audio engine
engine.startAndReturnError(nil)

Playing Sound

With the audio pipeline prepared, all you need to do to play some sound is ask the two AVAudioPlayerNodes to start playing when the play button is tapped. To do that, add the following code to the playButtonTapped function.

@IBAction func playButtonTapped(sender: UIButton) {
    playerA.play()
    playerB.play()
    togglePlayPauseHidden()
}

Now start the app and tap the play button. You should hear a slightly echoey and somewhat distorted version of your sound playing back. If the application crashes, or you hear only silence, check your code for errors.

Great, now we have a playing app but we have no way to pause the sound or to alter the mix between the two effects.

Fortunately now that we have the audio pipeline set up the rest is easy.

To add the ability to pause the audio add the following to the pauseButtonTapped function.

@IBAction func pauseButtonTapped(sender: UIButton) {
    playerA.pause()
    playerB.pause()
    togglePlayPauseHidden()
}

Finally, to be able to gradually mix between the two effects add the following to the sliderChanged function.

@IBAction func sliderChanged(sender: UISlider) {
    playerA.volume = sender.value
    playerB.volume = 1.0 - sender.value
}
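
This crossfade is linear, which can make the midpoint sound slightly quieter than either end. A common alternative, sketched here, is an equal-power crossfade using sine and cosine curves:

```swift
@IBAction func sliderChanged(sender: UISlider) {
    // Equal-power crossfade: the combined loudness stays roughly
    // constant as the slider moves from one effect to the other.
    let t = sender.value * Float(M_PI) / 2.0
    playerA.volume = sinf(t)
    playerB.volume = cosf(t)
}
```

At either end of the slider the result matches the linear version; in between, the two effects stay closer to equal loudness.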

And there you have it. Mixing between two effects using AVFoundation.

The full source code for this project is available here: https://github.com/robotlovesyou/AudioMixerDemo

What Next?

Here are some ideas for you to try out next

  • Pass the output of one effect into the other before sending it to the output instead of mixing them together using the mixer.
  • Add controls to change the Wet/Dry mix each effect is using.
  • The reverb output sounds quite quiet to me. See if you can find a way to increase the volume of that effect but still mix cleanly between the two. Hint: the documentation for the AVAudioPlayerNode says that the maximum volume is 1.0. Try setting the initial volume for one of the player nodes to 1.0 and then to 2.0. Do you still believe the documentation?
  • Add controls to change the presets the effects are using or to change their settings directly.
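
For the first idea, the routing change might look something like this (a sketch, assuming you remove playerB and send a single player through both effects in series):

```swift
// Hypothetical serial routing: playerA feeds the reverb, the reverb
// feeds the distortion, and only the distortion reaches the mixer.
engine.connect(playerA, to: reverb, format: buffer.format)
engine.connect(reverb, to: distortion, format: buffer.format)
engine.connect(distortion, to: engine.mainMixerNode, format: buffer.format)
```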

Source material

The main source material for this post is the WWDC 2014 video "AVAudioEngine in Practice" available at https://developer.apple.com/videos/wwdc/2014/.
The latter part of the video "What's New in Core Audio" may also be of interest.