Multi-timbral AVAudioUnitMIDIInstrument



Introduction


There is one subclass of AVAudioUnitMIDIInstrument provided by Apple: the AVAudioUnitSampler. The only problem is that it is mono-timbral; it cannot play more than one timbre at a time.

To create a new AVAudioUnit, we need to use a bit of Core Audio.
So, I'll give you two examples: one using Core Audio with an AUGraph, and then one using AVFoundation with AVAudioEngine.

Core Audio Unit


We need to create an AUGraph and attach nodes to it.

Here’s the first step. Create your class, define instance variables, and create the graph using Core Audio’s C API.
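Roughly like this (a sketch; the class name and instance variable names are mine):

```swift
import AudioToolbox

class MIDISynthesizer {

    var processingGraph: AUGraph?
    var midisynthNode = AUNode()
    var ioNode = AUNode()
    var midisynthUnit: AudioUnit?

    init() {
        // Core Audio's C API: create the graph
        let status = NewAUGraph(&processingGraph)
        if status != noErr {
            print("error creating graph: \(status)")
        }
    }
}
```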

Here is the item we’re interested in. Create a node that’s an Audio Unit Music Device with a subtype MIDISynth and add it to the graph.

And also create the usual io node, kAudioUnitSubType_RemoteIO on iOS, in the same way. I'm not going to bother with a mixer in this example.
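Something like this, continuing with the instance variables from above:

```swift
// the multi-timbral synth music device
var synthDescription = AudioComponentDescription(
    componentType: kAudioUnitType_MusicDevice,
    componentSubType: kAudioUnitSubType_MIDISynth,
    componentManufacturer: kAudioUnitManufacturer_Apple,
    componentFlags: 0,
    componentFlagsMask: 0)
var status = AUGraphAddNode(processingGraph!, &synthDescription, &midisynthNode)

// the usual io node
var ioDescription = AudioComponentDescription(
    componentType: kAudioUnitType_Output,
    componentSubType: kAudioUnitSubType_RemoteIO,
    componentManufacturer: kAudioUnitManufacturer_Apple,
    componentFlags: 0,
    componentFlagsMask: 0)
status = AUGraphAddNode(processingGraph!, &ioDescription, &ioNode)
if status != noErr {
    print("error adding nodes: \(status)")
}
```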

Get the audio units from the nodes using AUGraphNodeInfo in order to get/set properties on them later. Then connect them using AUGraphConnectNodeInput.
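A sketch. The graph has to be opened before AUGraphNodeInfo will hand you the units; I initialize and start the graph here too, since the preload step coming up needs an initialized graph:

```swift
var status = AUGraphOpen(processingGraph!)

// grab the synth unit so we can set properties on it later
status = AUGraphNodeInfo(processingGraph!, midisynthNode, nil, &midisynthUnit)

// synth output -> io input
status = AUGraphConnectNodeInput(processingGraph!, midisynthNode, 0, ioNode, 0)

// initialize (the preload step below needs this) and start
status = AUGraphInitialize(processingGraph!)
status = AUGraphStart(processingGraph!)
if status != noErr {
    print("error starting graph: \(status)")
}
```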

To load the Sound Font, set the kMusicDeviceProperty_SoundBankURL property on your unit. I’m using a SoundFont from MuseScore here.
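Something like this, assuming the .SF2 file is in your bundle:

```swift
func loadMIDISynthSoundFont() {
    guard var bankURL = Bundle.main.url(forResource: "FluidR3 GM2-2",
                                        withExtension: "SF2") else {
        print("could not find the SoundFont")
        return
    }
    let status = AudioUnitSetProperty(
        midisynthUnit!,
        AudioUnitPropertyID(kMusicDeviceProperty_SoundBankURL),
        AudioUnitScope(kAudioUnitScope_Global),
        0,
        &bankURL,
        UInt32(MemoryLayout<URL>.size))
    if status != noErr {
        print("error loading sound bank: \(status)")
    }
}
```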

The typical Sound Font contains dozens of patches. You don’t really want to load every single one of them. You should pre-load the patches you will actually use. The way to do that is a bit strange. You set the property kAUMIDISynthProperty_EnablePreload to true (1), send MIDI program change messages via MusicDeviceMIDIEvent for the patches you want to load, and then turn off kAUMIDISynthProperty_EnablePreload by setting it to 0. You need to have the AUGraph initialized via AUGraphInitialize before calling this.
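Here's a sketch of the whole dance; patches is whatever General MIDI program numbers you plan on using:

```swift
func loadSynthPatches(_ patches: [UInt32]) {
    // the graph must already be initialized at this point
    var enabled = UInt32(1)
    var status = AudioUnitSetProperty(
        midisynthUnit!,
        AudioUnitPropertyID(kAUMIDISynthProperty_EnablePreload),
        AudioUnitScope(kAudioUnitScope_Global),
        0,
        &enabled,
        UInt32(MemoryLayout<UInt32>.size))

    // a program change for each patch you want loaded (channel 0 here)
    let programChange = UInt32(0xC0)
    for patch in patches {
        status = MusicDeviceMIDIEvent(midisynthUnit!, programChange, patch, 0, 0)
    }

    // turn pre-loading back off
    enabled = UInt32(0)
    status = AudioUnitSetProperty(
        midisynthUnit!,
        AudioUnitPropertyID(kAUMIDISynthProperty_EnablePreload),
        AudioUnitScope(kAudioUnitScope_Global),
        0,
        &enabled,
        UInt32(MemoryLayout<UInt32>.size))
    if status != noErr {
        print("error pre-loading patches: \(status)")
    }
}
```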

Where is this documented? Damned if I know. Do you know? Tell me.

Now when you want to play a note, you send a MIDI program change to tell the synth unit which patch to use.
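For example, on channel 0 with a velocity of 64:

```swift
func play(patch: UInt32, note: UInt32) {
    let channel = UInt32(0)
    // tell the synth which pre-loaded patch to use on this channel...
    var status = MusicDeviceMIDIEvent(midisynthUnit!, 0xC0 | channel, patch, 0, 0)
    // ...then send the note on
    status = MusicDeviceMIDIEvent(midisynthUnit!, 0x90 | channel, note, 64, 0)
    if status != noErr {
        print("error sending note on: \(status)")
    }
}
```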

If you want to play a sequence, the traditional way to do that with an AUGraph is with the Audio Toolbox entities. MusicPlayer will play a MusicSequence. When you create your MusicSequence, you attach it to the AUGraph.
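In outline (in real code you'd hang on to the player in an instance variable and dispose of it when you're done):

```swift
func play(sequence musicSequence: MusicSequence) {
    // route the sequence's events to our graph
    var status = MusicSequenceSetAUGraph(musicSequence, processingGraph)

    var musicPlayer: MusicPlayer?
    status = NewMusicPlayer(&musicPlayer)
    status = MusicPlayerSetSequence(musicPlayer!, musicSequence)
    status = MusicPlayerPreroll(musicPlayer!)
    status = MusicPlayerStart(musicPlayer!)
    if status != noErr {
        print("error starting player: \(status)")
    }
}
```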

There are examples in my GitHub project for sending note on/note off messages as well as playing a MusicSequence through the AUGraph.

AVFoundation Unit


So we know how to do this in Core Audio. How do you do it in AVFoundation?

The class hierarchy for AVAudioUnitSampler is:
AVAudioNode -> AVAudioUnit -> AVAudioUnitMIDIInstrument -> AVAudioUnitSampler

So, our AVAudioUnit will be:
AVAudioNode -> AVAudioUnit -> AVAudioUnitMIDIInstrument -> AVAudioUnitMIDISynth

That part was obvious. What you need to do, though, is not especially clear. As usual, Apple doesn't give you a clue. So, this is how I got it to work. I don't know if this is the "official" method. If you know, tell me.

I've noticed that the provided AVAudioUnits work with no-arg inits. So, I decided to create the AudioUnit's AudioComponentDescription here and pass it up through the hierarchy to have one of those classes (probably AVAudioUnit) initialize it.
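The gist of it looks like this:

```swift
import AVFoundation
import AudioToolbox

class AVAudioUnitMIDISynth: AVAudioUnitMIDIInstrument {

    override init() {
        var description = AudioComponentDescription()
        description.componentType = kAudioUnitType_MusicDevice
        description.componentSubType = kAudioUnitSubType_MIDISynth
        description.componentManufacturer = kAudioUnitManufacturer_Apple
        description.componentFlags = 0
        description.componentFlagsMask = 0
        // the superclass instantiates the actual Audio Unit from this description
        super.init(audioComponentDescription: description)
    }
}
```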

AVAudioUnit defines the audioUnit property. We can use that to set the kMusicDeviceProperty_SoundBankURL property for a Sound Font.
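Something like this in the subclass:

```swift
func loadMIDISynthSoundFont(_ bankURL: URL) {
    var url = bankURL
    let status = AudioUnitSetProperty(
        self.audioUnit,
        AudioUnitPropertyID(kMusicDeviceProperty_SoundBankURL),
        AudioUnitScope(kAudioUnitScope_Global),
        0,
        &url,
        UInt32(MemoryLayout<URL>.size))
    if status != noErr {
        print("error loading sound bank: \(status)")
    }
}
```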

Remember that kAUMIDISynthProperty_EnablePreload cha-cha we did to pre-load patches? We can do that here too.
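Same cha-cha, different unit. A sketch; note that the engine needs to be started first, since the underlying graph must be initialized:

```swift
func loadPatches(_ patches: [UInt32]) {
    var enabled = UInt32(1)
    var status = AudioUnitSetProperty(
        self.audioUnit,
        AudioUnitPropertyID(kAUMIDISynthProperty_EnablePreload),
        AudioUnitScope(kAudioUnitScope_Global),
        0, &enabled, UInt32(MemoryLayout<UInt32>.size))

    let programChange = UInt32(0xC0) // channel 0
    for patch in patches {
        status = MusicDeviceMIDIEvent(self.audioUnit, programChange, patch, 0, 0)
    }

    enabled = UInt32(0)
    status = AudioUnitSetProperty(
        self.audioUnit,
        AudioUnitPropertyID(kAUMIDISynthProperty_EnablePreload),
        AudioUnitScope(kAudioUnitScope_Global),
        0, &enabled, UInt32(MemoryLayout<UInt32>.size))
    if status != noErr {
        print("error pre-loading patches: \(status)")
    }
}
```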

That’s it.

To use it, attach it to your audio engine.
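For example:

```swift
let engine = AVAudioEngine()
let midiSynth = AVAudioUnitMIDISynth()

engine.attach(midiSynth)
engine.connect(midiSynth, to: engine.mainMixerNode, format: nil)

do {
    try engine.start()
} catch {
    print("could not start engine: \(error)")
}
```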

You can play a sequence via an AVAudioSequencer attached to your engine. If you don't preload your patches, the sequencer will do that for you.

This is how to load a standard MIDI file into the sequencer.
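Something like this; "simpletune.mid" is just a stand-in for whatever standard MIDI file you have in your bundle:

```swift
let sequencer = AVAudioSequencer(audioEngine: engine)

if let fileURL = Bundle.main.url(forResource: "simpletune", withExtension: "mid") {
    do {
        try sequencer.load(from: fileURL, options: .smfChannelsToTracks)
        sequencer.prepareToPlay()
        try sequencer.start()
    } catch {
        print("could not load or start the sequence: \(error)")
    }
}
```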

The sequencer can also be loaded from NSData. This is quite convenient; everyone loves creating an NSMutableData instance and then shoving bytes into it. Right?
Have a MusicSequence? Your only option is to turn it into NSData.
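MusicSequenceFileCreateData will do that conversion. A sketch:

```swift
func sequenceData(_ musicSequence: MusicSequence) -> Data? {
    var cfData: Unmanaged<CFData>?
    let status = MusicSequenceFileCreateData(
        musicSequence,
        .midiType,   // a standard MIDI file, in memory
        .eraseFile,
        480,         // resolution: ticks per quarter note
        &cfData)
    if status != noErr {
        print("error creating data: \(status)")
        return nil
    }
    return cfData?.takeRetainedValue() as Data?
}

// then: try sequencer.load(from: data, options: [])
```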

This works. If you have a better way, let me know.

Summary


All this to create an AVAudioUnit subclass.

You should preload the patches you are going to use. If you’re going to use an AVAudioSequencer, you don’t have to; it will do it for you.

Create an AVAudioUnit subclass and pass a Core Audio AudioComponentDescription to a superclass in your init function.

You can access the audioUnit in your AVAudioUnit subclass and set properties on it using Core Audio.


9 Comments

  1. Zelda
    Posted April 2, 2016 at 12:35 am | Permalink

    I am looking for some way to tell if AVAudioSequencer has finished playing, so I can update the play/stop button. Some sort of notification or completion handler, I imagine. Have any of you guys had experience with this situation?
    Thanks
    Z

    • Gene De Lisa
      Posted April 3, 2016 at 8:58 am | Permalink

      No, Apple wasn’t gracious enough to do that.
      I’d love to be wrong on that, so if anyone else knows a way, let me know.

      Take a look at my example for how to accomplish “rewinding”.

      Specifically look at the play() function in this controller.

  2. SNIP3R
    Posted April 29, 2016 at 10:35 pm | Permalink

    Similar question to Zelda’s, is there a way to setup callbacks for any events on AVAudioSequencer? I’d love a callback when the sequencer is looping, or to be able to get callbacks on certain beats. Can this be added or extended? Thanks for the blog Gene, very helpful!

  3. Sven
    Posted May 2, 2016 at 8:25 pm | Permalink

    Hello,
    Your blog is gold. 🙂
    It looks like you are sending midi events only on channel 0. Isn’t it possible to preload patches in MIDISynth on various channels, and later to send midi notes on those channels? Sending a program change for every note seems to be overkill.

    • Gene De Lisa
      Posted May 3, 2016 at 6:21 pm | Permalink

      As with everything I write, I show an answer that works but not necessarily the answer 🙂

      Did you try it to see if it works? If I weren’t swamped with work projects, I’d do it now.

  4. Posted August 4, 2016 at 8:35 am | Permalink

    Gene, thanks a lot for all your posts! They are extremely valuable to the community.

    I am wondering whether you've experienced memory leak problems when disposing an AUGraph with SoundFonts loaded, and whether you've managed to overcome them?

    We have a simple program that uses the same procedure as your loadMIDISynthSoundFont() and loadSynthPatches(), and every time the AUGraph is disposed, we see a 13 MB overhead! (We don't use the AV stuff.) We use the good old "FluidR3 GM2-2.SF2" for this test.

    Any hints?

    • Gene De Lisa
      Posted August 5, 2016 at 7:38 am | Permalink

      If you’d like to send me an excerpt of your code, maybe just having an extra set of eyeballs on the problem would help.

  5. Korin WH
    Posted August 19, 2016 at 7:44 pm | Permalink

    Thank you Gene! I’ve been reading your blog posts and one you wrote about a year ago or so stated that the AVAudioSequencer technology is unstable so I was wondering what was the status of that? Is it more stable now? Does it have the ability to add or remove events from the Music Sequence?

    What is the difference between the AVAudioEngine.musicSequence property and the one that we attach via AVAudioSequencer?

    Is it possible to use the old Core Audio/Core MIDI APIs (MusicTrack, MusicSequence, MusicPlayer) for playback with AVAudioUnitNodes? We already have an existing architecture in AVAudioEngine, but we need add/remove functionality with a sequencer. Would MusicSequence work for that?

    • Gene De Lisa
      Posted August 22, 2016 at 8:19 am | Permalink

      As of Swift 3 beta 6, nothing has been added to AVAudioSequencer and AVMusicTrack. They are for playback only right now. Doug says file a request on Radar – if there are “enough” then they might think about it. I’m sure they are tired of hearing from me about it.

      In the past I tested setting the musicSequence property on AVAudioEngine and it crashed. I suppose they want you to not use it directly. The documentation on that property is awful as usual.

      Yes, you can use the AudioToolbox classes with AVAudioEngine. I have a 6.022 x 10^gazillion GitHub repos with examples.

      Here’s a blog post

      Here's a repo
