Audio Units (AUv3) MIDI extension – Part 2: C++


If you have looked at any iOS or macOS Audio Unit code on the net, you have probably seen .mm and .hpp files.
What’s up with that?

Introduction

Table of Contents

Apple is still saying that Swift is the language of the future and that it outperforms those languages used by geezers. The sad truth is that Swift will be the language of the future – in the future – not now. It simply will not work well for audio rendering. Nor, given its ABI instability, does it work well for plugins. (What if the host uses a different Swift version than the plugin?)

Actually Objective-C is not good for audio rendering either. Due to the way message sending works, there is overhead that could lead to blocking. You don’t want blocking in an audio thread – especially if you’re wearing headphones and the volume is cranked up. Ok, maybe some maniacs do. I’ll ignore them.

So, what’s left? Well, there is C. C works. A better option is C++, because, well, it’s ++. You may be put off by it – in my case, I had been using it for 9 years and was actually teaching a C++ class at AT&T when Sun came in to demo Java in 1995. I left C++ for Java at that time, so I know what the irritations are. But C++ has come a long way and it really is a robust language. And in our case, it’s the right tool to use right now. Maybe not in the future, but we are living right now and not in the future.

Using C++

Table of Contents

When you write C++ in Xcode, maybe you want to create a .cpp file and a .h file like the books say. You can certainly do that. If you look at the v2 audio unit examples from Apple, that’s what you will see. Here is an example from AudioKit’s core classes.

But the context we’re in right now is creating an AUv3 audio unit. The Xcode template code for your audio unit is in Objective-C. You’re going to want to call your C++ code from this Objective-C code. So how do you do that?

First, rename your audio unit’s .m file to use a .mm extension. If you look at the inspector panel, it says the type is now Objective-C++. That’s not a new language; it just means you will be able to call C++ code from that file. You’ve probably seen .mm files in audio code on the net. For example, here is AudioKit’s FM Oscillator Bank.

Now for the C++ code. Where does that go? The convention here is to create a .hpp file. So, go ahead and create one. The only thing the Xcode template generates is a pair of include-guard directives. You’ll have to write the actual class yourself.

A .hpp file can work a bit like Java or Swift, with the interface and the implementation together in one file. If you prefer the old-school style of putting the interface in the .h/.hpp file and the implementation in the .cpp/.mm file, you can do that too.

Here is a simple example: IntervalPlugin.hpp
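The post embeds the gist; here is a rough sketch of the shape of such a header. The class name comes from the repo, but the method and member names here are illustrative, not the exact code. Because it uses AudioToolbox block types, plan on including it only from Objective-C++ (.mm) files.

    #ifndef IntervalPlugin_hpp
    #define IntervalPlugin_hpp

    // Uses AudioToolbox block typedefs, so include this header
    // only from Objective-C++ (.mm) files.
    #import <AudioToolbox/AudioToolbox.h>

    class IntervalPlugin {
    public:
        IntervalPlugin() = default;

        // The Objective-C++ audio unit hands us the host's MIDI output block.
        void setMIDIOutputEventBlock(AUMIDIOutputEventBlock block) {
            midiOutputEventBlock = block;
        }

        // Timing data pulled from the host's musical context block each render cycle.
        void setTempo(double bpm) { tempo = bpm; }

        // Interval (in semitones) added to incoming notes.
        void setInterval(int semitones) { interval = semitones; }

        // Called from the render block for each incoming MIDI event.
        void handleMIDIEvent(AUEventSampleTime now, uint8_t cable,
                             uint16_t length, const uint8_t *midiBytes);

    private:
        AUMIDIOutputEventBlock midiOutputEventBlock = nullptr;
        double tempo = 120.0;
        int interval = 5;
    };

    #endif /* IntervalPlugin_hpp */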

Then, in your Objective-C file, include this header along with the other headers that you need. In the implementation, define an instance variable of your C++ class type. You don’t want it to be an Objective-C property, because property access goes through Objective-C accessors – exactly the kind of thing we’re trying to keep away from the render path.
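It looks roughly like this. I’m calling the audio unit class AUParamsAudioUnit here; use whatever name the Xcode template generated for you.

    // AUParamsAudioUnit.mm – Objective-C++, so C++ headers are fair game.
    #import "AUParamsAudioUnit.h"
    #import <AudioToolbox/AudioToolbox.h>

    #include "IntervalPlugin.hpp"

    @implementation AUParamsAudioUnit {
        // A plain instance variable, not a property.
        IntervalPlugin *_intervalPlugin;
    }

    // ... methods discussed below ...

    @end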

Then later, you can create an instance of your C++ class using the C++ keyword new.
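For example, in the designated initializer – one reasonable spot, since the object then exists before the host grabs the render block; the project may do it elsewhere. ARC does not manage plain C++ objects, so you delete it yourself in dealloc.

    - (instancetype)initWithComponentDescription:(AudioComponentDescription)componentDescription
                                         options:(AudioComponentInstantiationOptions)options
                                           error:(NSError **)outError {
        self = [super initWithComponentDescription:componentDescription options:options error:outError];
        if (self == nil) { return nil; }

        // Plain C++ allocation – ARC is not involved here.
        _intervalPlugin = new IntervalPlugin();

        // ... the rest of the template's setup (busses, parameter tree, etc.) ...

        return self;
    }

    - (void)dealloc {
        delete _intervalPlugin;
        _intervalPlugin = nullptr;
    }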

Then, capture this instance variable in your render block accessor.

The audio unit’s AUHostMusicalContextBlock is one of the blocks handed to the audio unit by the host. We capture that too, so we can use it in the render block to see what the current tempo is, among other timing data. In this example, I pass the tempo to the C++ plugin. In the next blog post on audio units, I’ll talk more about timing.
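Putting those two captures together, the internalRenderBlock accessor ends up looking roughly like this – a sketch with my own local names, not the repo’s exact code:

    - (AUInternalRenderBlock)internalRenderBlock {
        // Capture plain locals so the returned block never touches self
        // (and never retains it) on the audio thread.
        IntervalPlugin *plugin = _intervalPlugin;
        AUHostMusicalContextBlock musicalContextCapture = self.musicalContextBlock;
        // Caveat from the comments below: some hosts install musicalContextBlock
        // later than this, so you may prefer to capture it in
        // allocateRenderResourcesAndReturnError instead.

        return ^AUAudioUnitStatus(AudioUnitRenderActionFlags *actionFlags,
                                  const AudioTimeStamp *timestamp,
                                  AUAudioFrameCount frameCount,
                                  NSInteger outputBusNumber,
                                  AudioBufferList *outputData,
                                  const AURenderEvent *realtimeEventListHead,
                                  AURenderPullInputBlock pullInputBlock) {

            if (musicalContextCapture) {
                double tempo = 0;
                // We only want the tempo; pass NULL for the other fields.
                if (musicalContextCapture(&tempo, NULL, NULL, NULL, NULL, NULL)) {
                    plugin->setTempo(tempo);
                }
            }

            // MIDI event handling goes here (next snippet).

            return noErr;
        };
    }

Note that nothing inside the returned block goes through an Objective-C property or message send – just the two captured locals.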

Now that we can call C++ from our render block, we can put that MIDI processing stuff from the previous example into the C++ plugin. Something like this.
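Roughly like this – the fragment below replaces the “MIDI event handling goes here” comment in the render block sketched above; the gist in the repo is the authoritative version:

    const AURenderEvent *event = realtimeEventListHead;
    while (event != NULL) {
        if (event->head.eventType == AURenderEventMIDI) {
            const AUMIDIEvent *midiEvent = &event->MIDI;
            plugin->handleMIDIEvent(midiEvent->eventSampleTime,
                                    midiEvent->cable,
                                    midiEvent->length,
                                    midiEvent->data);
        }
        event = event->head.next;
    }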

Then, over in C++ land, the plugin does the actual MIDI work.
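Here is a sketch of what that could look like: send each note on/off through unchanged plus a copy transposed by the interval, and pass everything else straight through. The names and details are illustrative; the repo’s gist is the real implementation. Note that the implementation file has to be Objective-C++ (.mm) too – or inlined in the .hpp – because it calls the output block.

    // IntervalPlugin.mm – compiled as Objective-C++ because it calls the
    // AUMIDIOutputEventBlock. (You could also define this inline in the .hpp.)
    #include "IntervalPlugin.hpp"

    void IntervalPlugin::handleMIDIEvent(AUEventSampleTime now, uint8_t cable,
                                         uint16_t length, const uint8_t *midiBytes) {
        if (midiOutputEventBlock == nullptr || length < 3) {
            return;
        }

        const uint8_t status = midiBytes[0] & 0xF0;
        if (status != 0x90 && status != 0x80) {
            // Not a note on/off: pass it through untouched.
            midiOutputEventBlock(now, cable, length, midiBytes);
            return;
        }

        // Send the original note...
        midiOutputEventBlock(now, cable, length, midiBytes);

        // ...and a copy transposed by the interval, clamped to the MIDI range.
        int note = midiBytes[1] + interval;
        if (note > 127) { note = 127; }
        const uint8_t transposed[3] = { midiBytes[0], (uint8_t)note, midiBytes[2] };
        midiOutputEventBlock(now, cable, 3, transposed);
    }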

If your C++ code is not compiling because a standard header like iostream or vector is not found, you forgot to rename your Objective-C++ file (the audio unit) to have a .mm extension!

Of course, the outputEventBlock was handed to the C++ object by the audio unit’s Objective-C class. The allocateRenderResourcesAndReturnError method is a good place to do that.
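That’s just a line or two, assuming the setter from the header sketch above:

    - (BOOL)allocateRenderResourcesAndReturnError:(NSError **)outError {
        if (![super allocateRenderResourcesAndReturnError:outError]) {
            return NO;
        }
        // The host installs its MIDIOutputEventBlock before it allocates render
        // resources, so this is a safe point to hand it to the C++ plugin.
        _intervalPlugin->setMIDIOutputEventBlock(self.MIDIOutputEventBlock);
        return YES;
    }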

Neat. Swift, Objective-C, and C++ in one project.

Summary

Table of Contents

Using C++ in an audio project for iOS or macOS is encouraged. It’s not especially difficult to set up, but there are a few tricks that we have just seen.

Resources

Table of Contents

There is a separate target in the GitHub project named AUParamsPlugin, which has the C++ version. The AUParams target is from the last blog post and has no C++.

GitHub repo

Previous posts in this series

Audio Units (AUv3) MIDI extension – Part 0: Getting Started

Audio Units (AUv3) MIDI extension – Part 1: Parameters

3 thoughts on “Audio Units (AUv3) MIDI extension – Part 2: C++”

  1. Hi Gene,

    First of all, many thanks for these tutorials.

    I found an issue with this code that I hope you can confirm. In internalRenderBlock you store _musicalContextCapture, which you later use in the actual render block. If the DAW does not supply a musicalContextBlock, well, then you cannot get any tempo information.

    Using this code I was able to get tempo information from Garageband and AUM, but not from Cubasis or Beatmaker 3. My initial impression was that Cubasis and Beatmaker 3 do not provide a musicalContextBlock.

    This is not the case. These apps do provide a musicalContextBlock, just not yet at the moment that internalRenderBlock is called.

    I reached out to Cubasis, and one of their developers informed me that they set musicalContextBlock right before they call allocateRenderResources.

    Notice that in your other blog post, “Audio Units (AUv3) MIDI extension – Part 0: Getting Started”, in the function allocateRenderResourcesAndReturnError, you stored the musicalContextBlock into _musicalContext. Indeed, a non-nil musicalContextBlock can be captured here in all cases.

    So, in your actual code for the render function I replaced _musicalContextCapture with self->_musicalContext and indeed now I can capture tempo information from Garageband, AUM, Cubasis and Beatmaker 3.

    However it leaves some questions.

    Is it safe to refer to self here? Am I not accessing Objective-C code here, which I would like to avoid?

    And, are there specific reasons why you chose to store _musicalContext in allocateRenderResourcesAndReturnError and not use it in your render block?

    Best regards,
    Arno van Goch

    1. I was trying to find a black cat at midnight during the new moon in a room without windows. So, I was trying everything to see if it worked. Each host implementation did things a bit differently thanks to the – cough cough – “documentation”. My guess was that by the time allocateRenderResourcesAndReturnError was called, the client would have the musicalContext. In reality, yes and no. Sigh. So, what’s the “portable” solution? I don’t know because I haven’t checked the hosts’ implementations for several months since I’ve needed to develop other things to feed the cats. They eat a lot.

      So, do both Cubasis and BM3 now give you the musicalContext before allocateRender… like most people would think would be the case?

      P.S. Your last question: I was trying things and probably just forgot to delete it. Edit the same code 1000x and your eyes start to get blurry…
