iOS MIDI interfaces

iOS MIDI Interfaces

Here are a few current MIDI interfaces for iOS devices with Lightning connectors.

If you know of any others, please let me know.

Click the images to go to the product page.

IK Multimedia iRig MIDI 2

iRig Pro Universal AUDIO/MIDI interface for iOS devices and Macs

iConnectMIDI1 Lightning Version, 1-in 1-out USB to MIDI

iConnectMIDI2+ Lightning Version, 2-in 2-out

Apple Lightning to USB Camera Adapter

Posted in iOS, MIDI | Leave a comment

Swift 3 Core MIDI

Swift Language

Swift 3 Core MIDI

The Swift 3 betas bring many changes. Here’s an update for Core MIDI based on Swift 3 beta 3.


Introduction

Table of Contents

There aren’t a great number of changes specific to Core MIDI; it’s just as awful as it’s always been.
Much of what you need to deal with is renamification: some of the language types and function signatures have changed.


Some Changes

Table of Contents

Here is a comparison of MIDIReadBlock, which is passed to a virtual destination at creation. As you can see, nothing horrible (besides MIDIPacketList still living).
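Roughly, the difference is in the second parameter. This is my own rendering of the typealiases from the headers (the two obviously won't both compile against the same SDK), plus a Swift 3 block body of my own:

```swift
import CoreMIDI

// Swift 2:
// typealias MIDIReadBlock = (UnsafePointer<MIDIPacketList>, UnsafeMutablePointer<Void>) -> Void

// Swift 3 beta: void pointers become (optional) raw pointers
// typealias MIDIReadBlock = (UnsafePointer<MIDIPacketList>, UnsafeMutableRawPointer?) -> Void

let readBlock: MIDIReadBlock = { packetList, srcConnRefCon in
    // handle the packets
    print("got \(packetList.pointee.numPackets) packet(s)")
}
```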

Notifications have been updated.

Still some inconsistencies remain. Compare NotificationCenter.default with MIDINetworkSession.default().
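For example:

```swift
import Foundation
import CoreMIDI

// one is a property, the other is still a function call
let center = NotificationCenter.default
let networkSession = MIDINetworkSession.default()
networkSession.isEnabled = true
```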

Enums have been renamed. Many enums started with an uppercase character, some didn’t. They all start with lower case characters now.

C pointers are still there and still a pain. Some members on the Swift side have been renamed, e.g. memory is now pointee.
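For example, in a notify callback (a sketch; the block body is mine):

```swift
let notifyBlock: MIDINotifyBlock = { notificationPointer in
    // Swift 2: notificationPointer.memory.messageID
    // Swift 3: memory is now pointee
    let notification = notificationPointer.pointee
    print("MIDI setup changed: \(notification.messageID)")
}
```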


Summary

Table of Contents

Core MIDI still works. There have been no improvements. You will have to change some names and signatures to get back to where you were.

A final note: if you are using a sound font, Core Audio is less forgiving of sound fonts that have problems, like MuseScore’s General User sound font (their Fluid R3 works just fine).


Resources

Table of Contents

Posted in Core MIDI | 3 Responses

MusicSequence via a MIDI Virtual Source

Swift Language

MusicSequence via a MIDI Virtual Source

Virtual MIDI sources and destinations are a bit confusing. This is one way to use a Virtual MIDI source.


Introduction

Table of Contents

If you want other apps to “see” your app as a MIDI source, i.e. a producer of MIDI data, you need to set up a virtual MIDI source in your app.

Here is the virtual source I’m about to create as it appears in my MIDI monitor app – along with data it just sent. (click thumbnail for full image)

virtual midi source


MIDI Setup

Table of Contents

With Core MIDI, the first thing you need to do is to create a MIDIClient.

MIDIClientCreateWithBlock will save a reference in a variable of type MIDIClientRef. You can also pass in a callback that will be invoked when your MIDI setup changes. You can pass in nil to remain oblivious.

Then (in the success block) you can create a virtual midi source with MIDISourceCreate. The name you pass in here is the name that will appear on other apps’ source list.
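A minimal sketch of both steps in Swift 3 syntax; the client and source names are mine:

```swift
import CoreMIDI

var midiClient = MIDIClientRef()
var status = MIDIClientCreateWithBlock("MyMIDIClient" as CFString, &midiClient) { notification in
    // called when the MIDI setup changes; pass nil instead to remain oblivious
    print("MIDI setup changed: \(notification.pointee.messageID)")
}

var virtualSource = MIDIEndpointRef()
if status == noErr {
    // the name passed here is what shows up in other apps' source lists
    status = MIDISourceCreate(midiClient, "My Virtual Source" as CFString, &virtualSource)
}
```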

You can then send MIDI messages through your source (and on to other apps) with MIDIReceived. Yes, at first glance it’s a weird name – you’re sending, not receiving, right?
It depends on how you look at your MIDI entity. So, yeah, weird.

Here is an example of sending a Note On message:
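Something like this, continuing the sketch above (middle C, velocity 64):

```swift
var packetList = MIDIPacketList()
var packet = MIDIPacketListInit(&packetList)
let noteOn: [UInt8] = [0x90, 60, 64]
packet = MIDIPacketListAdd(&packetList,
                           MemoryLayout<MIDIPacketList>.size,
                           packet,
                           0,              // timestamp 0 means "now"
                           noteOn.count,
                           noteOn)
MIDIReceived(virtualSource, &packetList)   // yes, "Received" - you are the source
```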

The system will assign a unique ID to your endpoints. You retrieve it with MIDIObjectGetIntegerProperty. You can then save it to user defaults and read it back in the next time you run your app. It will work if you skip this part, but the docs recommend doing it.
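A sketch of the unique ID chacha, continuing from above; the defaults key is mine:

```swift
// save the system-assigned unique ID
var uniqueID: Int32 = 0
if MIDIObjectGetIntegerProperty(virtualSource, kMIDIPropertyUniqueID, &uniqueID) == noErr {
    UserDefaults.standard.set(Int(uniqueID), forKey: "MyVirtualSourceID")
}

// on the next launch, after recreating the source, restore it
let savedID = UserDefaults.standard.integer(forKey: "MyVirtualSourceID")
if savedID != 0 {
    MIDIObjectSetIntegerProperty(virtualSource, kMIDIPropertyUniqueID, Int32(savedID))
}
```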


MusicSequence

Table of Contents

What if you want to play a MIDI Sequence? You can create a MusicSequence on the fly or read in a standard MIDI file. Then what? Well, there is a function named MusicSequenceSetMIDIEndpoint you can use. Unfortunately, it will work with Virtual Destinations and not Virtual Sources, so the “endpoint” part of the name is a bit misleading. (What else is new?)

So, we need to create a virtual destination and set that as the endpoint of the sequence.

Creating the virtual MIDI destination is almost as easy as the virtual source. You will need to provide a read block that is called when data is sent to the destination. (the unique ID chacha should be done with the destination too).

So, great. When you play your MusicSequence via a MusicPlayer, your virtual destination’s read block will be called with each MIDI event.

What about the virtual source?
Have the read block forward the message (via MIDIReceived) to your virtual source!
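A sketch of the whole trick, assuming the midiClient and virtualSource from above and a MusicSequence named musicSequence:

```swift
import AudioToolbox

// the virtual destination's read block just hands everything to the virtual source
var virtualDestination = MIDIEndpointRef()
MIDIDestinationCreateWithBlock(midiClient,
                               "My Virtual Destination" as CFString,
                               &virtualDestination) { packetList, srcConnRefCon in
    MIDIReceived(virtualSource, packetList)
}

// point the sequence at the destination; the MusicPlayer will drive the read block
MusicSequenceSetMIDIEndpoint(musicSequence, virtualDestination)
```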

Groovy, huh?


Summary

Table of Contents

Yeah, a bit of a game of Twister to play a MusicSequence, but it is what it is.

The github project is for OSX, but the same MIDI code will work with iOS.

There are other uses for virtual destinations too. You can write a MIDI Monitor with one since you’re receiving all the MIDI data in its read block. You can then display it, or write trippy animations.


Resources

Table of Contents

Posted in Core MIDI, MIDI, Swift | Leave a comment

Swift script to create a Cocoa window

Swift Language

Swift script to create a Cocoa window

Create a Cocoa window with a functioning button from a command line Swift script.


How To

Table of Contents

You probably know that you can run a Swift program from the command line like a script.

Previously you had to use this incantation: “xcrun swift -i myswiftcode.swift”. If you want to compile your Swift code to an executable when you’re done “scripting”, you can do this.

Or, simply start your file with a shebang like this.
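Something like this at the top of the file (assuming swift is on your PATH):

```swift
#!/usr/bin/env swift
// ...the rest of your Swift code follows; no xcrun incantation needed
```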

Do a chmod +x and simply run it.

Can you create an OSX (Cocoa) Window from that “script”?
Yes. Yes you can.

You can go ahead and create an NSWindow and pop it up. Other “tutorials” on the net do this. But it’s really not going to run like a program.

A better way is to create a class that implements the NSApplicationDelegate protocol. Xcode project templates always create one of these for you, so you’ve seen and used them. So create your own.

One strategy is to have your appdelegate create and configure an NSWindow. So that’s what this example will show. I’m using the new autolayout constraint syntax, so I’ve shown you how to use the “available” feature to see if your environment can grok them. If not, just use one of the old constraint syntaxes.

To be able to close the app via the red window icon, you need a class that conforms to the NSWindowDelegate protocol. You can observe a window closing notification instead, but setting the window delegate is a bit simpler.

Once you have your appdelegate and its window configured, create an NSApplication, set its delegate to your appdelegate, and run the application.
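Here is a minimal sketch of the whole thing in Swift 3 syntax; the class and method names are mine, not necessarily what the GitHub project uses:

```swift
#!/usr/bin/env swift
import Cocoa

class AppDelegate: NSObject, NSApplicationDelegate, NSWindowDelegate {
    var window: NSWindow!

    func applicationDidFinishLaunching(_ notification: Notification) {
        window = NSWindow(contentRect: NSRect(x: 0, y: 0, width: 400, height: 200),
                          styleMask: [.titled, .closable, .miniaturizable, .resizable],
                          backing: .buffered,
                          defer: false)
        window.title = "Swift Script Window"
        window.delegate = self    // so closing the window can terminate the app

        let button = NSButton(frame: NSRect.zero)
        button.title = "Click Me"
        button.target = self
        button.action = #selector(AppDelegate.buttonClicked(_:))
        button.translatesAutoresizingMaskIntoConstraints = false
        window.contentView?.addSubview(button)

        if #available(OSX 10.11, *) {
            // the newer anchor-based Auto Layout syntax
            button.centerXAnchor.constraint(equalTo: window.contentView!.centerXAnchor).isActive = true
            button.centerYAnchor.constraint(equalTo: window.contentView!.centerYAnchor).isActive = true
        } else {
            // fall back to one of the older constraint APIs, or just set a frame
            button.translatesAutoresizingMaskIntoConstraints = true
            button.frame = NSRect(x: 150, y: 84, width: 100, height: 32)
        }

        window.center()
        window.makeKeyAndOrderFront(nil)
        NSApp.activate(ignoringOtherApps: true)
    }

    func buttonClicked(_ sender: NSButton) {
        print("you clicked the button")
    }

    func windowWillClose(_ notification: Notification) {
        NSApp.terminate(nil)
    }
}

let app = NSApplication.shared()
app.setActivationPolicy(.regular)
let delegate = AppDelegate()
app.delegate = delegate
app.run()
```

Make it executable with chmod +x and run it; the window appears with a button that prints to the terminal.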


Summary

Table of Contents

Create a class that implements NSApplicationDelegate. Have that class create and configure an NSWindow.
Then create an NSApplication, set its delegate to your class, and run.


Resources

Table of Contents

Posted in Cocoa, Swift | 2 Responses

Multi-timbral AVAudioUnitMIDIInstrument

Swift Language

Multi-timbral AVAudioUnitMIDIInstrument in Swift

Table of Contents

Introduction

Table of Contents

There is one subclass of AVAudioUnitMIDIInstrument provided by Apple – the AVAudioUnitSampler. The only problem is that it is mono-timbral; it cannot play more than one timbre at a time.

To create a new AVAudioUnit, we need to use a bit of Core Audio.
So, I’ll give you two examples: one using Core Audio with an AUGraph, and one using AVFoundation with AVAudioEngine.

Core Audio Unit

Table of Contents

We need to create an AUGraph and attach nodes to it.

Here’s the first step. Create your class, define instance variables, and create the graph using Core Audio’s C API.

Here is the item we’re interested in. Create a node that’s an Audio Unit Music Device with a subtype MIDISynth and add it to the graph.

And also create the usual io node, kAudioUnitSubType_RemoteIO on iOS, in the same way. I’m not going to bother with a mixer in this example.

Get the audio units from the nodes using AUGraphNodeInfo in order to get/set properties on them later. Then connect them using AUGraphConnectNodeInput.
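A sketch of that setup in Swift 3 syntax, with error checking trimmed; on OS X you would use kAudioUnitSubType_DefaultOutput instead of RemoteIO:

```swift
import AudioToolbox

var processingGraph: AUGraph? = nil
NewAUGraph(&processingGraph)

// the MIDISynth music device node
var synthDescription = AudioComponentDescription(
    componentType: OSType(kAudioUnitType_MusicDevice),
    componentSubType: OSType(kAudioUnitSubType_MIDISynth),
    componentManufacturer: OSType(kAudioUnitManufacturer_Apple),
    componentFlags: 0,
    componentFlagsMask: 0)
var synthNode = AUNode()
AUGraphAddNode(processingGraph!, &synthDescription, &synthNode)

// the usual io node
var ioDescription = AudioComponentDescription(
    componentType: OSType(kAudioUnitType_Output),
    componentSubType: OSType(kAudioUnitSubType_RemoteIO),
    componentManufacturer: OSType(kAudioUnitManufacturer_Apple),
    componentFlags: 0,
    componentFlagsMask: 0)
var ioNode = AUNode()
AUGraphAddNode(processingGraph!, &ioDescription, &ioNode)

// open the graph, grab the units, and wire synth -> io
AUGraphOpen(processingGraph!)
var midisynthUnit: AudioUnit? = nil
AUGraphNodeInfo(processingGraph!, synthNode, nil, &midisynthUnit)
var ioUnit: AudioUnit? = nil
AUGraphNodeInfo(processingGraph!, ioNode, nil, &ioUnit)
AUGraphConnectNodeInput(processingGraph!, synthNode, 0, ioNode, 0)
```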

To load the Sound Font, set the kMusicDeviceProperty_SoundBankURL property on your unit. I’m using a SoundFont from MuseScore here.

The typical Sound Font contains dozens of patches. You don’t really want to load every single one of them. You should pre-load the patches you will actually use. The way to do that is a bit strange. You set the property kAUMIDISynthProperty_EnablePreload to true (1), send MIDI program change messages via MusicDeviceMIDIEvent for the patches you want to load, and then turn off kAUMIDISynthProperty_EnablePreload by setting it to 0. You need to have the AUGraph initialized via AUGraphInitialize before calling this.
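Continuing the sketch above; the SoundFont file name is a stand-in and the patch numbers are just examples:

```swift
// load the SoundFont by setting kMusicDeviceProperty_SoundBankURL
guard let sfURL = Bundle.main.url(forResource: "FluidR3_GM", withExtension: "sf2") else {
    fatalError("could not find the SoundFont")
}
var bankURL = sfURL as CFURL
AudioUnitSetProperty(midisynthUnit!,
                     AudioUnitPropertyID(kMusicDeviceProperty_SoundBankURL),
                     AudioUnitScope(kAudioUnitScope_Global),
                     0, &bankURL, UInt32(MemoryLayout<CFURL>.size))

// the graph must be initialized before the preload chacha
AUGraphInitialize(processingGraph!)

var enabled = UInt32(1)
AudioUnitSetProperty(midisynthUnit!,
                     AudioUnitPropertyID(kAUMIDISynthProperty_EnablePreload),
                     AudioUnitScope(kAudioUnitScope_Global),
                     0, &enabled, UInt32(MemoryLayout<UInt32>.size))

// program changes here only load the patches
for patch: UInt32 in [0, 46] {   // piano and harp, for example
    MusicDeviceMIDIEvent(midisynthUnit!, 0xC0, patch, 0, 0)
}

enabled = UInt32(0)
AudioUnitSetProperty(midisynthUnit!,
                     AudioUnitPropertyID(kAUMIDISynthProperty_EnablePreload),
                     AudioUnitScope(kAudioUnitScope_Global),
                     0, &enabled, UInt32(MemoryLayout<UInt32>.size))

AUGraphStart(processingGraph!)
```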

Where is this documented? Damned if I know. Do you know? Tell me.

Now when you want to play a note, you send a MIDI program change to tell the synth unit which patch to use.
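Continuing the sketch:

```swift
let channel: UInt32 = 0
let patch: UInt32 = 46   // harp, for example
MusicDeviceMIDIEvent(midisynthUnit!, 0xC0 | channel, patch, 0, 0)   // pick the patch
MusicDeviceMIDIEvent(midisynthUnit!, 0x90 | channel, 60, 64, 0)     // note on: middle C
// ...later
MusicDeviceMIDIEvent(midisynthUnit!, 0x80 | channel, 60, 0, 0)      // note off
```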

If you want to play a sequence, the traditional way to do that with an AUGraph is with the Audio Toolbox entities. MusicPlayer will play a MusicSequence. When you create your MusicSequence, you attach it to the AUGraph.
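A sketch of that, with a stand-in MIDI file name:

```swift
var musicSequence: MusicSequence? = nil
NewMusicSequence(&musicSequence)
if let midiFileURL = Bundle.main.url(forResource: "sibelius", withExtension: "mid") {
    MusicSequenceFileLoad(musicSequence!, midiFileURL as CFURL, .midiType, [])
}

// attach the sequence to the AUGraph so its events reach the MIDISynth unit
MusicSequenceSetAUGraph(musicSequence!, processingGraph)

var musicPlayer: MusicPlayer? = nil
NewMusicPlayer(&musicPlayer)
MusicPlayerSetSequence(musicPlayer!, musicSequence)
MusicPlayerPreroll(musicPlayer!)
MusicPlayerStart(musicPlayer!)
```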

There are examples in my Github project for sending note on/note off messages as well as playing a MusicSequence through the AUGraph.

AVFoundation Unit

Table of Contents

So we know how to do this in Core Audio. How do you do it in AVFoundation?

The class hierarchy for AVAudioUnitSampler is:
AVAudioNode -> AVAudioUnit -> AVAudioUnitMIDIInstrument -> AVAudioUnitSampler

So, our AVAudioUnit will be:
AVAudioNode -> AVAudioUnit -> AVAudioUnitMIDIInstrument -> AVAudioUnitMIDISynth

That part was obvious. What you need to do though is not especially clear. As usual, Apple doesn’t give you a clue. So, this is how I got it to work. I don’t know if this is the “official” method. If you know, tell me.

I’ve noticed that the provided AVAudioUnits work with no-arg inits. So I decided to create the AudioUnit’s AudioComponentDescription here and pass it up through the hierarchy to have one of those classes (probably AVAudioUnit) initialize it.

AVAudioUnit defines the audioUnit property. We can use that to set the kMusicDeviceProperty_SoundBankURL property for a Sound Font.

Remember that kAUMIDISynthProperty_EnablePreload chacha we did to pre-load patches? We can do that here too.
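Putting those pieces together, here is a sketch in Swift 3 syntax. Only the class name AVAudioUnitMIDISynth comes from this post; the method names are mine:

```swift
import AVFoundation
import AudioToolbox

class AVAudioUnitMIDISynth: AVAudioUnitMIDIInstrument {

    override init() {
        var description = AudioComponentDescription()
        description.componentType = OSType(kAudioUnitType_MusicDevice)
        description.componentSubType = OSType(kAudioUnitSubType_MIDISynth)
        description.componentManufacturer = OSType(kAudioUnitManufacturer_Apple)
        description.componentFlags = 0
        description.componentFlagsMask = 0
        // pass the description up the hierarchy and let AVAudioUnit do the heavy lifting
        super.init(audioComponentDescription: description)
    }

    /// Set kMusicDeviceProperty_SoundBankURL on the underlying audio unit.
    func loadMIDISynthSoundFont(_ soundBankURL: URL) {
        var bankURL = soundBankURL as CFURL
        let status = AudioUnitSetProperty(self.audioUnit,
                                          AudioUnitPropertyID(kMusicDeviceProperty_SoundBankURL),
                                          AudioUnitScope(kAudioUnitScope_Global),
                                          0, &bankURL, UInt32(MemoryLayout<CFURL>.size))
        if status != noErr {
            print("could not load the sound bank: \(status)")
        }
    }

    /// The same kAUMIDISynthProperty_EnablePreload chacha as in the Core Audio version.
    func loadPatches(_ patches: [UInt32]) {
        var enabled = UInt32(1)
        AudioUnitSetProperty(self.audioUnit,
                             AudioUnitPropertyID(kAUMIDISynthProperty_EnablePreload),
                             AudioUnitScope(kAudioUnitScope_Global),
                             0, &enabled, UInt32(MemoryLayout<UInt32>.size))
        for patch in patches {
            // a program change here just loads the patch
            MusicDeviceMIDIEvent(self.audioUnit, 0xC0, patch, 0, 0)
        }
        enabled = UInt32(0)
        AudioUnitSetProperty(self.audioUnit,
                             AudioUnitPropertyID(kAUMIDISynthProperty_EnablePreload),
                             AudioUnitScope(kAudioUnitScope_Global),
                             0, &enabled, UInt32(MemoryLayout<UInt32>.size))
    }
}
```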

That’s it.

To use it, attach it to your audio engine.

You can play a sequence via the AVAudioSequencer which is attached to your engine. If you don’t preload your patches, the sequencer will do that for you.

This is how to load a standard MIDI file into the sequencer.
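A sketch of the whole dance, using the class sketched above; the file names are stand-ins, and in a real app the engine and sequencer would be instance variables:

```swift
let engine = AVAudioEngine()
let midiSynth = AVAudioUnitMIDISynth()

if let bankURL = Bundle.main.url(forResource: "FluidR3_GM", withExtension: "sf2") {
    midiSynth.loadMIDISynthSoundFont(bankURL)
}

engine.attach(midiSynth)
engine.connect(midiSynth, to: engine.mainMixerNode, format: nil)

do {
    try engine.start()
    midiSynth.loadPatches([0, 46])   // optional: the sequencer will preload for you if you skip it

    let sequencer = AVAudioSequencer(audioEngine: engine)
    if let midiFileURL = Bundle.main.url(forResource: "sibelius", withExtension: "mid") {
        try sequencer.load(from: midiFileURL, options: [])
    }
    sequencer.prepareToPlay()
    try sequencer.start()
} catch {
    print("could not play the sequence: \(error)")
}
```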

The sequencer can also be created with NSData. This is quite convenient – everyone loves creating an NSMutableData instance and then shoving bytes into it, right?
Have a MusicSequence? Your only option is to turn it into NSData.
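A sketch of that conversion, assuming a MusicSequence named musicSequence and the sequencer from above:

```swift
var outData: Unmanaged<CFData>?
let status = MusicSequenceFileCreateData(musicSequence, .midiType, .eraseFile, 480, &outData)
if status == noErr, let cfData = outData?.takeRetainedValue() {
    do {
        try sequencer.load(from: cfData as Data, options: [])
    } catch {
        print("could not load the sequence data: \(error)")
    }
}
```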

This works. If you have a better way, let me know.

Summary

Table of Contents

All this to create an AVAudioUnit subclass.

You should preload the patches you are going to use. If you’re going to use an AVAudioSequencer, you don’t have to; it will do it for you.

Create an AVAudioUnit subclass and pass a Core Audio AudioComponentDescription to a superclass in your init function.

You can access the audioUnit in your AVAudioUnit subclass and set properties on it using Core Audio.

Resources

Table of Contents

Posted in AVFoundation, Core Audio, Swift | Tagged , | 9 Responses

The Great AVAudioUnitSampler workout

Swift Language

The Great AVAudioUnitSampler workout

Table of Contents

Introduction

Table of Contents

Little by little, AVFoundation audio classes are taking over Core Audio. Unfortunately, the pace is glacial so Core Audio is going to be around for another eon or so.

The AVAudioUnitSampler is the AVFoundation version of the Core Audio kAudioUnitSubType_Sampler AUNode. It is a mono-timbral polyphonic sampler – it plays audio.

With AVFoundation, we create an AVAudioEngine instead of the Core Audio AUGraph. Where the AUGraph had AUNodes attached, the AVAudioEngine has AVAudioNodes attached.

The class hierarchy is:
AVAudioNode -> AVAudioUnit -> AVAudioUnitMIDIInstrument -> AVAudioUnitSampler

The engine already has a mixer node and an output node attached to it out of the box. We simply need to create the sampler, attach it to the engine, and connect the sampler to the mixer.
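A sketch in Swift 3 syntax (in Swift 2 it is attachNode rather than attach); these would be instance variables in a real app:

```swift
import AVFoundation

let engine = AVAudioEngine()
let sampler = AVAudioUnitSampler()

engine.attach(sampler)
engine.connect(sampler, to: engine.mainMixerNode, format: nil)   // the mixer is already wired to the output
```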

Since we’re playing audio, the AVAudioSession needs to be configured for that and activated.
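For example:

```swift
let session = AVAudioSession.sharedInstance()
do {
    try session.setCategory(AVAudioSessionCategoryPlayback)
    try session.setActive(true)
} catch {
    print("could not configure the audio session: \(error)")
}
```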

Starting the engine is straightforward.
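Something like:

```swift
do {
    try engine.start()
} catch {
    print("could not start the engine: \(error)")
}
```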

We probably want to ask for notifications when the engine or session changes. Here is how I do that.
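One way to do it (not necessarily the author's exact code) is with block-based observers; keep the returned tokens around so you can remove them later:

```swift
let configToken = NotificationCenter.default.addObserver(forName: .AVAudioEngineConfigurationChange,
                                                         object: engine,
                                                         queue: nil) { notification in
    print("the engine configuration changed: \(notification)")
}

let interruptionToken = NotificationCenter.default.addObserver(forName: .AVAudioSessionInterruption,
                                                               object: session,
                                                               queue: nil) { notification in
    print("the audio session was interrupted: \(notification)")
}
```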

Once the engine has been started, you can send MIDI messages to it. For note on/note off messages, perhaps you added actions on a button for touch down and touch up.
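A sketch, wired to the touch down and touch up actions:

```swift
func noteOn() {
    sampler.startNote(60, withVelocity: 64, onChannel: 0)   // middle C, velocity 64, channel 0
}

func noteOff() {
    sampler.stopNote(60, onChannel: 0)                      // must match the note and channel
}
```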

MIDI messages are useful, but for actually playing music, use the new AVAudioSequencer.
Its init method connects it to your engine. Then you load a standard MIDI file into the sequencer. The start and stop functions work as expected. But if you’ve already played the sequence and then call start again, you will hear nothing, because the current position is no longer at the beginning of the sequence. Simply reset it to 0.
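A sketch of the sequencer dance, continuing with the engine from above; the MIDI file name is a stand-in:

```swift
let sequencer = AVAudioSequencer(audioEngine: engine)
if let fileURL = Bundle.main.url(forResource: "sibelius", withExtension: "mid") {
    do {
        try sequencer.load(from: fileURL, options: [])
        sequencer.prepareToPlay()
    } catch {
        print("could not load the MIDI file: \(error)")
    }
}

// play it
do {
    try sequencer.start()
} catch {
    print("could not start the sequencer: \(error)")
}

// ...later: stop, then rewind before playing again, or you'll hear nothing
sequencer.stop()
sequencer.currentPositionInBeats = 0
```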

Sampler from SoundFont

Table of Contents

We need to give the sampler some waveforms to play. We have several options. Let’s start with SoundFonts.

The sampler function loadSoundBankInstrumentAtURL will load a SoundFont.

I use a SoundFont from MuseScore. There are many SoundFonts available for download online.

You need to specify a General MIDI patch number or program change number. (See resources.) You also need to specify which bank to use within the SoundFont. I use two Core Audio constants to do this.
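A sketch, continuing with the sampler from above; the SoundFont file name is a stand-in, and program 0 is the General MIDI grand piano:

```swift
import AudioToolbox   // for the kAUSampler constants

guard let bankURL = Bundle.main.url(forResource: "FluidR3_GM", withExtension: "sf2") else {
    fatalError("could not find the SoundFont")
}
do {
    try sampler.loadSoundBankInstrument(at: bankURL,
                                        program: 0,
                                        bankMSB: UInt8(kAUSampler_DefaultMelodicBankMSB),
                                        bankLSB: UInt8(kAUSampler_DefaultBankLSB))
} catch {
    print("could not load the SoundFont: \(error)")
}
```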

Sampler from aupreset

Table of Contents

If you don’t have an aupreset file, read my blog post on how to create one.

Sampler from sound files

Table of Contents

You can have the sampler load a directory of audio files. If the files are in Core Audio Format (caf), you can embed range metadata in each file. A simpler method is to name each file with the root pitch at the end of the basename. So violinC4.wav would map to C4, or middle C.
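A sketch with stand-in file names, continuing with the sampler from above:

```swift
// root pitch comes from the file name: violinC4.wav maps to C4
let fileNames = ["violinC4", "violinD4", "violinE4"]
let urls = fileNames.flatMap { Bundle.main.url(forResource: $0, withExtension: "wav") }
do {
    try sampler.loadAudioFiles(at: urls)
} catch {
    print("could not load the audio files: \(error)")
}
```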

Multiple voices

Table of Contents

Currently, the sampler is the only subclass of AVAudioUnitMIDIInstrument. There is no equivalent to the multitimbral kAudioUnitSubType_DLSSynth or kAudioUnitSubType_MIDISynth audio units.

What you can do is attach multiple AVAudioUnitSampler instances to the engine.
Something like this:
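A sketch, continuing with the engine from above; the patch choices and SoundFont name are stand-ins:

```swift
// one sampler per timbre, all feeding the mixer
let piano = AVAudioUnitSampler()
let strings = AVAudioUnitSampler()

engine.attach(piano)
engine.attach(strings)
engine.connect(piano, to: engine.mainMixerNode, format: nil)
engine.connect(strings, to: engine.mainMixerNode, format: nil)

// give each its own patch from the same SoundFont
if let bankURL = Bundle.main.url(forResource: "FluidR3_GM", withExtension: "sf2") {
    do {
        try piano.loadSoundBankInstrument(at: bankURL, program: 0,
                                          bankMSB: UInt8(kAUSampler_DefaultMelodicBankMSB),
                                          bankLSB: UInt8(kAUSampler_DefaultBankLSB))
        try strings.loadSoundBankInstrument(at: bankURL, program: 48,
                                            bankMSB: UInt8(kAUSampler_DefaultMelodicBankMSB),
                                            bankLSB: UInt8(kAUSampler_DefaultBankLSB))
    } catch {
        print("could not load the instruments: \(error)")
    }
}
```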

But what about using a sequencer with that and having the individual tracks use different timbres?
You’d have to create a custom subclass of AVAudioUnitMIDIInstrument, perhaps configured as a kAudioUnitSubType_DLSSynth or kAudioUnitSubType_MIDISynth, which are multi-timbral audio units.

What a coincidence! My next blog post is about creating a multi-timbral AVAudioUnitMIDIInstrument using kAudioUnitSubType_MIDISynth.

Or you can just use AVMIDIPlayer which uses a sound font and reads a MIDI file.

Summary

Table of Contents

The AVAudioUnitSampler is useful, but needs improvement – especially when used with AVAudioSequencer.

Resources

Table of Contents

Posted in AVFoundation, Swift | Tagged , | 2 Responses

Creating an aupreset

Swift Language

Using AU Lab to create an aupreset

Just fire up AU Lab. The UI is totally intuitive, amirite?


Introduction

Table of Contents

Here are the steps to create an aupreset that consists of several audio files. We will set which MIDI key will trigger the individual files.

  • Fire up AU Lab


  • The download link is under resources if you don’t have it.
    Launch it.

  • Create a new document


  • Choose factory configuration “Stereo Out”
    Set audio input device to None
    createDocument

  • Add Instrument

  • Choose “Add Audio Unit Instrument” from the “Edit” menu.
    Set the instrument type to Apple->AUSampler.
    Leave the MIDI Input Source to Any controller.
    addInstrument

  • Changing the Default Instrument

  • You should now see the keyboard. Press some keys and you’ll hear the default sine wave.

    Now you need to replace the sine wave with your sound file(s).
    There are 3 icons under the keyboard on the right. Press the rightmost icon that looks like a keyboard to bring up the Zone and Layers editor.
    Under Layer 1 you should see “Sine 440 Built-In” for the samples.
    On the bottom left, under the Zone and Layers tree control, you should see a + and – button.

    samplerEditor

    With the Sine wave selected, press the + button to add your sound file.
    When you press the keys on the keyboard now, you will hear the sine wave and your sound file.
    Select the Sine wave and press the – button to delete it.

  • Key Range

  • You set the key that will trigger your sound file as-is by setting the Root. Set it to C4, and when you press C4 on your keyboard, your sound will play. Play C5 and it will be resampled an octave higher. Maybe you want this, maybe not. That’s why you set the range and root to something that is acceptable to you.

    If you want to create a “drum machine”, where each key is a different drum patch, then you set the root to the key you’d like, but also the range to be that key too. So, for C4, the range is C4-C4 and the root is C4. You will hear your patch only when C4 is pressed.
    wavMapped

  • Save Preset


  • There are 4 combo boxes at the top of the window. The third one, labeled Untitled by default, is how you save your preset. Press it and choose Save Preset As… from the popup. Type a name, and choose User among the radio buttons.

    savePreset

    By choosing User, your preset file will be saved to ~/Library/Audio/Presets/Apple/AUSampler/
    The aupreset file is just a plist. Go ahead and look at it. Check out the file paths for your samples.

    So, how does this work on iOS when those paths don’t exist?

  • File Paths

  • According to Tech Note TN2283, the AUSampler will use these rules to resolve each path:

    • If the audio file is found at the original path, it is loaded.
    • If the audio file is NOT found, the AUSampler looks to see if a path includes a portion matching “/Sounds/”, “/Sampler Files/” or “/Apple Loops/” in that order.
    • If the path DOES NOT include one of the listed sub-paths, an error is returned.
    • If the path DOES include one of the listed sub-paths, the portion of the path preceding the sub-path is removed and the following directory location constants are substituted in the following order:

    Bundle Directory
    NSLibraryDirectory (NOTE: Only on OS X)
    NSDocumentDirectory
    NSDownloadsDirectory

    In an iOS application let’s say the original path in the aupreset is ~/Library/Audio/Sounds/bang.caf.
    The AUSampler would then search for the audio file in the following places:

    <Bundle_Directory>/Sounds/bang.caf
    <NSDocumentDirectory>/Sounds/bang.caf
    <NSDownloadsDirectory>/Sounds/bang.caf

    tl;dr Create a Sounds directory and place your samples there.


Summary

Table of Contents

Add sample files to an instrument in AU Lab. One of the things you can do is set the range
and root pitch.


Resources

Table of Contents

AU Lab download
currently version 2.3 from 2012

WWDC 2011 video viewable in Safari only.

Posted in Apple, Core Audio | Tagged , , | 1 Response

Swift 2: AVFoundation to play audio or MIDI

Swift Language

Swift AVFoundation

There are many ways to play sound in iOS. Core Audio has been around for a while and it is very powerful. It is a C API, so using it from Objective-C and Swift is possible, but awkward. Apple has been moving towards a higher level API with AVFoundation. Here I will summarize how to use AVFoundation for several common audio tasks.

N.B. Some of these examples use new capabilities of iOS 8.

This is a newer version of this Swift 1 blog post.

Playing an Audio file

Let’s start by loading an audio file with an AVAudioPlayer instance. There are several audio formats that the player will grok. I had trouble with a few MP3 files that played in iTunes or VLC, but caused a cryptic exception in the player. So, check your source audio files first.

If you want other formats, your Mac has a converter named afconvert. See the man page.

Let’s go step by step.

Get the file URL.

Create the player. You will need to make the player an instance variable. If you just use a local variable, it will be popped off the stack before you hear anything!

You can provide the player a hint for how to parse the audio data. There are several constants for file type UTIs you can use. For our MP3 file, we’ll use AVFileTypeMPEGLayer3.

Now configure the player. prepareToPlay() “pre-rolls” the audio file to reduce start up delays when you finally call play().
You can set the player’s delegate to track status.

To set the delegate you have to make a class implement the player delegate protocol. My class has the clever name “Sound”. The delegate protocol requires the NSObjectProtocol, so Sound is a subclass of NSObject.

Finally, the transport controls that can be called from an action.
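Here is a sketch of all of those steps in Swift 2 syntax; the class name Sound is from the post, the file name is a stand-in:

```swift
import AVFoundation

class Sound: NSObject, AVAudioPlayerDelegate {
    var player: AVAudioPlayer?   // instance variable, not a local, or you'll hear nothing

    override init() {
        super.init()
        guard let url = NSBundle.mainBundle().URLForResource("modem", withExtension: "mp3") else {
            print("could not find the sound file")
            return
        }
        do {
            // the fileTypeHint helps the player parse the data
            player = try AVAudioPlayer(contentsOfURL: url, fileTypeHint: AVFileTypeMPEGLayer3)
            player?.delegate = self
            player?.prepareToPlay()   // "pre-roll" to cut down startup latency
        } catch {
            print("could not create the player: \(error)")
        }
    }

    // transport controls, callable from an action
    func play() {
        player?.play()
    }

    func stop() {
        player?.stop()
    }

    // MARK: AVAudioPlayerDelegate
    func audioPlayerDidFinishPlaying(player: AVAudioPlayer, successfully flag: Bool) {
        print("finished playing: \(flag)")
    }
}
```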

Audio Session

The Audio Session singleton is an intermediary between your app and the media daemon. Your app and all other apps (should) make requests to the shared session. Since we are playing an audio file, we should tell the session that is our intention by requesting that its category be AVAudioSessionCategoryPlayback, and then make the session active. You should do this in the code above right before you call play() on the player.

Setting a session for playback.
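For example (Swift 2 syntax):

```swift
let session = AVAudioSession.sharedInstance()
do {
    try session.setCategory(AVAudioSessionCategoryPlayback)
    try session.setActive(true)
} catch {
    print("could not set the session category: \(error)")
}
```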

Go to Table of Contents

Playing a MIDI file

You use AVMIDIPlayer to play standard MIDI files. Loading the player is similar to loading the AVAudioPlayer. You need to load a soundbank from a Soundfont or DLS file. The player also has a pre-roll prepareToPlay() function.

I’m not interested in copyright infringement, so I have not included either a DLS or SF2 file. So do a web search for a GM SoundFont2 file. They are loaded in the same manner. I’ve tried the MuseScore SoundFont and it sounds OK. There is probably a General MIDI DLS on your OSX system already: /System/Library/Components/CoreAudio.component/Contents/Resources/gs_instruments.dls. Copy this to the project bundle if you want to try it.
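A sketch in Swift 2 syntax with stand-in file names; keep the player in an instance variable in a real app:

```swift
guard let midiURL = NSBundle.mainBundle().URLForResource("ntbldmtn", withExtension: "mid"),
      bankURL = NSBundle.mainBundle().URLForResource("GeneralUser", withExtension: "sf2") else {
    fatalError("could not find the MIDI file or the sound bank")
}
do {
    let midiPlayer = try AVMIDIPlayer(contentsOfURL: midiURL, soundBankURL: bankURL)
    midiPlayer.prepareToPlay()
    midiPlayer.play {
        print("finished playing")
    }
} catch {
    print("could not create the MIDI player: \(error)")
}
```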

Go to Table of Contents

Audio Engine

iOS 8 introduces a new audio engine which seems to be the successor to Core Audio’s AUGraph and friends. See my article on using these classes in Swift.

The new AVAudioEngine class is the analog to AUGraph. You create AVAudioNode instances and attach them to the engine. Then you start the engine to initiate data flow.

Here is an engine that has a player node attached to it. The player node is attached to the engine’s mixer. These are instance variables.

Then you need to start the engine.
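A sketch in Swift 2 syntax; engine and playerNode would be instance variables:

```swift
import AVFoundation

let engine = AVAudioEngine()
let playerNode = AVAudioPlayerNode()

engine.attachNode(playerNode)
engine.connect(playerNode, to: engine.mainMixerNode, format: nil)

do {
    try engine.start()
} catch {
    print("could not start the engine: \(error)")
}
```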

Cool. Silence.

Let’s give it something to play. It can be an audio file, or as we’ll see, a MIDI file or a computed buffer.
In this example we create an AVAudioFile instance from an MP3 file, and tell the playerNode to play it.

First, load an audio file. Or load an audio file into a buffer.

Now we hand the buffer to the player node by “scheduling” it, then playing it.
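A sketch in Swift 2 syntax, continuing with the playerNode from above; the file name is a stand-in:

```swift
guard let url = NSBundle.mainBundle().URLForResource("modem", withExtension: "mp3") else {
    fatalError("could not find the audio file")
}
do {
    // read the file into a PCM buffer
    let file = try AVAudioFile(forReading: url)
    let buffer = AVAudioPCMBuffer(PCMFormat: file.processingFormat,
                                  frameCapacity: AVAudioFrameCount(file.length))
    try file.readIntoBuffer(buffer)

    // schedule the buffer on the player node, then play it
    playerNode.scheduleBuffer(buffer, atTime: nil, options: .Loops, completionHandler: nil)
    playerNode.play()
} catch {
    print("could not load or play the file: \(error)")
}
```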

There are quite a few variations on scheduleBuffer. Have fun trying them out.

Go to Table of Contents

Playing MIDI Notes

How about triggering MIDI notes/events based on UI events? You need an instance of AVAudioUnitMIDIInstrument among your nodes. There is one concrete subclass named AVAudioUnitSampler. Create a sampler and attach it to the engine.

In your UI’s action function, load the appropriate instrument into the sampler. The program parameter is a General MIDI instrument number. You might want to set up constants. Soundbanks have banks of sound. You need to specify which bank to use with the bankMSB and bankLSB. I use Core Audio constants here to choose the “melodic” bank and not the “percussion” bank.

Then send a MIDI program change by calling our load function. After that, you can send startNote and stopNote messages to the sampler. You need to match the parameters for each start and stop message.
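A sketch in Swift 2 syntax, continuing with the engine from above; the SoundFont file name is a stand-in:

```swift
import AudioToolbox   // for the kAUSampler constants

let sampler = AVAudioUnitSampler()
engine.attachNode(sampler)
engine.connect(sampler, to: engine.mainMixerNode, format: nil)

func loadPatch(patch: UInt8) {
    guard let bankURL = NSBundle.mainBundle().URLForResource("GeneralUser", withExtension: "sf2") else {
        return
    }
    do {
        try sampler.loadSoundBankInstrumentAtURL(bankURL,
                                                 program: patch,
                                                 bankMSB: UInt8(kAUSampler_DefaultMelodicBankMSB),
                                                 bankLSB: UInt8(kAUSampler_DefaultBankLSB))
    } catch {
        print("could not load the patch: \(error)")
    }
}

// touch down
func startNote() {
    loadPatch(0)                                          // GM grand piano
    sampler.startNote(60, withVelocity: 64, onChannel: 0)
}

// touch up: the note and channel must match the start message
func stopNote() {
    sampler.stopNote(60, onChannel: 0)
}
```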

Go to Table of Contents

Summary

This is a good start I hope. There are other things I’ll cover soon, such as generating and processing the audio buffer data.

Resources

Go to Table of Contents

Posted in Core MIDI, Swift | Tagged , , | 10 Responses

Java 9 jshell OSX bug workaround

Java logo

Java 9 jshell

You’ve downloaded the current build of Java 9, and perhaps Kulla. You try to run jshell and blammo. Stack dump.


Introduction

Table of Contents

So, you’ve installed Java 9 on your Mac. Maybe one of the Early Access builds. I’m playing around with modules, so I’m using the Jigsaw version.

Let’s check.

Let’s run jshell.

D’oh!

OK, let’s try it with a pre-built kulla.jar from the AdoptOpenJDK CloudBees instance.

Same nonsense.

I even downloaded the kulla sources and built them. No difference.


The Workaround

Table of Contents

Add your hostname to /etc/hosts.

(My hostname is rockhopper – the penguin of course, not the bike).


Summary

Table of Contents

A simple /etc/hosts one liner fixes the problem.

Yay! Now I can use Java as I’ve used LISP since the 80s!


Resources

Table of Contents

Posted in Java | Tagged , | 5 Responses

Multiple Java VMs on OSX

Java logo

Multiple Java VM on OSX


Introduction

Table of Contents

Let’s start by reviewing the baroque installation of Java on OSX.

Try these (highlighted) commands in a terminal.

So, when you install Java, /usr/bin/java is the vm command. It’s a symbolic link to the “current version”.
In my case Current is a symlink to a directory named A.

A lot of “legacy” links in there. Right now, “Current” is the one we care about.

Let’s see what version your current default vm happens to be:

If you simply run java -version (no path), you get the same output.

In my case, I installed the OpenJDK preview of Java 9.

But wait. In the directory listing for /System/Library/Frameworks/JavaVM.framework/Versions there was no Java 9. Where are the 1.7+ VMs?

You can use /usr/libexec/java_home to find the names of your installed VMs.

So, the “newer” i.e. current VMs are in /Library/Java/JavaVirtualMachines.
That last line shows your current “default” VM. Run java_home with no arguments to verify.

OK. So what?
Let’s see what java_home with the -v (lower case v this time) flag and a VM version gives us.
(use the vm name in /Library/Java/JavaVirtualMachines without the jdk prefix)

So, this gives us the full path to the installed VMs.

What about Java 9? Well, OpenJDK’s Java 9 uses a different naming convention, so you simply use 9 as the version.

This is a way to set your environment variables in your shell login config file (e.g. ~/.bash_profile; NB not .bashrc).

Of course, if you want to change your VM “on the fly”, you’ll have to remove the old VM path from PATH. Unfortunately, even with Bash you have to engage in some sed nonsense. If you know an easier way (than sed), let me know.

So, cool. In the terminal, you get the VM you want. What about things that aren’t run from the terminal, like an IDE? If you run Eclipse with the current snapshot of Java 9, it will crash. Setting your environment variables in .bash_profile does not affect these launches.


Eclipse

Table of Contents

If you have Java 9 installed, you won’t be able to run Eclipse. The solution is to edit Eclipse’s config file to use a specific VM. Here is mine. I added the lines -vm and the path to Java 1.8.


Global variables

Table of Contents

How do you set environment variables globally on OSX?

The current way (in El Capitan) is to create a plist in ~/Library/LaunchAgents/ that will be read by launchctl. In older versions of OSX, you edited /etc/launchd.conf.

Eclipse seems to ignore these variables though. The eclipse.ini trick works.


Summary

Table of Contents

Java on OSX is a bit of a mess.


Resources

Table of Contents

Posted in Java | Leave a comment