Multi-timbral AVAudioUnitMIDIInstrument


Multi-timbral AVAudioUnitMIDIInstrument in Swift

Table of Contents

Introduction

Table of Contents

There is one subclass of AVAudioUnitMIDIInstrument provided by Apple – the AVAudioUnitSampler. The only problem is that it is mono-timbral; it cannot play more than one timbre at a time.

To create a new AVAudioUnit, we need to use a bit of Core Audio.
So, I’ll give you two examples: one using Core Audio and an AUGraph, and then one using AVFoundation and AVAudioEngine.

Core Audio Unit

Table of Contents

We need to create an AUGraph and attach nodes to it.

Here’s the first step. Create your class, define instance variables, and create the graph using Core Audio’s C API.

Here is the item we’re interested in. Create a node that’s an Audio Unit Music Device with a subtype MIDISynth and add it to the graph.

And also create the usual io node, kAudioUnitSubType_RemoteIO on iOS, in the same way. I’m not going to bother with a mixer in this example.

Get the audio units from the nodes using AUGraphNodeInfo in order to get/set properties on them later. Then connect them using AUGraphConnectNodeInput.
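A minimal sketch of that setup in current Swift; the class name and the error-check helper are just placeholders here:

```swift
import Foundation
import AudioToolbox

class MIDISynthGraph {
    var processingGraph: AUGraph?
    var midisynthNode = AUNode()
    var ioNode = AUNode()
    var midisynthUnit: AudioUnit?
    var ioUnit: AudioUnit?

    func checkError(_ status: OSStatus) {
        if status != noErr { print("Core Audio error: \(status)") }
    }

    func createGraph() {
        checkError(NewAUGraph(&processingGraph))

        // The multi-timbral MIDISynth music device node.
        var synthDescription = AudioComponentDescription(
            componentType: kAudioUnitType_MusicDevice,
            componentSubType: kAudioUnitSubType_MIDISynth,
            componentManufacturer: kAudioUnitManufacturer_Apple,
            componentFlags: 0,
            componentFlagsMask: 0)
        checkError(AUGraphAddNode(processingGraph!, &synthDescription, &midisynthNode))

        // The usual io node: RemoteIO on iOS.
        var ioDescription = AudioComponentDescription(
            componentType: kAudioUnitType_Output,
            componentSubType: kAudioUnitSubType_RemoteIO,
            componentManufacturer: kAudioUnitManufacturer_Apple,
            componentFlags: 0,
            componentFlagsMask: 0)
        checkError(AUGraphAddNode(processingGraph!, &ioDescription, &ioNode))

        // Open the graph, grab the units so we can set properties on them later,
        // then wire the synth's output into the io unit's input.
        checkError(AUGraphOpen(processingGraph!))
        checkError(AUGraphNodeInfo(processingGraph!, midisynthNode, nil, &midisynthUnit))
        checkError(AUGraphNodeInfo(processingGraph!, ioNode, nil, &ioUnit))
        checkError(AUGraphConnectNodeInput(processingGraph!, midisynthNode, 0, ioNode, 0))

        checkError(AUGraphInitialize(processingGraph!))
        checkError(AUGraphStart(processingGraph!))
    }
}
```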

To load the Sound Font, set the kMusicDeviceProperty_SoundBankURL property on your unit. I’m using a SoundFont from MuseScore here.
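Something like this, as another method on the class above; the SoundFont file name is an assumption (any General MIDI SoundFont will do):

```swift
func loadSoundFont() {
    guard let bankURL = Bundle.main.url(forResource: "FluidR3_GM", withExtension: "sf2") else {
        print("SoundFont not found in the bundle")
        return
    }
    var bankCFURL = bankURL as CFURL
    checkError(AudioUnitSetProperty(
        midisynthUnit!,
        AudioUnitPropertyID(kMusicDeviceProperty_SoundBankURL),
        AudioUnitScope(kAudioUnitScope_Global),
        0,
        &bankCFURL,
        UInt32(MemoryLayout<CFURL>.size)))
}
```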

The typical Sound Font contains dozens of patches. You don’t really want to load every single one of them. You should pre-load the patches you will actually use. The way to do that is a bit strange. You set the property kAUMIDISynthProperty_EnablePreload to true (1), send MIDI program change messages via MusicDeviceMIDIEvent for the patches you want to load, and then turn off kAUMIDISynthProperty_EnablePreload by setting it to 0. You need to have the AUGraph initialized via AUGraphInitialize before calling this.
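The pre-load dance, sketched as another method on the same class; the patch numbers you pass in are whatever General MIDI programs you plan to use:

```swift
func preloadPatches(_ patches: [UInt32]) {
    // The AUGraph must already be initialized (AUGraphInitialize) at this point.
    var enabled = UInt32(1)
    checkError(AudioUnitSetProperty(
        midisynthUnit!,
        AudioUnitPropertyID(kAUMIDISynthProperty_EnablePreload),
        AudioUnitScope(kAudioUnitScope_Global),
        0,
        &enabled,
        UInt32(MemoryLayout<UInt32>.size)))

    // A program change on channel 0 for each patch to load.
    for patch in patches {
        checkError(MusicDeviceMIDIEvent(midisynthUnit!, 0xC0, patch, 0, 0))
    }

    // Turn pre-loading back off.
    enabled = 0
    checkError(AudioUnitSetProperty(
        midisynthUnit!,
        AudioUnitPropertyID(kAUMIDISynthProperty_EnablePreload),
        AudioUnitScope(kAudioUnitScope_Global),
        0,
        &enabled,
        UInt32(MemoryLayout<UInt32>.size)))
}
```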

Where is this documented? Damned if I know. Do you know? Tell me.

Now when you want to play a note, you send a MIDI program change to tell the synth unit which patch to use.
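Something along these lines; the note number, velocity, and channel here are arbitrary:

```swift
func play(noteNumber: UInt32, patch: UInt32) {
    let channel: UInt32 = 0
    // Program change first, so the synth knows which patch this channel uses...
    checkError(MusicDeviceMIDIEvent(midisynthUnit!, 0xC0 | channel, patch, 0, 0))
    // ...then note on. Send 0x80 (note off) later to stop it.
    checkError(MusicDeviceMIDIEvent(midisynthUnit!, 0x90 | channel, noteNumber, 64, 0))
}
```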

If you want to play a sequence, the traditional way to do that with an AUGraph is with the Audio Toolbox entities. MusicPlayer will play a MusicSequence. When you create your MusicSequence, you attach it to the AUGraph.
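A rough sketch; the MusicPlayer is kept in a property so it is not thrown away while the sequence plays:

```swift
var musicPlayer: MusicPlayer?

func play(sequence: MusicSequence) {
    checkError(NewMusicPlayer(&musicPlayer))

    // Route the sequence through our graph instead of a private default one.
    checkError(MusicSequenceSetAUGraph(sequence, processingGraph))

    checkError(MusicPlayerSetSequence(musicPlayer!, sequence))
    checkError(MusicPlayerPreroll(musicPlayer!))
    checkError(MusicPlayerStart(musicPlayer!))
}
```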

There are examples in my Github project for sending note on/note off messages as well as playing a MusicSequence through the AUGraph.

AVFoundation Unit

Table of Contents

So we know how to do this in Core Audio. How do you do it in AVFoundation?

The class hierarchy for AVAudioUnitSampler is:
AVAudioNode -> AVAudioUnit -> AVAudioUnitMIDIInstrument -> AVAudioUnitSampler

So, our AVAudioUnit will be:
AVAudioNode -> AVAudioUnit -> AVAudioUnitMIDIInstrument -> AVAudioUnitMIDISynth

That part was obvious. What you need to do though is not especially clear. As usual, Apple doesn’t give you a clue. So, this is how I got it to work. I don’t know if this is the “official” method. If you know, tell me.

I’ve noticed that the provided AVAudioUnits work with no-arg inits. So, I decided to create the AudioUnit’s AudioComponentDescription here and pass it up through the hierarchy to have one of those classes (probably AVAudioUnit) initialize it.
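A minimal sketch of the subclass, matching the hierarchy above:

```swift
import AVFoundation
import AudioToolbox

class AVAudioUnitMIDISynth: AVAudioUnitMIDIInstrument {

    /// A no-arg init, like Apple's own AVAudioUnits. We build the component
    /// description for the MIDISynth audio unit and hand it up the hierarchy.
    init() {
        var description = AudioComponentDescription()
        description.componentType = kAudioUnitType_MusicDevice
        description.componentSubType = kAudioUnitSubType_MIDISynth
        description.componentManufacturer = kAudioUnitManufacturer_Apple
        description.componentFlags = 0
        description.componentFlagsMask = 0
        super.init(audioComponentDescription: description)
    }
}
```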

AVAudioUnit defines the audioUnit property. We can use that to set the kMusicDeviceProperty_SoundBankURL property for a Sound Font.

Remember that kAUMIDISynthProperty_EnablePreload cha-cha we did to pre-load patches? We can do that here too.
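Sketched as an extension of the class above; these are the same Core Audio property calls as before, just aimed at the unit this AVAudioUnit wraps:

```swift
extension AVAudioUnitMIDISynth {

    func loadMIDISynthSoundFont(_ bankURL: URL) {
        var url = bankURL as CFURL
        let status = AudioUnitSetProperty(
            self.audioUnit,
            AudioUnitPropertyID(kMusicDeviceProperty_SoundBankURL),
            AudioUnitScope(kAudioUnitScope_Global),
            0,
            &url,
            UInt32(MemoryLayout<CFURL>.size))
        if status != noErr { print("error loading sound bank: \(status)") }
    }

    /// Enable pre-loading, send program changes for the patches you want,
    /// then turn pre-loading back off.
    func loadPatches(_ patches: [UInt32]) {
        var enabled = UInt32(1)
        _ = AudioUnitSetProperty(self.audioUnit,
            AudioUnitPropertyID(kAUMIDISynthProperty_EnablePreload),
            AudioUnitScope(kAudioUnitScope_Global),
            0, &enabled, UInt32(MemoryLayout<UInt32>.size))

        for patch in patches {
            _ = MusicDeviceMIDIEvent(self.audioUnit, 0xC0, patch, 0, 0)
        }

        enabled = 0
        _ = AudioUnitSetProperty(self.audioUnit,
            AudioUnitPropertyID(kAUMIDISynthProperty_EnablePreload),
            AudioUnitScope(kAudioUnitScope_Global),
            0, &enabled, UInt32(MemoryLayout<UInt32>.size))
    }
}
```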

That’s it.

To use it, attach it to your audio engine.
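Attaching and connecting it looks like any other AVAudioUnit; a sketch:

```swift
let engine = AVAudioEngine()
let midiSynth = AVAudioUnitMIDISynth()

engine.attach(midiSynth)
engine.connect(midiSynth, to: engine.mainMixerNode, format: nil)

do {
    try engine.start()
} catch {
    print("could not start the engine: \(error)")
}
```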

You can play a sequence via the AVAudioSequencer which is attached to your engine. If you don’t preload your patches, the sequencer will do that for you.

This is how to load a standard MIDI file into the sequencer.
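A sketch, assuming a standard MIDI file named sequence.mid in the bundle and the engine from above:

```swift
let sequencer = AVAudioSequencer(audioEngine: engine)

if let fileURL = Bundle.main.url(forResource: "sequence", withExtension: "mid") {
    do {
        try sequencer.load(from: fileURL, options: [])
        sequencer.prepareToPlay()
        try sequencer.start()
    } catch {
        print("could not play the sequence: \(error)")
    }
}
```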

The sequencer can also be created with NSData. This is quite convenient – everyone loves creating an NSMutableData instance and then shoving bytes into it. Right?
Have a MusicSequence? Your only option is to turn it into NSData.
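One way to do that is MusicSequenceFileCreateData, which flattens the sequence into standard-MIDI-file bytes in memory; a sketch (the resolution value is arbitrary):

```swift
func sequenceData(_ musicSequence: MusicSequence) -> Data? {
    var outData: Unmanaged<CFData>?
    let status = MusicSequenceFileCreateData(
        musicSequence,
        .midiType,      // write a standard MIDI file, but in memory
        .eraseFile,
        480,            // resolution: ticks per quarter note
        &outData)
    if status != noErr { return nil }
    return outData?.takeRetainedValue() as Data?
}

// Then hand it to the sequencer:
// if let data = sequenceData(sequence) {
//     try sequencer.load(from: data, options: [])
// }
```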

This works. If you have a better way, let me know.

Summary

Table of Contents

All this to create an AVAudioUnit subclass.

You should preload the patches you are going to use. If you’re going to use an AVAudioSequencer, you don’t have to; it will do it for you.

Create an AVAudioUnit subclass and pass a Core Audio AudioComponentDescription to a superclass in your init function.

You can access the audioUnit in your AVAudioUnit subclass and set properties on it using Core Audio.

Resources

Table of Contents


The Great AVAudioUnitSampler workout


The Great AVAudioUnitSampler workout

Table of Contents

Introduction

Table of Contents

Little by little, AVFoundation audio classes are taking over Core Audio. Unfortunately, the pace is glacial so Core Audio is going to be around for another eon or so.

The AVAudioUnitSampler is the AVFoundation version of the Core Audio kAudioUnitSubType_Sampler AUNode. It is a mono-timbral polyphonic sampler – it plays audio.

With AVFoundation, we create an AVAudioEngine instead of the Core Audio AUGraph. Where the AUGraph had AUNodes attached, the AVAudioEngine has AVAudioNodes attached.

The class hierarchy is:
AVAudioNode -> AVAudioUnit -> AVAudioUnitMIDIInstrument -> AVAudioUnitSampler

The engine already has a mixer node and an output node attached to it out of the box. We simply need to create the sampler, attach it to the engine, and connect the sampler to the mixer.
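A minimal sketch of that wiring, using current Swift method names:

```swift
import AVFoundation

let engine = AVAudioEngine()
let sampler = AVAudioUnitSampler()

engine.attach(sampler)
engine.connect(sampler, to: engine.mainMixerNode, format: nil)
```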

Since we’re playing audio, the AVAudioSession needs to be configured for that and activated.

Starting the engine is straightforward.
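Session configuration and engine start-up together, roughly (the category and mode spellings are the current ones):

```swift
let session = AVAudioSession.sharedInstance()
do {
    try session.setCategory(.playback, mode: .default, options: [])
    try session.setActive(true)
    try engine.start()
} catch {
    print("could not start audio: \(error)")
}
```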

We probably want to ask for notifications when the engine or session changes. Here is how I do that.
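A sketch, assuming this lives in an NSObject-derived class such as your view controller; the notification names are the current Swift spellings and the handler bodies are placeholders:

```swift
func observeAudioNotifications() {
    NotificationCenter.default.addObserver(self,
        selector: #selector(engineConfigurationChanged(_:)),
        name: .AVAudioEngineConfigurationChange,
        object: engine)

    NotificationCenter.default.addObserver(self,
        selector: #selector(sessionInterrupted(_:)),
        name: AVAudioSession.interruptionNotification,
        object: nil)
}

@objc func engineConfigurationChanged(_ notification: Notification) {
    // A route change invalidated the engine's connections; rewire and restart here.
    print("engine configuration changed: \(notification)")
}

@objc func sessionInterrupted(_ notification: Notification) {
    print("audio session interrupted: \(notification)")
}
```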

Once the engine has been started, you can send MIDI messages to it. For note on/note off messages, perhaps you added actions on a button for touch down and touch up.
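For example, two actions wired to Touch Down and Touch Up Inside; the note number, velocity, and channel are arbitrary:

```swift
@IBAction func noteOnPressed(_ sender: UIButton) {   // Touch Down
    sampler.startNote(60, withVelocity: 64, onChannel: 0)
}

@IBAction func noteOffPressed(_ sender: UIButton) {  // Touch Up Inside
    sampler.stopNote(60, onChannel: 0)
}
```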

MIDI messages are useful, but for actually playing music, use the new AVAudioSequencer.
Its init method connects it to your engine. Then you load a standard MIDI file into the sequencer. The functions start and stop work as expected. But if you’ve already played the sequence and then call start again, you will hear nothing, because the current position is no longer at the beginning of the sequence. Simply reset it to 0.
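A sketch, continuing with the engine from above and assuming a MIDI file in the bundle:

```swift
let sequencer = AVAudioSequencer(audioEngine: engine)

func loadMIDIFile(named name: String) {
    guard let fileURL = Bundle.main.url(forResource: name, withExtension: "mid") else { return }
    do {
        try sequencer.load(from: fileURL, options: [])
        sequencer.prepareToPlay()
    } catch {
        print("could not load \(name): \(error)")
    }
}

func playSequence() {
    if sequencer.isPlaying {
        sequencer.stop()
    }
    // Rewind, or a second start() after the sequence has finished plays nothing.
    sequencer.currentPositionInBeats = 0
    do {
        try sequencer.start()
    } catch {
        print("could not start the sequencer: \(error)")
    }
}
```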

Sampler from SoundFont

Table of Contents

We need to give the sampler some waveforms to play. We have several options. Let’s start with SoundFonts.

The sampler function loadSoundBankInstrumentAtURL will load a SoundFont.

I use a SoundFont from MuseScore. There are many SoundFonts available for download online.

You need to specify a General MIDI patch number or program change number. (See resources.) You also need to specify which bank to use within the SoundFont. I use two Core Audio constants to do this.
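Roughly like this; the SoundFont file name and program number are assumptions, and the two bank constants come from AudioToolbox:

```swift
if let bankURL = Bundle.main.url(forResource: "FluidR3_GM", withExtension: "sf2") {
    do {
        try sampler.loadSoundBankInstrument(
            at: bankURL,
            program: 0,                                        // GM program 0: acoustic grand piano
            bankMSB: UInt8(kAUSampler_DefaultMelodicBankMSB),  // the melodic bank, not percussion
            bankLSB: UInt8(kAUSampler_DefaultBankLSB))
    } catch {
        print("could not load the sound bank: \(error)")
    }
}
```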

Sampler from aupreset

Table of Contents

If you don’t have an aupreset file, read my blog post on how to create one.
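Loading one at run time is a single call on the sampler; a sketch with a made-up preset name:

```swift
if let presetURL = Bundle.main.url(forResource: "violin", withExtension: "aupreset") {
    do {
        try sampler.loadInstrument(at: presetURL)
    } catch {
        print("could not load the aupreset: \(error)")
    }
}
```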

Sampler from sound files

Table of Contents

You can have the sampler load a directory of audio files. If the files are in Core Audio Format (caf), you can embed range metadata in each file. A simpler method is to name the files with the root pitch at the end of the basename. So, violinC4.wav would map to C4, or middle C.
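A sketch, with made-up file names:

```swift
// The pitch at the end of each basename becomes that file's root key.
let names = ["violinC4", "violinD4", "violinE4"]
let urls = names.compactMap { Bundle.main.url(forResource: $0, withExtension: "wav") }
do {
    try sampler.loadAudioFiles(at: urls)
} catch {
    print("could not load the audio files: \(error)")
}
```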

Multiple voices

Table of Contents

Currently, the sampler is the only subclass of AVAudioUnitMIDIInstrument. There is no equivalent to the multitimbral kAudioUnitSubType_DLSSynth or kAudioUnitSubType_MIDISynth audio units.

What you can do is attach multiple AVAudioUnitSampler instances to the engine.
Something like this:
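With two samplers (the names here are arbitrary), both feeding the engine's main mixer:

```swift
let marimbaSampler = AVAudioUnitSampler()
let violinSampler = AVAudioUnitSampler()

engine.attach(marimbaSampler)
engine.attach(violinSampler)
engine.connect(marimbaSampler, to: engine.mainMixerNode, format: nil)
engine.connect(violinSampler, to: engine.mainMixerNode, format: nil)
// Then load a different patch, preset, or SoundFont program into each sampler.
```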

But what about using a sequencer with that and having the individual tracks use different timbres?
You’d have to create a custom subclass of AVAudioUnitMIDIInstrument, perhaps configured as a kAudioUnitSubType_DLSSynth or kAudioUnitSubType_MIDISynth, which are multi-timbral audio units.

What a coincidence! My next blog post is about creating a multi-timbral AVAudioUnitMIDIInstrument using kAudioUnitSubType_MIDISynth.

Or you can just use AVMIDIPlayer which uses a sound font and reads a MIDI file.

Summary

Table of Contents

The AVAudioUnitSampler is useful, but needs improvement – especially when used with AVAudioSequencer.

Resources

Table of Contents


Creating an aupreset


Using AU Lab to create an aupreset

Just fire up AU Lab. The UI is totally intuitive, amirite?


Introduction

Table of Contents

Here are the steps to create an aupreset that consists of several audio files. We will set which MIDI key will trigger the individual files.

  • Fire up AU Lab


  • The download link is under resources if you don’t have it.
    Launch it.

  • Create a new document


  • Choose factory configuration “Stereo Out”
    Set audio input device to None

  • Add Instrument

  • Choose “Add Audio Unit Instrument” from the “Edit” menu.
    Set the instrument type to Apple->AUSampler.
    Leave the MIDI Input Source to Any controller.

  • Changing the Default Instrument

  • You should now see the keyboard. Press some keys and you’ll hear the default sine wave.

    Now you need to replace the sine wave with your sound file(s).
    There are 3 icons under the keyboard on the right. Press the rightmost icon that looks like a keyboard to bring up the Zone and Layers editor.
    Under Layer 1 you should see “Sine 440 Built-In” for the samples.
    On the bottom left, under the Zone and Layers tree control, you should see a + and – button.


    With the Sine wave selected, press the + button to add your sound file.
    When you press the keys on the keyboard now, you will hear the sine wave and your sound file.
    Select the Sine wave and press the – button to delete it.

  • Key Range

  • You set the key that will trigger your sound file as-is by setting the Root. Set it to C4, and when you press C4 on your keyboard, your sound will play. Play C5 and it will be resampled an octave higher. Maybe you want this, maybe not. That’s why you set the range and root to something that is acceptable to you.

    If you want to create a “drum machine”, where each key is a different drum patch, then you set the root to the key you’d like, but also the range to be that key too. So, for C4, the range is C4-C4 and the root is C4. You will hear your patch only when C4 is pressed.

  • Save Preset


  • There are 4 combo boxes at the top of the window. The third one, labeled Untitled by default, is how you save your preset. Press it and choose Save Preset As… from the popup. Type a name, and choose User among the radio buttons.


    By choosing User, your preset file will be saved to ~/Library/Audio/Presets/Apple/AUSampler/
    The aupreset file is just a plist. Go ahead and look at it. Check out the file paths for your samples.

    So, how does this work on iOS when those paths don’t exist?

  • File Paths

  • According to Tech Note TN2283, the AUSampler will use these rules to resolve each path:

    • If the audio file is found at the original path, it is loaded.
    • If the audio file is NOT found, the AUSampler looks to see if a path includes a portion matching “/Sounds/”, “/Sampler Files/” or “/Apple Loops/” in that order.
    • If the path DOES NOT include one of the listed sub-paths, an error is returned.
    • If the path DOES include one of the listed sub-paths, the portion of the path preceding the sub-path is removed and the following directory location constants are substituted in the following order:

    Bundle Directory
    NSLibraryDirectory (NOTE: Only on OS X)
    NSDocumentDirectory
    NSDownloadsDirectory

    In an iOS application let’s say the original path in the aupreset is ~/Library/Audio/Sounds/bang.caf.
    The AUSampler would then search for the audio file in the following places:

    <Bundle_Directory>/Sounds/bang.caf
    <NSDocumentDirectory>/Sounds/bang.caf
    <NSDownloadsDirectory>/Sounds/bang.caf

    tl;dr Create a Sounds directory and place your samples there.


Summary

Table of Contents

Add sample files to an instrument in AU Lab. One of the things you can do is set the range and root pitch.


Resources

Table of Contents

AU Lab download
currently version 2.3 from 2012

WWDC 2011 video viewable in Safari only.


Swift 2: AVFoundation to play audio or MIDI


Swift AVFoundation

There are many ways to play sound in iOS. Core Audio has been around for a while and it is very powerful. It is a C API, so using it from Objective-C and Swift is possible, but awkward. Apple has been moving towards a higher level API with AVFoundation. Here I will summarize how to use AVFoundation for several common audio tasks.

N.B. Some of these examples use new capabilities of iOS 8.

This is a newer version of this Swift 1 blog post.

Playing an Audio file

Let’s start by loading an audio file with an AVAudioPlayer instance. There are several audio formats that the player will grok. I had trouble with a few MP3 files that played in iTunes or VLC, but caused a cryptic exception in the player. So, check your source audio files first.

If you want other formats, your Mac has a converter named afconvert. See the man page.

Let’s go step by step.

Get the file URL.

Create the player. You will need to make the player an instance variable. If you just use a local variable, it will be deallocated before you hear anything!

You can provide the player a hint for how to parse the audio data. There are several constants for file type UTIs you can use. For our MP3 file, we’ll use AVFileTypeMPEGLayer3.
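A sketch of those first steps; the class and file names are placeholders, and the hint constant is spelled AVFileType.mp3 in current Swift:

```swift
import AVFoundation

class Sound: NSObject {

    // An instance variable: a local player would be deallocated before you hear anything.
    var player: AVAudioPlayer?

    func setupPlayer() {
        guard let url = Bundle.main.url(forResource: "modem", withExtension: "mp3") else {
            print("could not find the audio file")
            return
        }
        do {
            player = try AVAudioPlayer(contentsOf: url,
                                       fileTypeHint: AVFileType.mp3.rawValue)
        } catch {
            print("could not create the player: \(error)")
        }
    }
}
```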

Now configure the player. prepareToPlay() “pre-rolls” the audio file to reduce start up delays when you finally call play().
You can set the player’s delegate to track status.

To set the delegate you have to make a class implement the player delegate protocol. My class has the clever name “Sound”. The delegate protocol requires the NSObjectProtocol, so Sound is a subclass of NSObject.

Finally, the transport controls that can be called from an action.
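Pulling the configuration, the delegate, and the transport together; a sketch that continues the Sound class above:

```swift
extension Sound: AVAudioPlayerDelegate {

    func configurePlayer() {
        player?.delegate = self
        player?.prepareToPlay()   // "pre-rolls" to reduce the startup delay
    }

    // Transport controls, callable from button actions.
    func play()  { _ = player?.play() }
    func pause() { player?.pause() }
    func stop()  {
        player?.stop()
        player?.currentTime = 0
    }

    // One of the delegate callbacks, to track status.
    func audioPlayerDidFinishPlaying(_ player: AVAudioPlayer, successfully flag: Bool) {
        print("finished playing, success: \(flag)")
    }
}
```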

Audio Session

The Audio Session singleton is an intermediary between your app and the media daemon. Your app and all other apps (should) make requests to the shared session. Since we are playing an audio file, we should tell the session that this is our intention by requesting that its category be AVAudioSessionCategoryPlayback, and then make the session active. You should do this in the code above right before you call play() on the player.

Setting a session for playback.
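Roughly, with the current category spellings:

```swift
let session = AVAudioSession.sharedInstance()
do {
    try session.setCategory(.playback, mode: .default, options: [])
    try session.setActive(true)
} catch {
    print("could not activate the audio session: \(error)")
}
```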

Go to Table of Contents

Playing a MIDI file

You use AVMIDIPlayer to play standard MIDI files. Loading the player is similar to loading the AVAudioPlayer. You need to load a soundbank from a Soundfont or DLS file. The player also has a pre-roll prepareToPlay() function.

I’m not interested in copyright infringement, so I have not included either a DLS or SF2 file. So do a web search for a GM SoundFont2 file. They are loaded in the same manner. I’ve tried the MuseScore SoundFont and it sounds OK. There is probably a General MIDI DLS on your OSX system already: /System/Library/Components/CoreAudio.component/Contents/Resources/gs_instruments.dls. Copy this to the project bundle if you want to try it.
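A sketch; the MIDI file and sound bank names are assumptions (use whatever you copied into the bundle):

```swift
var midiPlayer: AVMIDIPlayer?

func setupMIDIPlayer() {
    guard let midiURL = Bundle.main.url(forResource: "sequence", withExtension: "mid"),
          let bankURL = Bundle.main.url(forResource: "FluidR3_GM", withExtension: "sf2") else {
        return
    }
    do {
        midiPlayer = try AVMIDIPlayer(contentsOf: midiURL, soundBankURL: bankURL)
        midiPlayer?.prepareToPlay()
    } catch {
        print("could not create the MIDI player: \(error)")
    }
}
```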

Go to Table of Contents

Audio Engine

iOS 8 introduces a new audio engine which seems to be the successor to Core Audio’s AUGraph and friends. See my article on using these classes in Swift.

The new AVAudioEngine class is the analog to AUGraph. You create AVAudioNode instances and attach them to the engine. Then you start the engine to initiate data flow.

Here is an engine that has a player node attached to it. The player node is attached to the engine’s mixer. These are instance variables.

Then you need to start the engine.
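A sketch of the engine, the player node, and the start-up:

```swift
// Keep these as instance variables.
let engine = AVAudioEngine()
let playerNode = AVAudioPlayerNode()

func setupEngine() {
    engine.attach(playerNode)
    engine.connect(playerNode, to: engine.mainMixerNode, format: nil)
    do {
        try engine.start()
    } catch {
        print("could not start the engine: \(error)")
    }
}
```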

Cool. Silence.

Let’s give it something to play. It can be an audio file, or as we’ll see, a MIDI file or a computed buffer.
In this example we create an AVAudioFile instance from an MP3 file, and tell the playerNode to play it.

First, load an audio file. Or load an audio file into a buffer.

Now we hand the buffer to the player node by “scheduling” it, then playing it.
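Putting the file, the buffer, and the scheduling together; a sketch with a made-up file name (the .loops option simply loops the buffer):

```swift
func playFile(named name: String) {
    guard let url = Bundle.main.url(forResource: name, withExtension: "mp3") else { return }
    do {
        let file = try AVAudioFile(forReading: url)

        // Read the whole file into a PCM buffer...
        let frameCount = AVAudioFrameCount(file.length)
        guard let buffer = AVAudioPCMBuffer(pcmFormat: file.processingFormat,
                                            frameCapacity: frameCount) else { return }
        try file.read(into: buffer)

        // ...then hand the buffer to the player node by "scheduling" it, and play.
        playerNode.scheduleBuffer(buffer, at: nil, options: .loops, completionHandler: nil)
        playerNode.play()
    } catch {
        print("could not play \(name): \(error)")
    }
}
```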

There are quite a few variations on scheduleBuffer. Have fun trying them out.

Go to Table of Contents

Playing MIDI Notes

How about triggering MIDI notes/events based on UI events? You need an instance of AVAudioUnitMIDIInstrument among your nodes. There is one concrete subclass named AVAudioUnitSampler. Create a sampler and attach it to the engine.

In your UI’s action function, load the appropriate instrument into the sampler. The program parameter is a General MIDI instrument number. You might want to set up constants. Soundbanks have banks of sound. You need to specify which bank to use with the bankMSB and bankLSB. I use Core Audio constants here to choose the “melodic” bank and not the “percussion” bank.

Then send a MIDI program change by calling our load function. After that, you can send startNote and stopNote messages to the sampler. You need to match the parameters for each start and stop message.
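A sketch of that flow, inside the view controller that owns the sampler from the previous section; the program and note numbers are arbitrary, and bankURL is assumed to point at a GM SoundFont in the bundle:

```swift
func loadPatch(_ program: UInt8) {
    do {
        try sampler.loadSoundBankInstrument(
            at: bankURL,
            program: program,
            bankMSB: UInt8(kAUSampler_DefaultMelodicBankMSB),  // melodic bank, not percussion
            bankLSB: UInt8(kAUSampler_DefaultBankLSB))
    } catch {
        print("could not load program \(program): \(error)")
    }
}

func noteOn(_ note: UInt8) {
    loadPatch(46)   // for example, a GM harp
    sampler.startNote(note, withVelocity: 64, onChannel: 0)
}

func noteOff(_ note: UInt8) {
    // Match the note number and channel used in noteOn.
    sampler.stopNote(note, onChannel: 0)
}
```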

Go to Table of Contents

Summary

This is a good start I hope. There are other things I’ll cover soon, such as generating and processing the audio buffer data.

Resources

Go to Table of Contents


Java 9 jshell OSX bug workaround


Java 9 jshell

You’ve downloaded the current build of Java 9, and perhaps Kulla. You try to run jshell and blammo. Stack dump.


Introduction

Table of Contents

So, you’ve installed Java 9 on your Mac. Maybe one of the Early Access builds. I’m playing around with modules, so I’m using the Jigsaw version.

Let’s check.

Let’s run jshell.

D’oh!

Ok, let’s try it with a pre-built kulla.jar from the AdoptOpenJDK Cloudbees instance.

Same nonsense.

I even downloaded the kulla sources and built them. No difference.


The Workaround

Table of Contents

Add your hostname to /etc/hosts.

(My hostname is rockhopper – the penguin of course, not the bike).


Summary

Table of Contents

A simple /etc/hosts one liner fixes the problem.

Yay! Now I can use Java as I’ve used LISP since the 80s!


Resources

Table of Contents


Multiple Java VMs on OSX


Multiple Java VM on OSX


Introduction

Table of Contents

Let’s start by reviewing the baroque installation of Java on OSX.

Try these (highlighted) commands in a terminal.

So, when you install Java, /usr/bin/java is the vm command. It’s a symbolic link to the “current version”.
In my case Current is a symlink to a directory named A.

A lot of “legacy” links in there. Right now, “Current” is the one we care about.

Let’s see what version your current default vm happens to be:

If you simply run java -version (no path), you get the same output.

In my case, I installed the OpenJDK preview of Java 9.

But wait. In the directory listing for /System/Library/Frameworks/JavaVM.framework/Versions there was no Java 9. Where are the 1.7+ VMs?

You can use /usr/libexec/java_home to find the names of your installed VMs.

So, the “newer” i.e. current VMs are in /Library/Java/JavaVirtualMachines.
That last line shows your current “default” VM. Run java_home with no arguments to verify.

OK. So what?
Let’s see what java_home with the -v (lower case v this time) flag and a VM version gives us.
(use the vm name in /Library/Java/JavaVirtualMachines without the jdk prefix)

So, this gives us the full path to the installed VMs.

What about Java 9? Well, OpenJDK’s Java 9 uses a different naming convention, so you simply use 9 as the version.

This is a way to set your environment variables in your shell login config file (e.g. ~/.bash_profile; NB, not .bashrc).

Of course, if you want to change your VM “on the fly”, you’ll have to remove the old VM path from PATH. Unfortunately, even with Bash you have to engage in some sed nonsense. If you know an easier way (than sed), let me know.

So, cool. In the terminal, you get the VM you want. What about things that aren’t run from the terminal, like an IDE? If you run Eclipse with the current snapshot of Java 9, it will crash. Setting your environment variables in .bash_profile does not affect these launches.


Eclipse

Table of Contents

If you have Java 9 installed, you won’t be able to run Eclipse. The solution is to edit Eclipse’s config file to use a specific VM. Here is mine. I added a -vm line followed by the path to a Java 1.8 VM.


Global variables

Table of Contents

How do you set environment variables globally on OSX?

The current way (in El Capitan) is to create a plist in ~/Library/LaunchAgents/ that will be read by launchctl. In older versions of OSX, you edited /etc/launchd.conf.

Eclipse seems to ignore these variables though. The eclipse.ini trick works.


Summary

Table of Contents

Java on OSX is a bit of a mess.


Resources

Table of Contents


Multiselect UITableView with limited selections


Multiselect UITableView with limited selections

A simple example of creating a multi-select UITableView that allows only a limited number of selected cells.


Introduction

Table of Contents

I wanted a multi-select table view but limit the number of selected cells. The “answers” on StackOverflow were quite awful – which is true > 50% of the time.

So here is a simple working example.


How to

Table of Contents

Add a table view to your storyboard. (I know, well duh). In the Attributes inspector, choose Multiple Selection.
Set your view controller to be the table’s data source and delegate. You know, the usual.

In your view controller, do the usual datasource setup.

For the limiting functionality, you need to do this in the delegate.

In my view controller, I added a variable: the limit. Then, in the delegate’s willSelectRowAtIndexPath func, I compared the current number of selected cells (tableView.indexPathsForSelectedRows) to the limit, returning nil if the limit has already been reached.
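A sketch of the whole view controller; in current Swift the delegate method is spelled tableView(_:willSelectRowAt:), and the cell identifier and data here are placeholders:

```swift
import UIKit

class LimitedSelectionViewController: UIViewController, UITableViewDataSource, UITableViewDelegate {

    let items = ["One", "Two", "Three", "Four", "Five"]

    /// How many rows may be selected at once.
    var selectionLimit = 3

    func tableView(_ tableView: UITableView, numberOfRowsInSection section: Int) -> Int {
        return items.count
    }

    func tableView(_ tableView: UITableView, cellForRowAt indexPath: IndexPath) -> UITableViewCell {
        let cell = tableView.dequeueReusableCell(withIdentifier: "Cell", for: indexPath)
        cell.textLabel?.text = items[indexPath.row]
        return cell
    }

    // The limiting happens here: returning nil refuses the selection.
    func tableView(_ tableView: UITableView, willSelectRowAt indexPath: IndexPath) -> IndexPath? {
        if let selected = tableView.indexPathsForSelectedRows, selected.count >= selectionLimit {
            return nil
        }
        return indexPath
    }
}
```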

That’s it. None of the contortions you see on SO.

In the other delegate methods, I just add eye candy.

Limited TableView Selection

And when you go over the limit.

Limited TableView Selection Alert


Summary

Table of Contents

All the real work is done in the delegate’s willSelectRowAtIndexPath func.


Resources

Table of Contents


tvOS Framework Target bug


There be bugs here

Getting past the simple project stage with your tvOS app?

Maybe you’re putting reusable/common code in frameworks and then linking to them from your tv project.

What could go wrong?

Introduction

 

Table of Contents

So I created an Xcode workspace. Then under Framework & Library, I created and added a TV Framework project. I created a class in the framework then built it.

Then I created a TV app. In this project I set “Linked Frameworks and Libraries” to use my framework. I also added the framework to “Embedded Binaries”. If you don’t do this, you will get a link failure when you run on the actual device.


In the app I imported the framework and referenced the (public of course) framework class.

It works!

Groovy.

Here’s the Project Navigator. Total grooviosity.


The Problem

 

Table of Contents

So, what am I complaining about?

I have another pile of code that works on iOS and Cocoa. In this project (a “Cocoa Touch Framework”), I simply have two targets: one for iOS and one for Cocoa. In the code I use #if os(iOS) or os(OSX). You can check for tvOS too.

Like this:
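A minimal sketch; the typealias is just an illustration:

```swift
#if os(iOS) || os(tvOS)
    import UIKit
    public typealias PlatformColor = UIColor
#elseif os(OSX)
    import AppKit
    public typealias PlatformColor = NSColor
#endif
```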

I’m sure you’ve done that too. That works just fine.

So, what’s the problem?

I’m getting there.

In your multi-target project, add a TV framework target. If it works for Cocoa and iOS, it should work for tvOS too, right?

Well, sort of. It works in the simulator but not on the device.

Here is the same setup as above in the app.

This is what the Project Navigator looks like:


Lots of red.

Let’s see what we get when you select the framework in the app.

Huh.

That’s not where it should be. If you look at the other framework, it’s in the Derived Data directory. Actually, that’s where it really is (when I look in the Derived Data directory, it’s there), but Xcode thinks it’s in the project directory. (By the way, the Embedded Binaries section shows the same wrong path.)

Would you know how to fix this?

Summary

 

Table of Contents

TV framework targets added to a project don’t link correctly in your app when run on the device.

Resources

 

Table of Contents



tvOS: playing audio


Playing MIDI on tvOS

According to Apple’s App Programming Guide for tvOS, AVFoundation is supported in tvOS.

Groovy!

Let’s use AVMIDIPlayer to play a file!

Set up

Table of Contents

AVMIDIPlayer’s init func wants two URLs: one to a MIDI file and another to a SoundFont or DLS file.

I created a standard MIDI file in Sibelius. The SoundFont I used is the one distributed with MuseScore.

Here is how I created the player.

And then I played it by calling the cunningly named play() function. I use a completion handler to reset the playback position.
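A sketch of both steps, inside a view controller, say; the file names are assumptions matching the description above:

```swift
var midiPlayer: AVMIDIPlayer?

func createPlayer() {
    guard let midiURL = Bundle.main.url(forResource: "sibelius", withExtension: "mid"),
          let bankURL = Bundle.main.url(forResource: "FluidR3_GM", withExtension: "sf2") else {
        return
    }
    do {
        midiPlayer = try AVMIDIPlayer(contentsOf: midiURL, soundBankURL: bankURL)
        midiPlayer?.prepareToPlay()
    } catch {
        print("could not create the MIDI player: \(error)")
    }
}

func play() {
    midiPlayer?.play {
        // Completion: rewind so the next play() starts from the top.
        self.midiPlayer?.currentPosition = 0
    }
}
```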

This is pretty much what I do in iOS and Cocoa and it works.


The result

Table of Contents

In the tvOS simulator, it works almost OK: you hear sine waves, just like in the iOS simulator. When you run it on an actual device, you should hear the real instruments.

In tvOS, not so much. I hear two notes with a loaded instrument, and then blammo.
Crash.

Removing the dispatch_async call yields the same result.

Here is the love letter from the debugger.

By the way, I also tried MusicPlayer and Core Audio since the guide says AudioToolbox and AudioUnit are supported too. Same result. The stack trace shows that error occurs deep in the bowels of AudioToolbox.


Summary

Table of Contents

Well, crap. It would be nice to be able to play MIDI files; I’d like to port my ear training app to tvOS. But without playback, it would be reaaalllly hard to recognize and identify the intervals, chords, and scales playing.


Update

Table of Contents

I spent far too much time on this. I looked at my code over and over again. I was certain that I was doing it correctly (but there is always that doubt every developer deals with), so why was it crashing?

It wasn’t my code.

It was the MuseScore sound font. Out of desperation, I tried another sound font and the sun started shining, the birds sang, and my cat’s litter box was magically clean. Oh, and it played on the AppleTV.


Resources

Table of Contents

App Programming Guide for tvOS

AVFoundation

MuseScore SoundFont page

Github project

If you need a cable to connect your MacBook to your AppleTV, I bought this one.


Apple TV tvOS hello world app in Swift


Apple TV TVML tvOS hello world app in Swift

Introduction


Table of Contents
So, you saw the shiny new AppleTV demo on the Apple Live Event. Finally we can write apps for the beast! Like most of you, I downloaded the Xcode 7.1 beta to jump in. Hey, there’s a project template! Let’s try that. Oh, that’s it? Looks like any other iOS app. Where’s the TV code? Sigh. I guess I’ll have to RTFM.

There are two types of app. One relies on “templates” written in XML called TVML (Television markup language). The other “low level” way is to write custom apps in Swift (or objc). Here is a list of iOS APIs that did or did not make it to tvOS. (Interesting that AudioToolbox made it but CoreMIDI didn’t – and no mention of Core Audio). In this post, I’ll talk about the TVML approach.

Apple has provided us with their usual almost-adequate tvOS documentation. After you get an idea of how it works, you get a “hello world” type page named Creating a Client-Server App.

Cool. The sample code is not downloadable, so you need to scrape the code off the page and fix the problems.
Not cool.

Here’s my attempt at being a bit more helpful.

Getting started


Table of Contents

Go ahead and read Creating a Client-Server App. This will give you a good conceptual overview. I’ll give you action items below.

Server


Table of Contents

Let’s start with the server side. For development, you will need to serve JavaScript and TV Markup (TVML) files. There are various ways to do this. You can use Python, but I don’t like snakes or syntactic whitespace (both will bite you). Since you’re hip and happenin’ you probably have Node.js installed. So, let’s bask in your grooviosity and use Node to serve your files.

You will need to install http-server via npm.

Easy.

Then to serve the files in a subdirectory named “public” you simply type

Or if your files are in a different directory

So, what’s in that directory?

Go ahead and create a tvOS “Single View Application” project. Now drop to a terminal and create a directory named public under your project. Then import that directory to your project. Wouldn’t it be nice if you could do this directly in Xcode?

Now, you need two files (to start) in this directory. First, the JavaScript. I created a file named tv.js. Name it what you’d like. Here is Apple’s code:

Don’t bother copying it, I’ll point you to a Github repo later.

The other file you need is a TVML file. Note that in the onLaunch function I referenced a file named yo.tvml. Go ahead and create it in the public directory.

This is Apple’s minimal example. In my next blog post, I’ll go into more details, but if you want more now, read about Templates.

In the terminal, start the http-server. In another terminal (or tab if you use iTerm) you can test it with curl. Or use your browser.

Yay. You’re serving if you see the file’s contents.

Swift


Table of Contents

Make these modifications to your app delegate. This is how your app finds the JavaScript you’re serving. If you named your JavaScript file differently, modify the name here.
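A sketch of the app delegate; the port and the tv.js name assume the http-server setup above:

```swift
import UIKit
import TVMLKit

@UIApplicationMain
class AppDelegate: UIResponder, UIApplicationDelegate, TVApplicationControllerDelegate {

    var window: UIWindow?
    var appController: TVApplicationController?

    static let tvBaseURL = "http://localhost:8080/"
    static let tvBootURL = "\(AppDelegate.tvBaseURL)tv.js"

    func application(_ application: UIApplication,
                     didFinishLaunchingWithOptions launchOptions: [UIApplication.LaunchOptionsKey: Any]?) -> Bool {
        window = UIWindow(frame: UIScreen.main.bounds)

        let appControllerContext = TVApplicationControllerContext()
        if let javaScriptURL = URL(string: AppDelegate.tvBootURL) {
            appControllerContext.javaScriptApplicationURL = javaScriptURL
        }
        // Pass the launch options through to the JavaScript app.
        launchOptions?.forEach { key, value in
            appControllerContext.launchOptions[key.rawValue] = value
        }

        appController = TVApplicationController(context: appControllerContext,
                                                window: window,
                                                delegate: self)
        return true
    }
}
```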

That’s it for your Swift code.

Security


Table of Contents

If you run your app now, it will crash with a security problem. Apple says to read the App Transport Security Technote.

Here’s the tl;dr.

Open your Info.plist as source code and add this key.

While you’re in Info.plist, delete the storyboard reference. You won’t be using a storyboard. You can delete the ViewController.swift file too (as specified in Apple’s documentation).

Run it now.

That complicated TVML file we served looks like this:

hello tvOS

Summary

 

Table of Contents

tvOS TVML apps use a client-server architecture. You need to serve a JavaScript file and TVML files from a web server. Your Swift code will reference this JavaScript file. Most of your UI will be written in TVML.

Update


Table of Contents

Since I wrote this, Apple has published some sample code that you can download as a project. Yay.

Here is their TVML example. As usual it’s a dog’s dinner rather than a tutorial.

Resources


Table of Contents
