Swift fail: MIDIClientCreate

There is a problem with calling Core MIDI’s MIDIClientCreate function from Swift.

Introduction

Let’s start with a simple call to Core MIDI’s MIDIClientCreate function. You need the MIDI client in order to create MIDI input and output ports.

func midi() {
    var status = OSStatus(noErr)
    var s:CFString = "MyClient"
 
    var client = MIDIClientRef()
    status = MIDIClientCreate(s,
        MIDINotifyProc( COpaquePointer( [ MyMIDINotifyProc ] ) ),
        nil,
        &client)
    if status == OSStatus(noErr) {
        println("created client")
    } else {
        println("error creating client : \(status)")
    }
// etc
}
 
func MyMIDINotifyProc (np:UnsafePointer<MIDINotification>, refCon:UnsafeMutablePointer<Void>) {
        var notification = np.memory
        println("MIDI Notify, messageId= \(notification.messageID)")
//etc
}

Works great!

Problem

So, what’s the problem?

The above code compiled just fine when the scheme was an iPhone 6. Then I plugged in my iPhone 4s and the problem reared its ugly head. If you don’t have an older iOS device handy, just select a 32-bit simulator scheme in Xcode.

To verify that this was the problem, I tried checking the architecture and then calling separate init methods. The initial code for both was what you see in the first example above.

 
// The iPhone 4S has a 32 bit 1 GHz dual-core Apple A5 processor and 512 MB of RAM
// The iPhone 5S has a 64 bit 1.3 GHz dual-core Apple A7 processor and 1 GB of RAM
#if arch(arm64) || arch(x86_64) // >= iPhone 5
    init64()
#else // < iPhone 5
    init32()
#endif

Xcode will give you this love letter for 32-bit devices. It refers to the line where you create the client variable (var client = MIDIClientRef()).

'MIDIClientRef' cannot be constructed because it has no accessible initializers

Ok, just do this then.

var client:MIDIClientRef

Nope.

'MIDIClientRef' is not identical to 'Unmanaged?'

Ok, then

var client : Unmanaged<MIDIClientRef>? = nil

Works!

Go back to 64 bits.
Problem.

Type 'MIDIClientRef' does not conform to protocol 'AnyObject'

[expletive deleted]

Here is how the definitions in CoreMIDI/MIDIServices.h are imported into Swift:

typealias MIDIObjectRef = UnsafeMutablePointer<Void>
typealias MIDIClientRef = MIDIObjectRef

Well, actually in Objective-C:

#if __LP64__
 
typedef UInt32 MIDIObjectRef;
typedef MIDIObjectRef MIDIClientRef;
typedef MIDIObjectRef MIDIPortRef;
typedef MIDIObjectRef MIDIDeviceRef;
typedef MIDIObjectRef MIDIEntityRef;
typedef MIDIObjectRef MIDIEndpointRef;
 
#else
 
typedef void * MIDIObjectRef;
typedef struct OpaqueMIDIClient *		MIDIClientRef;
typedef struct OpaqueMIDIPort *			MIDIPortRef;
typedef struct OpaqueMIDIDevice *		MIDIDeviceRef;
typedef struct OpaqueMIDIEntity *		MIDIEntityRef;
typedef struct OpaqueMIDIEndpoint *		MIDIEndpointRef;
#endif
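
So, judging from that header, a 64-bit build sees MIDIClientRef as a plain UInt32, while a 32-bit build sees an opaque pointer type. The only way I found to keep both builds compiling is to guard the declaration (and the MIDIClientCreate call itself, since the pointer types differ) with the arch check from above. A sketch, not a fix:

#if arch(arm64) || arch(x86_64)
    var client = MIDIClientRef()                 // 64-bit: a UInt32, so this constructs fine
#else
    var client : Unmanaged<MIDIClientRef>? = nil // 32-bit: an opaque ref, so it has to be Unmanaged
#endif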

Suggestions?

Summary

You can’t create a MIDI client on older (32-bit) iOS devices using Swift.
If you have a solution, I’d love to hear it!

In the meantime, I’ll create the Core MIDI code (i.e. creating the client and ports) in Objective-C and call that from my Swift code.

Book Review: iOS 8 for Programmers: An App-Driven Approach with Swift

There is now a tidal wave of books being released on Apple’s new Swift programming language. Here, I’m going to review iOS 8 for Programmers: An App-Driven Approach with Swift (3rd Edition) (Deitel Developer Series) which was just released. For once they did not hire Yoda to write their book title as they did with Java How To Program. But they did work a colon into the title.

I have a hardcopy of the book, so I cannot speak about the quality of the ebook versions. The same content of course, but I know from producing my own epub books that the formatting can be tedious and error prone.

Readers can download a zip of the code examples from the Deitel website. Unfortunately, you have to “register” on their site to get the download link as if we are still living in 1995.

First off, the audience for the book. It is aimed at experienced programmers, especially those with experience in an object oriented language. If you are just starting out, this is probably not the book for you. If that is the case, I’d suggest Swift for Absolute Beginners which is another brand new book.

As the title suggests, this is not a Swift tutorial. Instead, you are introduced to Swift’s features by writing several toy apps. That’s what “app-driven approach” means. I really hate books and course materials that are simple laundry lists of features. In fact, over 90% of the live courses I’ve taught over the past 25 years ignored the printed course materials (unless it was one I authored :)). Laundry lists are easy on the author but hard on the learner. This app-driven approach gets closer to enabling real learning. If the learner has a question in their head while working through the material, and then sees the answer a few pages later, that is excellent. Motivational seeding is what I call that. So, you will get a decent foundation in Swift, but you will not see any advanced topics. The things that I’ve banged my head against the wall with, such as interfacing with legacy APIs like Core Audio or Core MIDI, are not touched upon. I don’t mean those APIs in particular, but interfacing with any of the legacy APIs. As is common with most iOS development books, unit testing is not covered.

The Apps

These are the Apps that the learner will build:

  • Welcome App
  • Tip Calculator App
  • Twitter Searches App
  • Flag Quiz App
  • Cannon Game App
  • Doodlz App
  • Address Book App

Each App introduces a new iOS and/or Swift feature. For example, the Cannon Game touches on Sprite Kit and the Address Book uses Core Data.

I like the format of each chapter. Each begins with a list of objectives followed by an outline. The page header for the right-hand page is an outline title. I wonder if the ebook formats the outline items as links. This seems like a small thing, but after you’ve gone through a book, you might need to find something, and this helps a lot. It also sets your expectations for what is going to be accomplished in the chapter. Not surprisingly, the end of each chapter has a “wrap up” telling you what they just told you. That is also useful for answering “In what chapter was that thing on X covered?”

Sometimes, the author is a bit lazy. For example, section 4.3.13 talks about external parameter names. The paradigm is given but no code example. Thanks for the Amo, Amas, Amat, but where is the example sentence? Amo libri huius? Also, the Alert controller code on page 148 has a memory leak when you access the text fields in that manner. The Twitter app sidesteps Twitter’s RESTful API and uses a WebView instead. I guess NSURLSession would be too complicated or having to authenticate would be too much trouble.

There are a decent number of technologies touched upon. iCloud, Sprite Kit, Social Framework, Core Data, etc.

The book ends with a chapter on the business end and the App Store. Most developers will tell you that the coding is easier than getting it onto the App Store. Useful information is provided here.

Summary

If you are an experienced programmer, this is a good book for getting a decent foundation in iOS development and the Swift language.
The softcover book is around 40 bucks.

You can get more information on the InformIT site.

iOS 8 Bluetooth MIDI LE build tip

iOS Bluetooth MIDI LE

Introduction

iOS 8 and OS X Yosemite now support sending and receiving MIDI data over Bluetooth Low Energy connections on any iOS device or Mac that has native Bluetooth Low Energy support.

I’m reminding myself here of a simple problem I had that wasted my time.

The Bluetooth classes

So, I’m playing around with the new Bluetooth LE MIDI capabilities. In my build settings I include the CoreAudioKit framework in order to get the new Core Audio Bluetooth MIDI (CABTMIDI) controllers, CABTMIDILocalPeripheralViewController and CABTMIDICentralViewController.

You also get the Inter-App audio classes CAInterAppAudioSwitcherView and CAInterAppAudioTransportView with CoreAudioKit, but I’m not using them here.

Here is a very simple view controller example.

import UIKit
import CoreAudioKit 
import CoreMIDI
 
class ViewController: UIViewController {
 
    var localPeripheralViewController:CABTMIDILocalPeripheralViewController?
    var centralViewController:CABTMIDICentralViewController?
 
    override func viewDidLoad() {
        super.viewDidLoad()
        localPeripheralViewController = CABTMIDILocalPeripheralViewController()
        centralViewController = CABTMIDICentralViewController()
    }
 
    @IBAction func someAction(sender: AnyObject) {
        self.navigationController?.pushViewController(localPeripheralViewController!, animated: true)
    }
 
    @IBAction func midiCentral(sender: AnyObject) {
         self.navigationController?.pushViewController(centralViewController!, animated: true)
    }
}

I played around with it, then had to do other work. I came back to it a week later, and it wouldn’t even compile. I didn’t change anything (e.g. no Xcode updates). Yes, CoreAudioKit is indeed included, but the error was on the “import CoreAudioKit” line. The compiler didn’t know what that was, even though the framework is there and I can even see the headers in the Xcode UI tree under CoreAudioKit.framework.

It turns out that the build scheme needs to have a device selected, not any of the simulator choices. This is true even if you are only building and not running. The device does not need to be attached; you can just choose the first item, iOS Device. Then it will build.

D’Oh!

Apple even says so in a tech note (that I did not know existed). See the resources below.

Summary

Code that uses the Bluetooth LE MIDI classes will build only if a device scheme is selected.

Resources

Apple Tech note

WWDC 2014 session 501 – What’s new in Core Audio

BASH script to create a git bare repo

I can’t count how many times I’ve created a project in my IDE, then dropped to the terminal to create a bare git repository, then added that as a remote to my project. And then add/commit/push. So, I decided to make my life a bit easier by writing a small shell script to do all this nonsense. You might find it useful as is, or for parts you can copy and paste.
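
Here is a minimal sketch of such a script. The repo location and commit message are just placeholder assumptions; adjust to taste.

#!/usr/bin/env bash
# Sketch: create a bare repo, add it as a remote, then add/commit/push.
# Run it from the project directory. REPO_HOME is wherever you keep bare repos.

REPO_HOME="${REPO_HOME:-$HOME/gitrepos}"
PROJECT=$(basename "$PWD")
BARE="$REPO_HOME/$PROJECT.git"

mkdir -p "$REPO_HOME"
git init --bare "$BARE"

# initialize the working repo if the IDE didn't already
[ -d .git ] || git init

git remote add origin "$BARE"
git add -A
git commit -m "initial commit"
git push -u origin master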

BTW, if I’m the only one working on a project right now, I find that creating the bare repo on Dropbox is handy. This won’t work, of course, if multiple developers are pushing at the same time.

Unit testing async network calls in Swift

Asynchronous unit testing in Swift

You have probably written code with an NSURLSessionDataTask that notifies a delegate when the data is received. How do you write a unit test for that?

Introduction

Let’s stub out some typical code. Here is an API function that takes perhaps a REST endpoint and a delegate that receives a Thing instance. I use an NSURLSessionDataTask because I’m expecting, well, data (as JSON). I’m not showing the gory details of parsing the JSON since that’s not my point here. (BTW, it’s not very difficult to parse.) The idea is that a Thing is instantiated and the delegate is notified.

func getThing(url:String, delegate:ThingDelegate) {
    // set up the NSURLSession and request...
    let task : NSURLSessionDataTask = session.dataTaskWithRequest(request, completionHandler: {(data, response, error) in
        if let e = error {
            println("Error: \(e.localizedDescription)")
        }

        var jsonError:NSError?
        if let json = NSJSONSerialization.JSONObjectWithData(data, options: nil, error: &jsonError) as? NSDictionary {
            if let e = jsonError {
                println("Error parsing json: \(e.localizedDescription)")
            } else {
                // parse the JSON to instantiate a Thing...
                delegate.didReceiveWhatever(thing)
            }
        }
    })
    task.resume()
}
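
For reference, the delegate protocol assumed above would look something like this (Thing and didReceiveWhatever are just the placeholder names used in this post):

protocol ThingDelegate {
    // called once the JSON has been parsed into a Thing
    func didReceiveWhatever(thing:Thing)
}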

Testing

So, how do you write a unit test for this kind of code? The API call does not return anything to pass into XCTAssertTrue or siblings. Wouldn’t it be nice if you can make the network API call and wait – with a timeout of course – for a response?

Previously, you’d have to use semaphores, a spin loop, or something similar. Since this is such a common scenario, Apple gave us XCTestExpectation in Xcode 6. (Actually, it’s a category in XCTestCase+AsynchronousTesting.)

Here is a simple usage example. I have an instance variable of type XCTestExpectation because I need it in the delegate callback in addition to the test function. I simply instantiate it, make the network call, then call one of the new wait functions. In this case waitForExpectationsWithTimeout. When the delegate is notified, I fulfill the expectation. If you don’t, the test will fail after the timeout.

var expectation:XCTestExpectation?

func testExample() {
    expectation = self.expectationWithDescription("asynchronous request")
    Networkclass.getThing("http://api.things.com/someid", delegate: self)
    self.waitForExpectationsWithTimeout(10.0, handler:nil)
}

func didReceiveWhatever(thing:Thing) {
    expectation?.fulfill()
}

Summary

Simple huh? Take a look at the documentation for a few variations.

Swift and AVMIDIPlayer

How to play MIDI data via the AVFoundation AVMIDIPlayer.

Introduction

Previously, I wrote about attaching a low level core audio AUGraph to a MusicSequence to hear something besides sine waves when played via a MusicPlayer. Here, I’ll show you how to use the new higher level AVMIDIPlayer. You can even play a MusicSequence by sticking your elbow in your ear.

Playing a MIDI File

Preparing an AVMIDIPlayer to play a standard MIDI file with a SoundFont or DLS file is fairly straightforward. Get both NSURLs from your bundle, then pass them into the init function.

if let contents = NSBundle.mainBundle().URLForResource(gMajor, withExtension: "mid") {
    self.soundbank = NSBundle.mainBundle().URLForResource(soundFontMuseCoreName, withExtension: "sf2")
    if self.soundbank != nil {
        var error:NSError?
        self.mp = AVMIDIPlayer(contentsOfURL: contents, soundBankURL: soundbank!, error: &error)
        if(self.mp != nil) {
            mp!.prepareToPlay()
            setupSlider()
            // crashes if you set a completion handler
            mp!.play(nil)
        } else {
            if let e = error {
                println("Error \(e.localizedDescription)")
            }
        }
   }
}

Note that I’m passing nil to the play function. It expects a completion function. It will crash if you pass in either a function or a closure. My workaround is to pass nil.

var completion:AVMIDIPlayerCompletionHandler = {
    println("done")
}
mp!.play(completion)
 
// or even a function
func comp() -> Void {
}
mp!.play(comp)

Play your MIDI file in the simulator, and you’ll hear sine waves. Huh? A valid SoundFont was passed to the init function, and you hear sine waves? Yeah. After you spend a day verifying that your code is correct, install iOS 8 on your actual device and try it there. Yup, it works. Nice.

P.S. That slider thing is just some eye candy in the final project. A UISlider moves while playing.

Playing NSData from a file

AVMIDIPlayer has an init function that takes an NSData instance instead of a URL. So, as a simple first step, let’s try creating an NSData object from the URL.

if let contents = NSBundle.mainBundle().URLForResource(nightBaldMountain, withExtension: "mid") {
    self.soundbank = NSBundle.mainBundle().URLForResource(soundFontMuseCoreName, withExtension: "sf2")
    if self.soundbank != nil {
        var data = NSData(contentsOfURL: contents)
        var error:NSError?
        self.mp = AVMIDIPlayer(data:data, soundBankURL: soundbank!, error: &error)
        if(self.mp != nil) {
            mp!.prepareToPlay()
            setupSlider()
            mp!.play(nil)
        } else {
            if let e = error {
                println("Error \(e.localizedDescription)")
            }
        }
    }
}

Not surprisingly, that works. But why would you want to do this?

Playing a MusicSequence

The hoary grizzled MusicSequence from the AudioToolbox is still the only way to create a MIDI Sequence on the fly. If you have an app where the user taps in notes, you can store them in a MusicSequence for example. But AVMIDIPlayer has no init function that takes a MusicSequence. Our choices are an NSURL or NSData.

An NSURL doesn’t make sense, but what about NSData? Can you turn a MusicSequence into NSData? Well, there’s MusicSequenceFileCreateData(). You pass this function a data variable, and it fills it with the bytes that would be written to a standard MIDI file. You can then use that NSData in the player code from the previous example.

func seqToData(musicSequence:MusicSequence) -> NSData {
    var status = OSStatus(noErr)
    var data:Unmanaged<CFData>?
    status = MusicSequenceFileCreateData(musicSequence,
        MusicSequenceFileTypeID(kMusicSequenceFile_MIDIType),
        MusicSequenceFileFlags(kMusicSequenceFileFlags_EraseFile),
       480, // resolution
       &data)
    return data!.takeUnretainedValue()
}

I haven’t checked to see if there is a memory leak with the takeUnretainedValue call. I’ll check that out next.

update: I checked and there is indeed a small memory leak.
The docs for MusicSequenceFileCreateData say that the caller is responsible for releasing the CFData, so takeUnretainedValue by itself leaves that retain unbalanced. I tried saving the data variable as an ivar, checking for nil when playing again, then calling release(). Crash. What about DisposeMusicSequence? OK, I tried saving the sequence as an ivar and calling that. No crash, but memory still leaks. CFRelease is simply unavailable from Swift.
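
One more thing on my list to try (untested as I write this): since MusicSequenceFileCreateData follows the Core Foundation Create rule, takeRetainedValue() should hand that +1 retain over to ARC and let it do the release.

// in seqToData, instead of takeUnretainedValue():
return data!.takeRetainedValue()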

What do you think? Advice?

Summary

So you can play a MusicSequence with sounds via an AVMIDIPlayer. You just need to know the secret handshake.
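
Putting the two pieces together, the seqToData function above feeds straight into the NSData init from earlier. A sketch; soundbank is the same SoundFont URL as before.

let data = seqToData(musicSequence)
var error:NSError?
self.mp = AVMIDIPlayer(data: data, soundBankURL: soundbank!, error: &error)
if self.mp != nil {
    mp!.prepareToPlay()
    mp!.play(nil) // still passing nil; the completion handler crash applies here too
}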

Swift: AUGraph and MusicSequence

The AudioToolbox MusicSequence remains the only way to create a MIDI Sequence programmatically. The AVFoundation class AVMIDIPlayer will play a MIDI file, but not a MusicSequence.

AVAudioEngine has a musicSequence property. It doesn’t seem to do anything yet (except crash when you set it). So the way to get a MusicSequence to play with instrument sounds is to create a low level core audio AUGraph and play the sequence with a MusicPlayer.

Introduction

Apple is moving towards a higher level Audio API with AVFoundation. The AVAudioEngine looks promising, but it is incomplete. Right now there isn’t a way to associate an AudioToolbox MusicSequence with it. So, here I’ll use a low level Core Audio AUGraph for the sounds.

Create a MusicSequence

Let’s start by creating a MusicSequence with a MusicTrack that contains several MIDINoteMessages.

var musicSequence:MusicSequence = MusicSequence()
var status = NewMusicSequence(&musicSequence)
if status != OSStatus(noErr) {
    println("\(__LINE__) bad status \(status) creating sequence")
}
 
// add a track
var track:MusicTrack = MusicTrack()
status = MusicSequenceNewTrack(musicSequence, &track)
if status != OSStatus(noErr) {
    println("error creating track \(status)")
}
 
// now make some notes and put them on the track
var beat:MusicTimeStamp = 1.0
for i:UInt8 in 60...72 {
    var mess = MIDINoteMessage(channel: 0,
        note: i,
        velocity: 64,
        releaseVelocity: 0,
        duration: 1.0 )
    status = MusicTrackNewMIDINoteEvent(track, beat, &mess)
    if status != OSStatus(noErr) {
        println("error creating midi note event \(status)")
    }
     beat++
}

MusicPlayer create

Now you need a MusicPlayer to hear it. Let’s make one and give it our MusicSequence. Here, I also “pre-roll” the player for fast startup when you hit a play button. You don’t have to do this, but here is how.

var musicPlayer:MusicPlayer = MusicPlayer()
var status = NewMusicPlayer(&musicPlayer)
if status != OSStatus(noErr) {
    println("bad status \(status) creating player")
}
status = MusicPlayerSetSequence(musicPlayer, musicSequence)
if status != OSStatus(noErr) {
    println("setting sequence \(status)")
}
status = MusicPlayerPreroll(musicPlayer)
if status != OSStatus(noErr) {
    println("prerolling player \(status)")
}

Playing a MusicSequence

Finally, you tell the player to play like this – probably from an IBAction.

status = MusicPlayerStart(musicPlayer)
if status != OSStatus(noErr) {
    println("Error starting \(status)")
    return
}

Wonderful sine waves! What if you want to hear something that approximates actual instruments?

Well, you can load SoundFont or DLS banks – or even individual sound files. Here, I’ll load a SoundFont.
Load it into what? Well, here I’ll load it into a core audio sampler – an AudioUnit. That means I’ll need to create a core audio AUGraph.

The end of the story: you associate an AUGraph with the MusicSequence like this.

MusicSequenceSetAUGraph(musicSequence, self.processingGraph)

Create an AUGraph

Great. So how do you make an AUGraph? If you want a bit more detail, look at my blog post on it using Objective-C. Here, I’ll just outline the steps.

Create the AUGraph with NewAUGraph. It is useful to define it as an instance variable.

var processingGraph:AUGraph
 
var status = NewAUGraph(&self.processingGraph)

Create sampler

To create the sampler and add it to the graph, you need to create an AudioComponentDescription.

var samplerNode:AUNode
 
var cd:AudioComponentDescription = AudioComponentDescription(
    componentType: OSType(kAudioUnitType_MusicDevice),
    componentSubType: OSType(kAudioUnitSubType_Sampler),
    componentManufacturer: OSType(kAudioUnitManufacturer_Apple),
    componentFlags: 0,
    componentFlagsMask: 0)
status = AUGraphAddNode(self.processingGraph, &cd, &samplerNode)

Create IO node

Create an output node in the same manner.

var ioUnitDescription:AudioComponentDescription = AudioComponentDescription(
    componentType: OSType(kAudioUnitType_Output),
    componentSubType: OSType(kAudioUnitSubType_RemoteIO),
    componentManufacturer: OSType(kAudioUnitManufacturer_Apple),
    componentFlags: 0,
    componentFlagsMask: 0)
status = AUGraphAddNode(self.processingGraph, &ioUnitDescription, &ioNode)

Obtain Audio Units

Now to wire the nodes together and init the AudioUnits. The graph needs to be open, so we do that first.
Then I obtain references to the audio units with the function AUGraphNodeInfo.

var samplerUnit:AudioUnit
var ioUnit:AudioUnit
 
status = AUGraphOpen(self.processingGraph)
 
status = AUGraphNodeInfo(self.processingGraph, self.samplerNode, nil, &samplerUnit)
 
status = AUGraphNodeInfo(self.processingGraph, self.ioNode, nil, &ioUnit)

Wiring

Now wire them using AUGraphConnectNodeInput.

var ioUnitOutputElement:AudioUnitElement = 0
var samplerOutputElement:AudioUnitElement = 0
status = AUGraphConnectNodeInput(self.processingGraph,
    self.samplerNode, samplerOutputElement, // srcnode, inSourceOutputNumber
    self.ioNode, ioUnitOutputElement) // destnode, inDestInputNumber

Starting the AUGraph

Now you can initialize and start the graph.

var status : OSStatus = OSStatus(noErr)
var outIsInitialized:Boolean = 0
status = AUGraphIsInitialized(self.processingGraph, &outIsInitialized)
if outIsInitialized == 0 {
    status = AUGraphInitialize(self.processingGraph)
}
 
var isRunning:Boolean = 0
AUGraphIsRunning(self.processingGraph, &isRunning)
if isRunning == 0 {
    status = AUGraphStart(self.processingGraph)
}

Soundfont

Go ahead and play your MusicSequence now. Crap. Sine waves again. Well yeah, we didn’t load any sounds!

Let’s create a function to load a SoundFont, then use a “preset” from that font on the sampler unit. You need to fill out an AUSamplerInstrumentData struct. One thing that may trip you up is the fileURL, which is an Unmanaged CFURL. Well, NSURL is automatically toll-free bridged to CFURL. Cool. But it is not Unmanaged, which is what is required. So, here I’m using Unmanaged.passUnretained. If you know a better way, please let me know.

Then we need to set the kAUSamplerProperty_LoadInstrument on our samplerUnit. You do that with AudioUnitSetProperty. The preset numbers are General MIDI patch numbers. In the Github repo, I created a Dictionary of patches for ease of use and an example Picker.

func loadSF2Preset(preset:UInt8)  {
    if let bankURL = NSBundle.mainBundle().URLForResource("GeneralUser GS MuseScore v1.442", withExtension: "sf2") {
        var instdata = AUSamplerInstrumentData(fileURL: Unmanaged.passUnretained(bankURL),
            instrumentType: UInt8(kInstrumentType_DLSPreset),
            bankMSB: UInt8(kAUSampler_DefaultMelodicBankMSB),
            bankLSB: UInt8(kAUSampler_DefaultBankLSB),
            presetID: preset)

        var status = AudioUnitSetProperty(
            self.samplerUnit,
            UInt32(kAUSamplerProperty_LoadInstrument),
            UInt32(kAudioUnitScope_Global),
            0,
            &instdata,
            UInt32(sizeof(AUSamplerInstrumentData)))
        CheckError(status)
    }
}

Summary

You can create a Core Audio AUGraph, attach it to a MusicSequence, and play it.
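
Roughly, the order of operations is: build and start the graph, load a preset into the sampler, build the sequence, attach the graph, and play. As a sketch (the wrapper function names here are mine, not from any API):

createAUGraph()                        // sampler node -> RemoteIO node, open, initialize, start
loadSF2Preset(0)                       // 0 = Acoustic Grand Piano in General MIDI
let sequence = createMusicSequence()   // the MIDINoteMessage loop from earlier
MusicSequenceSetAUGraph(sequence, self.processingGraph)
playSequence(sequence)                 // MusicPlayer: set sequence, preroll, start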

Swift: remove array item

Swift Array item removal without an index

The surprising contortions that you need to go through in order to remove an item from an array in Swift if you do not have its index in the array.

Introduction

I’m writing an app that uses standard music notation for input. Imagine a view with a staff where a tap inputs a note. Each “note view” represents a note model object. Then you decide that you do not want that note, so you need to delete it. You can get the note by pressing on it. Then that note needs to be deleted from a “notes array”.

So, you have the note, but not its index. If you had the index, Swift gives you zero trouble to remove it from the array.

notes.removeAtIndex(2)

But you don’t have the index. You have the item in the array. Well just use “indexOf”, right? Sure. Where is that? I couldn’t find anything like that. Let me know if you know of one.

What I ended up doing is removing the note by filtering the array. Here is a simple filter that removes the item.

var notes:[MIDINote] = []
 
func removeNote(note:MIDINote) {
   self.notes = self.notes.filter( {$0 != note} )
}

One problem. I’m using a comparison operator. My class didn’t have one.

Comparing objects

For that != operator to work, you need to implement the Equatable protocol. There is one requirement for this protocol: you provide an overload for the == operator at global scope. “Global scope” means outside of the class. When you overload the == operator, != will work too.

Like this:

func == (lhs: MIDINote, rhs: MIDINote) -> Bool {
    if lhs.pitch.midiNumber == rhs.pitch.midiNumber &&
        lhs.duration == rhs.duration &&
        lhs.channel == rhs.channel &&
        lhs.startBeat == rhs.startBeat {
            return true
    }
    return false
}
 
class MIDINote : Equatable {
   var duration = 0.0
   var channel = 0
   var startBeat = 1.0
etc.

Summary

You can remove an item from an array by writing a filter closure. But, your item must implement the Equatable protocol.
If there is a simpler way to remove an item from an array without having its index, please let me know.

Update

Many people here and in the twitterverse have kindly pointed out that there is indeed an indexOf function. But it is not named anything close to that – it is the find(array, item) function.

<soapbox>
There is a lesson in this for API writers on naming. IMHO, it is poorly named. (Is there any ambiguity in the name “indexOf”? What are the chances that a polyglot programmer would seek a method/function named indexOf vs find?). I wonder how many people are going to have indexOf in an Array extension?
</soapbox>

My other problem was finding find. In neither the Array documentation nor the Collection documentation do I see this function. Is it unreasonable for me to be looking there?
Note that filter is defined as a global function and as an Array function.

Anyway, the actual definition is this:

func find<C : CollectionType where C.Generator.Element : Equatable>(domain: C, value: C.Generator.Element) -> C.Index?

Note that it is not array specific. You can do this with other Sequences.

So, the non filter version is this:

if let index = find(self.notes, note) {
   self.notes.removeAtIndex(index)
}

I haven’t yet looked to see which is more performant. My guess is the filter version. (but not if it were a linked list).

Again, thanks for the tip.

P.S. To see these not-so-documented declarations in the Swift standard library, put this in your code:

import Swift

Then Command-Click on Swift

Swift dragging a UIView with snap

Swift dragging a UIView

Here is one simple way to drag a UIView.

Introduction

There are many ways to drag a UIView around. In my example on Github, I drag a custom UIView subclass that does nothing special besides drawing itself. In real life you’d probably have additional code in it. (One hint at that: in my example view, I fill the rectangle with a color instead of simply setting its backgroundColor property.)

I could have put the event handling in the custom UIView.
Something like this:

override func touchesBegan(touches: NSSet!, withEvent event: UIEvent!) {
etc

Nah. Here I’m going to use a UIPanGestureRecognizer. In the ViewController, I’ll install the recognizer on the “parent” view.

var pan = UIPanGestureRecognizer(target:self, action:"pan:")
pan.maximumNumberOfTouches = 1
pan.minimumNumberOfTouches = 1
self.view.addGestureRecognizer(pan)

Beginning the drag

In the recognizer action, I first grab the location of the event. Then, depending on the state of the recognizer, I implement the different parts of the drag functionality.

First, you need to get the subview that was touched. I do this in the .Began state.

func pan(rec:UIPanGestureRecognizer) {
 
        var p:CGPoint = rec.locationInView(self.view)
        var center:CGPoint = CGPointZero
 
        switch rec.state {
        case .Began:
            println("began")
            self.selectedView = view.hitTest(p, withEvent: nil)
            if self.selectedView != nil {
                self.view.bringSubviewToFront(self.selectedView!)
            }
etc.

Dragging

The actual dragging takes place in the .Changed state. If there is a view selected, I store its center property. Then I calculate how far the touch moved. You can use this as a threshold. Since I want to be able to configure whether the view can be dragged in the x or y direction (or both), I use two instance variables, shouldDragX and shouldDragY. If these are true, I set the center property of the selected view to the new location. This location has been “snapped” by a snap value. For example, if snapX is 25.0, the view will be dragged only in increments of 25.0. (The instance variables this snippet relies on are sketched after it.)

case .Changed:
            if let subview = selectedView {
                center = subview.center
                var distance = sqrt(pow((center.x - p.x), 2.0) + pow((center.y - p.y), 2.0))
                println("distance \(distance)")
 
                if subview is MyView {
                    if distance > threshold {
                        if shouldDragX {
                            subview.center.x = p.x - (p.x % snapX)
                        }
                        if shouldDragY {
                            subview.center.y = p.y - (p.y % snapY)
                        }
                    }
                }
            }
etc.
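
For completeness, these are the knobs the snippet assumes as instance variables (the values here are just examples):

var selectedView:UIView?
var threshold:CGFloat = 5.0   // minimum drag distance before the view starts to move
var snapX:CGFloat = 25.0      // horizontal snap increment
var snapY:CGFloat = 25.0      // vertical snap increment
var shouldDragX = true
var shouldDragY = true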

Ending the drag

Then, in the .Ended state, I set the selectedView to nil to start over. You can also do whatever processing you need here.

case .Ended:
   if let subview = selectedView {
      if subview is MyView {
          // do whatever
      }
   }
   // must do this of course
   selectedView = nil

Summary

Install a UIPanGestureRecognizer on a parent view to drag a subview. It goes without saying that the parent view should do no layout on its subviews, nor should they have any constraints.

This is not “drag and drop”, because there is no data transfer. You are simply rearranging the location of a UIView.

AVFoundation audio recording with Swift

Swift AVFoundation Recorder

Use AVFoundation to create an audio recording.

Introduction

AVFoundation makes audio recording a lot simpler than recording using Core Audio. Essentially, you simply configure and create an AVAudioRecorder instance, and tell it to record/stop in actions.

Creating a Recorder

The first thing you need to do when creating a recorder is to specify the audio format that the recorder will use. This is a Dictionary of settings. For the AVFormatIDKey there are several
predefined Core Audio data format identifiers such as kAudioFormatLinearPCM and kAudioFormatAC3. Here are a few settings to record in Apple Lossless format.

var recordSettings = [
   AVFormatIDKey: kAudioFormatAppleLossless,
   AVEncoderAudioQualityKey : AVAudioQuality.Max.toRaw(),
   AVEncoderBitRateKey : 320000,
   AVNumberOfChannelsKey: 2,
   AVSampleRateKey : 44100.0
]

Then you create the recorder with those settings and the URL of the output sound file. If the recorder is created successfully, you can then call prepareToRecord() which will create or overwrite the sound file at the specified URL. If you’re going to write a VU meter style graph, you can tell the recorder to meter the recording. You’ll have to install a timer to periodically ask the recorder for the values. (See the github project).

var error: NSError?
self.recorder = AVAudioRecorder(URL: soundFileURL, settings: recordSettings, error: &error)
if let e = error {
   println(e.localizedDescription)
} else {
   recorder.delegate = self
   recorder.meteringEnabled = true
   recorder.prepareToRecord() // creates/overwrites the file at soundFileURL
}
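
The snippet above assumes a soundFileURL. Here is one typical way to build one in the app’s Documents directory (an assumption on my part, not from the original project):

let docsDir = NSSearchPathForDirectoriesInDomains(.DocumentDirectory, .UserDomainMask, true)[0] as String
let soundFileURL = NSURL(fileURLWithPath: docsDir + "/recording.caf") // Apple Lossless in a Core Audio Format container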

Recorder Delegate

I set the recorder’s delegate in order to be notified that the recorder has stopped recording. At this point you can update the UI (e.g. enable a disabled play button) and/or prompt the user to keep or discard the recording. In this example I use the new iOS 8 UIAlertController class. If the user says “delete the recording”, simply call deleteRecording() on the recorder instance.

extension RecorderViewController : AVAudioRecorderDelegate {
 
    func audioRecorderDidFinishRecording(recorder: AVAudioRecorder!,
        successfully flag: Bool) {
            println("finished recording \(flag)")
            stopButton.enabled = false
            playButton.enabled = true
            recordButton.setTitle("Record", forState:.Normal)
 
            // ios8 and later
            var alert = UIAlertController(title: "Recorder",
                message: "Finished Recording",
                preferredStyle: .Alert)
            alert.addAction(UIAlertAction(title: "Keep", style: .Default, handler: {action in
                println("keep was tapped")
            }))
            alert.addAction(UIAlertAction(title: "Delete", style: .Default, handler: {action in
                self.recorder.deleteRecording()
            }))
            self.presentViewController(alert, animated:true, completion:nil)
    }
 
    func audioRecorderEncodeErrorDidOccur(recorder: AVAudioRecorder!,
        error: NSError!) {
            println("\(error.localizedDescription)")
    }
}

Recording

In order to record, you need to ask the user for permission to record first. The AVAudioSession class has a requestRecordPermission() function to which you provide a closure. If granted, you set the session’s category to AVAudioSessionCategoryPlayAndRecord, set up the recorder as described above, and install a timer if you want to check the metering levels.

AVAudioSession.sharedInstance().requestRecordPermission({(granted: Bool)-> Void in
   if granted {
      self.setSessionPlayAndRecord()
      self.setupRecorder()
      self.recorder.record()
      self.meterTimer = NSTimer.scheduledTimerWithTimeInterval(0.1,
         target:self,
         selector:"updateAudioMeter:",
         userInfo:nil,
         repeats:true)
    } else {
      println("Permission to record not granted")
    }
})

Here is a very simple function to display the metering level to stdout, as well as displaying the current recording time. Yes, string formatting is awkward in Swift. Have a better way? Let me know.

func updateAudioMeter(timer:NSTimer) {
   if recorder.recording {
      let dFormat = "%02d"
      let min:Int = Int(recorder.currentTime / 60)
      let sec:Int = Int(recorder.currentTime % 60)
      let s = "\(String(format: dFormat, min)):\(String(format: dFormat, sec))"
      statusLabel.text = s
      recorder.updateMeters()
      var apc0 = recorder.averagePowerForChannel(0)
      var peak0 = recorder.peakPowerForChannel(0)
      // print them out...
   }
}

Summary

That’s it. You now have an audio recording that you can play back using an AVAudioPlayer instance.
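
For instance, a minimal playback sketch, assuming the soundFileURL the recorder wrote to:

var playError:NSError?
let player = AVAudioPlayer(contentsOfURL: soundFileURL, error: &playError)
if let e = playError {
    println(e.localizedDescription)
} else {
    // in real code, keep the player in an instance variable so it is not deallocated while playing
    player.prepareToPlay()
    player.play()
}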
