AVFoundation audio recording with Swift


Swift AVFoundation Recorder

Use AVFoundation to create an audio recording.

Introduction

AVFoundation makes audio recording a lot simpler than recording with Core Audio. Essentially, you configure and create an AVAudioRecorder instance, then tell it to record or stop from your action methods.

Creating a Recorder

The first thing you need to do when creating a recorder is to specify the audio format that the recorder will use. This is a Dictionary of settings. For the AVFormatIDKey there are several
predefined Core Audio data format identifiers such as kAudioFormatLinearPCM and kAudioFormatAC3. Here are a few settings to record in Apple Lossless format.

var recordSettings = [
   AVFormatIDKey: kAudioFormatAppleLossless,
   AVEncoderAudioQualityKey : AVAudioQuality.Max.toRaw(),
   AVEncoderBitRateKey : 320000,
   AVNumberOfChannelsKey: 2,
   AVSampleRateKey : 44100.0
]

Then you create the recorder with those settings and the URL of the output sound file. If the recorder is created successfully, you can call prepareToRecord(), which will create or overwrite the sound file at the specified URL. If you’re going to draw a VU-meter-style graph, you can tell the recorder to meter the recording. You’ll have to install a timer to periodically ask the recorder for the values. (See the GitHub project.)

var error: NSError?
self.recorder = AVAudioRecorder(URL: soundFileURL, settings: recordSettings, error: &error)
if let e = error {
   println(e.localizedDescription)
} else {
   recorder.delegate = self
   recorder.meteringEnabled = true
   recorder.prepareToRecord() // creates/overwrites the file at soundFileURL
}
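The code above assumes a soundFileURL instance variable. Here is one way to build it (a sketch of my own; the post's GitHub project may do it differently):

// An assumed location and name: record to Documents/recording.caf.
let docsDir = NSSearchPathForDirectoriesInDomains(.DocumentDirectory,
   .UserDomainMask, true)[0] as String
let soundFileURL = NSURL(fileURLWithPath: docsDir + "/recording.caf")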


Recorder Delegate

I set the recorder’s delegate in order to be notified that the recorder has stopped recording. At this point you can update the UI (e.g. enable a disabled play button) and/or prompt the user to keep or discard the recording. In this example I use the new iOS 8 UIAlertController class. If the user says “delete the recording”, simply call deleteRecording() on the recorder instance.

extension RecorderViewController : AVAudioRecorderDelegate {
 
    func audioRecorderDidFinishRecording(recorder: AVAudioRecorder!,
        successfully flag: Bool) {
            println("finished recording \(flag)")
            stopButton.enabled = false
            playButton.enabled = true
            recordButton.setTitle("Record", forState:.Normal)
 
            // iOS 8 and later
            var alert = UIAlertController(title: "Recorder",
                message: "Finished Recording",
                preferredStyle: .Alert)
            alert.addAction(UIAlertAction(title: "Keep", style: .Default, handler: {action in
                println("keep was tapped")
            }))
            alert.addAction(UIAlertAction(title: "Delete", style: .Default, handler: {action in
                self.recorder.deleteRecording()
            }))
            self.presentViewController(alert, animated:true, completion:nil)
    }
 
    func audioRecorderEncodeErrorDidOccur(recorder: AVAudioRecorder!,
        error: NSError!) {
            println("\(error.localizedDescription)")
    }
}


Recording

In order to record, you need to ask the user for permission to record first. The AVAudioSession class has a requestRecordPermission() function to which you provide a closure. If granted, you set the session’s category to AVAudioSessionCategoryPlayAndRecord, set up the recorder as described above, and install a timer if you want to check the metering levels.

AVAudioSession.sharedInstance().requestRecordPermission({(granted: Bool)-> Void in
   if granted {
      self.setSessionPlayAndRecord()
      self.setupRecorder()
      self.recorder.record()
      self.meterTimer = NSTimer.scheduledTimerWithTimeInterval(0.1,
         target:self,
         selector:"updateAudioMeter:",
         userInfo:nil,
         repeats:true)
    } else {
      println("Permission to record not granted")
    }
})
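setSessionPlayAndRecord() is a small helper that isn't spelled out here; a minimal sketch might look like this:

// A sketch: set the play-and-record category and activate the session.
func setSessionPlayAndRecord() {
   let session = AVAudioSession.sharedInstance()
   var error: NSError?
   if !session.setCategory(AVAudioSessionCategoryPlayAndRecord, error: &error) {
      println("could not set session category")
   }
   if !session.setActive(true, error: &error) {
      println("could not make session active")
   }
}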

Here is a very simple function that prints the metering levels to stdout and displays the current recording time. Yes, string formatting is awkward in Swift. Have a better way? Let me know.

func updateAudioMeter(timer:NSTimer) {
   if recorder.recording {
      let dFormat = "%02d"
      let min:Int = Int(recorder.currentTime / 60)
      let sec:Int = Int(recorder.currentTime % 60)
      let s = "\(String(format: dFormat, min)):\(String(format: dFormat, sec))"
      statusLabel.text = s
      recorder.updateMeters()
      let apc0 = recorder.averagePowerForChannel(0)
      let peak0 = recorder.peakPowerForChannel(0)
      println("average: \(apc0) peak: \(peak0)")
   }
}
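A stop action isn't shown above; a minimal sketch, assuming the recorder and meterTimer instance variables:

// A sketch of a stop action. The delegate's
// audioRecorderDidFinishRecording will fire after stop().
@IBAction func stopRecording(sender: UIButton) {
   recorder.stop()
   meterTimer.invalidate()
}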


Summary

That’s it. You now have an audio recording that you can play back using an AVAudioPlayer instance.


Swift AVFoundation to play audio or MIDI


Swift AVFoundation

There are many ways to play sound in iOS. Core Audio has been around for a while and it is very powerful. It is a C API, so using it from Objective-C and Swift is possible, but awkward. Apple has been moving towards a higher level API with AVFoundation. Here I will summarize how to use AVFoundation for several common audio tasks.

N.B. Some of these examples use new capabilities of iOS 8.

Playing an Audio file

Let’s start by loading an audio file with an AVAudioPlayer instance. There are several audio formats that the player will grok. I had trouble with a few MP3 files that played fine in iTunes or VLC but caused a cryptic exception in the player. So, check your source audio files first.

If you want other formats, your Mac has a converter named afconvert. See the man page.

afconvert -f caff -d LEI16 foo.mp3 foo.caf

Let’s go step by step.

Get the file URL.

let fileURL:NSURL = NSBundle.mainBundle().URLForResource("modem-dialing-02", withExtension: "mp3")

Create the player. You will need to make the player an instance variable, because a local variable will be deallocated as soon as it goes out of scope, before you hear anything.

var error: NSError?
self.avPlayer = AVAudioPlayer(contentsOfURL: fileURL, error: &error)
if avPlayer == nil {
   if let e = error {
      println(e.localizedDescription)
   }
}

You can provide the player a hint for how to parse the audio data. There are several constants you can use.

self.avPlayer = AVAudioPlayer(contentsOfURL: fileURL, fileTypeHint: AVFileTypeMPEGLayer3, error: &error)

Now configure the player. prepareToPlay() “pre-rolls” the audio file to reduce start-up delay when you finally call play(). You can also set the player’s delegate to track status.

avPlayer.delegate = self
avPlayer.prepareToPlay()
avPlayer.volume = 1.0

To set the delegate you have to make a class implement the player delegate protocol. My class has the clever name “Sound”.

// MARK: AVAudioPlayerDelegate
extension Sound : AVAudioPlayerDelegate {
    func audioPlayerDidFinishPlaying(player: AVAudioPlayer!, successfully flag: Bool) {
        println("finished playing \(flag)")
    }
     func audioPlayerDecodeErrorDidOccur(player: AVAudioPlayer!, error: NSError!) {
        println("\(error.localizedDescription)")
    }
}

Finally, here are transport controls that can be called from actions.

func stopAVPlayer() {
   if avPlayer.playing {
      avPlayer.stop()
   }
}
 
func toggleAVPlayer() {
   if avPlayer.playing {
      avPlayer.pause() 
   } else {
      avPlayer.play()
   }
}

The complete AVAudioPlayer example is available as a gist.

Audio Session

The Audio Session singleton is an intermediary between your app and the media daemon. Your app and all other apps (should) make requests to the shared session. Since we are playing an audio file, we should tell the session that this is our intention by requesting that its category be AVAudioSessionCategoryPlayback, and then make the session active. Do this in the code above, right before you call play() on the player.

Setting a session for playback.
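A minimal sketch of that (error handling kept simple):

let session = AVAudioSession.sharedInstance()
var error: NSError?
if !session.setCategory(AVAudioSessionCategoryPlayback, error: &error) {
   println("could not set session category")
}
if !session.setActive(true, error: &error) {
   println("could not make session active")
}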


Playing a MIDI file

You use AVMIDIPlayer to play standard MIDI files. Loading the player is similar to loading the AVAudioPlayer, but you also need to load a soundbank from a SoundFont or DLS file. The player also has a pre-roll prepareToPlay() function.

I’m not interested in copyright infringement, so I have not included either a DLS or SF2 file. Do a web search for a GM SoundFont2 file; they are all loaded in the same manner. I’ve tried the MuseScore SoundFont and it sounds OK. There is probably a General MIDI DLS on your OS X system already: /System/Library/Components/CoreAudio.component/Contents/Resources/gs_instruments.dls. Copy it to the project bundle if you want to try it.

self.soundbank = NSBundle.mainBundle().URLForResource("GeneralUser GS MuseScore v1.442", withExtension: "sf2")
// a standard MIDI file.
var contents:NSURL = NSBundle.mainBundle().URLForResource("ntbldmtn", withExtension: "mid")
var error:NSError?
self.mp = AVMIDIPlayer(contentsOfURL: contents, soundBankURL: soundbank, error: &error)
if self.mp == nil {
   println("nil midi player")
}
if let e = error {
   println("Error \(e.localizedDescription)")
}
self.mp.prepareToPlay()

You can also load the MIDI player with an NSData instance like this:

var data = NSData(contentsOfURL: contents)
var error:NSError?
self.mp = AVMIDIPlayer(data: data, soundBankURL: soundbank, error: &error)

Cool, so besides getting the data from a file, how about creating a sequence on the fly? There are the Core Audio MusicSequence and MusicTrack classes to do that. But damned if I can find a way to turn the sequence into NSData. Do you know one? FWIW, AVAudioEngine (q.v.) has a barely documented musicSequence variable. Maybe we can use that in the future.

In your action, call the play() function on the player. There is only one play function, and that requires a completion handler.

self.mp.play({
   println("midi done")
})

The complete AVMIDIPlayer example is available as a gist.


Audio Engine

iOS 8 introduces a new audio engine which seems to be the successor to Core Audio’s AUGraph and friends. See my article on using these classes in Swift.

The new AVAudioEngine class is the analog to AUGraph. You create AVAudioNode instances and attach them to the engine. Then you start the engine to initiate data flow.

Here is an engine with a player node attached. The player node is then connected to the engine’s main mixer node. These are instance variables.

engine = AVAudioEngine()
playerNode = AVAudioPlayerNode()
engine.attachNode(playerNode)
mixer = engine.mainMixerNode
engine.connect(playerNode, to: mixer, format: mixer.outputFormatForBus(0))

Then you need to start the engine.

var error:NSError?
if !engine.startAndReturnError(&error) {
   println("error couldn't start engine")
   if let e = error {
      println("error \(e.localizedDescription)")
   }
}

Cool. Silence.

Let’s give it something to play. It can be an audio file, or as we’ll see, a MIDI file or a computed buffer.
In this example we create an AVAudioFile instance from an MP3 file, and tell the playerNode to play it.

First, load an audio file. If you know the format of the file you can provide hints.

let fileURL = NSBundle.mainBundle().URLForResource("modem-dialing-02", withExtension: "mp3")
var error: NSError?
let audioFile = AVAudioFile(forReading: fileURL, error: &error)
// OR
//let audioFile = AVAudioFile(forReading: fileURL, commonFormat: .PCMFormatFloat32, interleaved: false, error: &error)
if let e = error {
   println(e.localizedDescription)
}

Now hand the audio file to the player node by “scheduling” it, then playing it.

engine.connect(playerNode, to: engine.mainMixerNode, format: audioFile.processingFormat)
playerNode.scheduleFile(audioFile, atTime:nil, completionHandler:nil)
if engine.running {
   playerNode.play()
} else {
   if !engine.startAndReturnError(&error) {
      println("error couldn't start engine")
      if let e = error {
         println("error \(e.localizedDescription)")
      }
   } else {
      playerNode.play()
   }
}
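A computed buffer works the same way: create an AVAudioPCMBuffer, fill it with samples, and schedule it on the player node. Here is a rough sketch of my own; the mono 44.1 kHz format and the 440 Hz frequency are assumptions:

// A sketch: compute one second of a 440 Hz sine wave and play it.
let sampleRate = 44100.0
let format = AVAudioFormat(standardFormatWithSampleRate: sampleRate, channels: 1)
let frameCount = AVAudioFrameCount(sampleRate) // one second
let buffer = AVAudioPCMBuffer(PCMFormat: format, frameCapacity: frameCount)
buffer.frameLength = frameCount
let samples = buffer.floatChannelData[0]
for i in 0..<Int(frameCount) {
   samples[i] = sinf(2.0 * Float(M_PI) * 440.0 * Float(i) / Float(sampleRate))
}
engine.connect(playerNode, to: engine.mainMixerNode, format: format)
playerNode.scheduleBuffer(buffer, atTime: nil, options: nil, completionHandler: nil)
playerNode.play()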


Playing MIDI Notes

How about triggering MIDI notes/events based on UI events? You need an instance of AVAudioUnitMIDIInstrument among your nodes. There is one concrete subclass named AVAudioUnitSampler. Create a sampler and attach it to the engine.

sampler = AVAudioUnitSampler()
engine.attachNode(sampler)
engine.connect(sampler, to: engine.outputNode, format: nil)

At init time, create a URL to your SoundFont or DLS file as we did previously.

soundbank = NSBundle.mainBundle().URLForResource("GeneralUser GS MuseScore v1.442", withExtension: "sf2")

Then in your UI’s action function, load the appropriate instrument into the sampler. The program parameter is a General MIDI instrument number; you might want to set up constants for these. Soundbanks contain multiple banks of sounds, and you specify which bank to use with the bankMSB and bankLSB parameters. I use a Core Audio constant here to choose the “melodic” bank rather than the “percussion” bank.

// probably instance variables
let melodicBank:UInt8 = UInt8(kAUSampler_DefaultMelodicBankMSB)
let gmMarimba:UInt8 = 12
let gmHarpsichord:UInt8 = 6
 
// then in the action
var error:NSError?
if !sampler.loadSoundBankInstrumentAtURL(soundbank, program: gmHarpsichord,
            bankMSB: melodicBank, bankLSB: 0, error: &error) {
   println("could not load soundbank")
}
if let e = error {
   println("error \(e.localizedDescription)")
}

Then send a MIDI program change to the sampler. After that, you can send startNote and stopNote messages to the sampler. You need to match the parameters for each start and stop message.

self.sampler.sendProgramChange(gmHarpsichord, bankMSB: melodicBank, bankLSB: 0, onChannel: 0)
// play middle C, mezzo forte on MIDI channel 0
self.sampler.startNote(60, withVelocity: 64, onChannel: 0)
...
// in another action
self.sampler.stopNote(60, onChannel: 0)


Summary

This is a good start I hope. There are other things I’ll cover soon, such as generating and processing the audio buffer data.


Swift Dropbox quick tip


Quick Tip

If you’re going to use the Dropbox API in your Swift app, you will need a bridging header. If you don’t have one, just create a dummy Objective-C class and you will be prompted to have one created for you. Then delete the dummy class.

Then add DropboxSDK.h to the bridging header. Blammo: syntax errors in the framework. It doesn’t know about things like NSCoding. The current headers have this all over the place: #ifdef __OBJC__. Well, we’re not in Objective-C anymore, Toto.

So, add a few more imports like this to the bridging header:

#import <UIKit/UIKit.h>
#import <Foundation/Foundation.h>
#import <DropboxSDK/DropboxSDK.h>

Swift documentation


In the release notes for Xcode 6 beta 5, Apple mentions that it is using reStructuredText (quick reference) for javadoc-style documentation.

It has a long way to go, but it’s a start.

Like Java, you can create documentation blocks like this:

/** 
whatever
*/

N.B. For old-timey Objective-C developers: HeaderDoc uses /*! instead of /**

Or you can use three virgules (///) at the beginning of a line for one-liners. Open the Quick Help Inspector to see the comments, or option-click a variable or function of the type you’re documenting.
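For example, a made-up one-liner:

/// The number of lives remaining. Cats start with 9.
var lives = 9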

The notes state

Currently, only block-level markup is supported (nested bullet and enumerated lists, field lists).

Let’s see some field lists:

/** 
This is a utility class to help clean the litterbox.
:Author: Gene De Lisa
:Version: 1.0 of 2014/08/05
:Dedication: To my cat, Giacomo.
*/
class LitterBox {
...

How about bullet lists? Yes, they work.

/**
The metadata retrieved from the ipod library.
 
- albumTitle
- songs
- artwork
*/
struct AlbumInfo {
...

Enumerated lists? Not so much, even though the notes say they are supported. And other formatting like *bold* and **bolder** doesn’t work (they didn’t say it would yet).

You can use field lists for param and returns like this:

 /**
    Queries the library for an artist name containing the parameter
 
    :param: artist The artist name
 
    :returns: Nothing. The delegate is notified.
    */
    func albums(artist:String) {
...

Formatting code in your comments? I don’t see anything for that yet. Also, in Build Settings you can ask to be warned about invalid documentation comments; that doesn’t work for these comments yet.


Swift: workaround for closure as protocol


In Java, it was common to implement callbacks as anonymous classes. It was nice to be able to define the callback right where you define the button.

Here is a simple JButton ActionListener example:

JButton button = new JButton("Press me");
button.addActionListener(new ActionListener() {
  public void actionPerformed(ActionEvent e) {
    System.out.println("You clicked the button");
  }
});

Wouldn’t it be cool to do the same thing in Swift?

Let’s create a Swift protocol. Just a single method that takes an array of Strings.

protocol SimpleProtocol {
  func didReceiveResults(s: [String]) -> Void
}

Now, a method that uses the protocol.

func frob(s:String, delegate: SimpleProtocol) {
  println(s)
  delegate.didReceiveResults(["foo", "bar"])
}

Straightforward stuff so far.

Now let’s try calling our method using a trailing closure for the delegate.

frob("hi") { (a:[String]) in
 
}

Blammo. Can’t do it.
How about leaving out the params like this?

frob("hi") {
   // use $0 for the params
}

Nope. I am disappoint. I wish I could do either of these. So what can we do?

I guess we need a named class the implements the protocol.

class NonAnon:SimpleProtocol {
    func didReceiveResults(s: [String]) -> Void {
        println(s)
        for str in s {
            println(str)
        }
    }
}

Then:

var resp: NonAnon = NonAnon()
self.frob("handler", delegate: resp)
// or typed to the protocol
var handler: SimpleProtocol = resp
self.frob("handler", delegate: handler)

Yeah, OK. What if you want to call a method on the calling class? One way is to pass in the calling class (in this case ViewController, which has a method named blob()).

class NonAnonDelegate:SimpleProtocol {
    var vc:ViewController?
 
    init(c:ViewController) {
        self.vc = c
    }
    func didReceiveResults(s: [String]) -> Void {
        vc?.blob()
    }
}

In a Java anonymous inner class you have access to the methods and variables of the outer class. Will that work?

class ViewController {
// ... other properties and methods ...
 
func blob() {}
 
 class NonAnonDelegate:SimpleProtocol {
    var vc:ViewController?
 
    init(c:ViewController) {
        self.vc = c
    }
    func didReceiveResults(s: [String]) -> Void {
        // ok
        vc?.blob()
        // blows up Xcode
         blob()
    }
 }
}

Nope. The Swift compiler barfs on its shoes and the Xcode editor goes into convulsions.
So, yes, it’s ugly. I’ll be using the NonAnonDelegate:SimpleProtocol version. Swift nested classes don’t have access to the outer class, but you can still define them near where they are needed.

Unless you know a better way. Please let me know.
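One possibility, if you control the API: declare the parameter as a function type instead of a protocol. That does allow the trailing-closure call. A sketch (frob2 is a hypothetical variant of frob):

// Take a function type instead of a protocol.
func frob2(s: String, callback: ([String]) -> Void) {
    println(s)
    callback(["foo", "bar"])
}
 
// Trailing closure syntax works here, and the closure captures
// the enclosing scope (so it could call blob() directly).
frob2("hi") { results in
    for str in results {
        println(str)
    }
}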

Download the project from this GitHub repository.


UIActivityViewController in Swift


Just a simple example of using the UIActivityViewController in Swift.

You need to be logged in to Facebook and Twitter for those sharing options to show up. You can log in within the simulator by going to the Settings app. If you do not log in, you will just see Mail and Copy as options.

Set up

All that is needed is to instantiate UIActivityViewController with the item[s] to share, then present it. Here is a simple example:

let someText:String = textView.text
let google:NSURL = NSURL(string:"http://google.com/")
 
// let's add a String and an NSURL
let activityViewController = UIActivityViewController(
            activityItems: [someText, google],
            applicationActivities: nil)
self.navigationController.presentViewController(activityViewController, 
   animated: true, 
   completion: nil)

You can specify sharing options that you do not want, like this. This list is all of them, so in real life you wouldn’t want to exclude them all.

activityViewController.excludedActivityTypes =  [
            UIActivityTypePostToTwitter,
            UIActivityTypePostToFacebook,
            UIActivityTypePostToWeibo,
            UIActivityTypeMessage,
            UIActivityTypeMail,
            UIActivityTypePrint,
            UIActivityTypeCopyToPasteboard,
            UIActivityTypeAssignToContact,
            UIActivityTypeSaveToCameraRoll,
            UIActivityTypeAddToReadingList,
            UIActivityTypePostToFlickr,
            UIActivityTypePostToVimeo,
            UIActivityTypePostToTencentWeibo
        ]

If you want to do something upon completion, install a handler like this:

activityViewController.completionHandler = {(activityType, completed:Bool) in
            if !completed {
                println("cancelled")
                return
            }
 
            if activityType == UIActivityTypePostToTwitter {
                println("twitter")
            }
 
            if activityType == UIActivityTypeMail {
                println("mail")
            }
        }

Download the project from this GitHub repository.


JavaScript/AngularJS adventures of a Java guy Part 3 – Grunt


In Part 2, I talked about managing dependencies with Bower. The next step is to see how to execute build tasks.

For example, when you build with Maven, one of the default behaviors is to copy the contents of src/resources to your build directory (target). We can do things like that with a tool named Grunt.

Grunt

There are currently two popular task runners for JavaScript. (I haven’t looked in the past 2 minutes, but there may be more now.) Grunt is the one most in use right now; it uses a JSON-like configuration object to define tasks. Gulp is a newer tool that uses JavaScript code for configuration. Gulp benefits from hindsight and tries to avoid some of Grunt’s inconveniences. Personally, I don’t think you can go wrong with either tool.

Installation

If you’ve followed along with the previous blog posts, you already have npm installed. Grunt is installed globally (not per project) via npm with this incantation:

npm install -g grunt-cli

This “command line interface (cli)” puts the command “grunt” on your path, so you do not have to type “npm grunt” or even worse “./node_modules/.bin/grunt”.

The minimal project

You need two files in your project: Grunt’s Gruntfile.js, which describes the tasks you wish to execute, and npm’s package.json, which specifies general project information along with dependencies.

You can create a package.json file by hand, or by invoking npm init and answering the prompts.
Here is a simple “by hand” package.json:

{
    "name": "grunt01",
    "version": "0.1.0"
}

To use Grunt, you will need to add it to package.json as a development dependency, and then install it. This can be accomplished with the following command.

npm install grunt --save-dev

List the node_modules directory and you’ll see grunt there. (You may see a later version, of course.)

Now take a look at package.json to see the added dependency.

{
    "name": "grunt01",
    "version": "0.1.0",
    "devDependencies": {
        "grunt": "~0.4.4"
    }
}

The gruntfile can be written in JavaScript or CoffeeScript. I’ll just use JavaScript here.

Here is a simple Gruntfile.js – that does nothing! (Actually, it registers a default task that does nothing).

module.exports = function (grunt) {
    grunt.registerTask('default', []);
};

Each Gruntfile.js will have this format. You can run this by typing:

grunt --verbose

Adding Grunt Tasks

Grunt tasks are defined by plugins. There is a registry of Grunt plugins.

Let’s start with the concat task.
Install the task.

npm install grunt-contrib-concat --save-dev

This updates node_modules (go ahead and list it), and modifies package.json:

{
  "name": "grunt01",
  "version": "0.1.0",
  "devDependencies": {
    "grunt": "^0.4.4",
    "grunt-contrib-concat": "^0.4.0"
  }
}

This is what you will do with each Grunt plugin: npm install pluginname --save-dev

Now in Gruntfile.js, you will have to load this task:
grunt.loadNpmTasks('grunt-contrib-concat');
That is not enough. We need to tell the task which files it should concatenate and where it should put the output. You configure all tasks in Grunt by invoking grunt.initConfig(configObject). The configObject contains the configuration for each task you’re going to use. Here is an example for concat.

module.exports = function (grunt) {
 
  grunt.initConfig({
    concat: {
      dist: {
        src: ['a.js', 'b.js'],
        dest: 'built.js'
      }
    }
  });
 
  grunt.loadNpmTasks('grunt-contrib-concat');
  grunt.registerTask('default', ['concat']);
};

In grunt.initConfig, I’m defining a target named dist in the concat task. You can specify multiple targets. At a minimum you need to specify the source files and the destination, as shown. These files can be – and probably are – in subdirectories, such as src/a.js and distribution/built.js. You can also use file globbing, of course.

Because I defined the “default” task as containing ‘concat’, you can run this by simply typing grunt. You will have multiple tasks in a real build, so to execute a specific task you can type the name: grunt concat.

Here is the concat task with two targets: dist and bar. Note that they write to the same destination file. When you run grunt concat, all the targets are executed. That means in this case that the output of dist will be overwritten by the output of bar. So, be careful here; I wouldn’t write targets like this in a real project. If you want to run just one target, type grunt concat:dist or grunt concat:bar.

module.exports = function (grunt) {
 
  grunt.initConfig({
    concat: {
      dist: {
        src: ['a.js', 'b.js'],
        dest: 'built.js'
      },
      bar: {
        src: ['b.js', 'a.js'],
        dest: 'built.js'
      }
    }
  });
 
  grunt.loadNpmTasks('grunt-contrib-concat');
  grunt.registerTask('default', ['concat']);
};

Copy task

To copy files, install the copy task, then load it into your gruntfile as you did with the concat task using loadNpmTasks.

npm install grunt-contrib-copy --save-dev

And the Gruntfile with the two tasks loaded and configured.

module.exports = function (grunt) {
 
  grunt.initConfig({
    concat: {
      dist: {
        src: ['a.js', 'b.js'],
        dest: 'built.js'
      },
      bar: {
        src: ['b.js', 'a.js'],
        dest: 'built.js'
      }
    },
    copy: {
      main: {
        src: 'src/*',
        dest: 'dest/'
      }
    }
 
  });
 
  grunt.loadNpmTasks('grunt-contrib-concat');
  grunt.loadNpmTasks('grunt-contrib-copy');
  grunt.registerTask('default', ['concat']);
};

Live reloading

You know how you change your HTML, CSS, or JS files in your editor, then switch to your browser and hit reload to see the new wonders you have just wrought?
Wouldn’t it be nice to have your browser reload automatically? That’s what the watch task is for.
(The livereload task has been deprecated in favor of the watch task.)

Do the usual installation cha-cha:

npm install grunt-contrib-watch --save-dev

Then load it into your gruntfile:

grunt.loadNpmTasks('grunt-contrib-watch');

Install the LiveReload Chrome Extension. Configure the extension to “Allow access to file URLs”.

Scaffolding

You can get a bit of help from grunt-init, which reads “templates” that you install into ~/.grunt-init. They are sort of like Maven archetypes. First, install grunt-init:

npm install -g grunt-init

Now install a template. Here is one that just sets up the gruntfile.

git clone https://github.com/gruntjs/grunt-init-gruntfile.git ~/.grunt-init/gruntfile

Then use it:

grunt-init gruntfile

Swift REPL


Back in the early ’80s I learned the power of LISP’s REPL (Read-Eval-Print Loop). (As a music major, mind you.) Clojure gave us a REPL for the JVM. Kids today use Python’s REPL. Swift playgrounds are peachy, but they are a bit buggy right now.

Set up

First, you need to say that you want to use the Xcode 6 beta’s developer toolchain. Here is the incantation:

sudo xcode-select -s /Applications/Xcode6-Beta.app/Contents/Developer/

(or Xcode6-Beta2.app if you have that installed now)

And to go back to Xcode 5:

sudo xcode-select -s /Applications/Xcode.app/Contents/Developer/

You might want to define aliases for those in your shell’s rc file. You could also just point the DEVELOPER_DIR environment variable at it:

DEVELOPER_DIR=/Applications/Xcode6-Beta.app/Contents/Developer/ xcrun swift

Using the REPL

Find out which SDKs are available:

xcodebuild -showsdks

Then run with one of the listed SDKs.

xcrun --sdk iphoneos8.0 swift

or

xcrun --sdk macosx10.10 swift

Read the man page xcrun(1) for more info.

man 1 xcrun

Once inside the REPL, you can type :help for assistance, or :quit to exit. Control-D also works to exit.

xcrun --sdk iphoneos8.0 swift
Welcome to Swift!  Type :help for assistance.
  1> var a = 2
a: Int = 2
  2> a + a
$R1: Int = 4
  3> func foo() -> String {return "hello"}
  4> foo()
$R2: String = "hello"
  5> var re = foo()
re: String = "hello"
  6> re
$R3: String = "hello"
  7> println(re)
hello
:quit

Use Swift in a shell script? Chris Lattner tweeted this.

Here’s a simple script.

#!/usr/bin/env xcrun swift -i
 
println("Hello World!")

Swift and Core Audio


Like many of you, I’ve been knee-deep in Swift this week. Once you get beyond the hello-world things and try something a bit more complicated, you start to learn the language. So, why not go off the deep end and try to work with what is essentially a C library: Core Audio? Turns out that’s a good way to get the types grokked.

First, don’t try to use Core Audio in a Swift playground. I wasted a day trying to do this; it doesn’t work yet. So, create a project.

I couldn’t do something as simple as this in the playground:

var processingGraph:AUGraph = AUGraph()

So, OK, I put that in a Swift class as an instance variable and created it in the init function.

Most Core Audio functions return a status code, which is defined as OSStatus. You need to specify the type on the var.

var status : OSStatus = 0
status = NewAUGraph(&processingGraph)

Or, if you want, you can cast noErr like this.

var status : OSStatus = OSStatus(noErr)

Here’s an adventure with Boolean.

The function AUGraphIsInitialized is defined like this:

func AUGraphIsInitialized(inGraph: AUGraph, outIsInitialized: CMutablePointer<Boolean>) -> OSStatus

So, you call it like this:

var status : OSStatus = OSStatus(noErr)
var outIsInitialized:Boolean = 0
status = AUGraphIsInitialized(self.processingGraph, &outIsInitialized)

That works. But how do you check it?

Boolean is defined as a CUnsignedChar (in MacTypes.h).

So, you cannot do this:

if outIsInitialized {
    // whatever
}

And you cannot cast it (could not find an overload…)

var b:Bool = Bool(outIsInitialized)

or with Swift’s “as”

var b:Bool = outIsInitialized as Bool

I’m clearly overthinking this. Because this is all that is needed. D’oh!

var outIsInitialized:Boolean = 0
status = AUGraphIsInitialized(self.processingGraph, &outIsInitialized)
if outIsInitialized == 0 {
     status = AUGraphInitialize(self.processingGraph)
     CheckError(status)
}

Another problem I had was using constants such as kAudioUnitSubType_Sampler while trying to create an AudioComponentDescription. The trick was to simply cast to OSType.

var cd: AudioComponentDescription = AudioComponentDescription(
    componentType: OSType(kAudioUnitType_MusicDevice),
    componentSubType: OSType(kAudioUnitSubType_Sampler),
    componentManufacturer: OSType(kAudioUnitManufacturer_Apple),
    componentFlags: 0,
    componentFlagsMask: 0)
status = AUGraphAddNode(self.processingGraph, &cd, &samplerNode)
CheckError(status)
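CheckError() isn't shown in the post; a minimal sketch that just logs a nonzero status:

// A sketch: complain about any nonzero Core Audio status code.
func CheckError(status: OSStatus) {
    if status != OSStatus(noErr) {
        println("Core Audio error: \(status)")
    }
}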

Here is my GitHub repository for this project. It’s a simple three-button iPhone app that plays sine tones (based on the button’s tag value).


Swift language video download problem


There are quite a few WWDC videos covering Swift. It would be nice if I could view them on my Apple TV 3 with the Apple Events app; unfortunately, the only things available there are the WWDC keynotes. Not sure I can take more than 5 minutes of hearing “beautiful” and “gorgeous”.

So, I wanted to download them and then add them to my iMac’s iTunes library. Then I could watch them on the ATV3. The problem was, the downloads kept timing out. Then I tried to view them in the browser (Chrome); the error message suggested that I use Safari. I did, and they downloaded just fine. That’s not really a nice trick they’re pulling there, but now you know the solution to the download problem.

Posted in Swift | Tagged , , | Leave a comment