Exploring iPhone Audio Part 1

The iPhone can record and play back audio in a variety of formats, and this functionality is useful in a multitude of applications. You could build a simple audio clip recorder and player, a full-blown audio conferencing system, or software that records audio clips and automatically uploads them to a blog, just to name a few.

This series of articles will explore the Audio API available in the iPhone SDK. We will delve into recording and playing back audio in various formats, as well as how to store audio data in the iPhone's flash memory.

This article focuses on recording audio. We will look at the code required to prepare for recording, which consists of describing the desired audio format and creating an audio input queue that uses it.

The first task is to set up a data structure to hold information about the audio recording session. This structure is bare bones for now but we’ll add more to it in later articles.

#include <AudioToolbox/AudioToolbox.h>

#define NUM_BUFFERS 3

typedef struct
{
    AudioStreamBasicDescription dataFormat;   // format of the audio being captured
    AudioQueueRef queue;                      // the audio input queue
    AudioQueueBufferRef buffers[NUM_BUFFERS]; // buffers the queue fills with audio data
} RecordState;

dataFormat is an AudioStreamBasicDescription structure that defines the format of the audio data to be captured: the sample rate, mono vs. stereo, bits per sample, and so on.

queue is a reference to the audio input queue that we’ll initialize later. All subsequent audio API calls used for recording will reference this queue.

buffers is an array of buffers that the queue will fill with captured audio data.
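
For context, here is the shape of the callback the input queue uses to hand these buffers back once they have been filled. This is only a stub sketch: the name HandleInputBuffer is our own, and the body of the callback is covered later in the series.

// Stub with the AudioQueueInputCallback signature required by the
// audio input queue. The queue passes our RecordState back through
// inUserData. The name HandleInputBuffer is hypothetical.
static void HandleInputBuffer(void *inUserData,
                              AudioQueueRef inAQ,
                              AudioQueueBufferRef inBuffer,
                              const AudioTimeStamp *inStartTime,
                              UInt32 inNumberPacketDescriptions,
                              const AudioStreamPacketDescription *inPacketDescs)
{
    RecordState *recordState = (RecordState *)inUserData;
    // inBuffer->mAudioData now holds captured audio; processing and
    // re-enqueuing the buffer are covered in the next article.
    (void)recordState;
}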

Here is an example of populating the dataFormat structure:

RecordState recordState;
 
recordState.dataFormat.mSampleRate = 8000.0;              // 8 kHz sample rate
recordState.dataFormat.mFormatID = kAudioFormatLinearPCM; // uncompressed PCM
recordState.dataFormat.mFramesPerPacket = 1;              // linear PCM: one frame per packet
recordState.dataFormat.mChannelsPerFrame = 1;             // mono
recordState.dataFormat.mBytesPerFrame = 2;                // 1 channel * 16 bits = 2 bytes
recordState.dataFormat.mBytesPerPacket = 2;               // 2 bytes per frame * 1 frame per packet
recordState.dataFormat.mBitsPerChannel = 16;              // 16-bit samples
recordState.dataFormat.mReserved = 0;
recordState.dataFormat.mFormatFlags =
                kLinearPCMFormatFlagIsBigEndian |
                kLinearPCMFormatFlagIsSignedInteger |
                kLinearPCMFormatFlagIsPacked;
// Note: iPhone hardware is little-endian; if the queue rejects this
// format, drop kLinearPCMFormatFlagIsBigEndian.

mSampleRate is the number of samples that will be captured per second. 8000 samples per second (8 kHz) is suitable for voice recording; 44100 samples per second (44.1 kHz) is the rate used for audio CD quality recording.

mFormatID selects the codec (compressor/decompressor) that will be used for recording. PCM stands for pulse-code modulation and is uncompressed audio data; several other, compressed formats are also available.
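
For reference, a few of the other AudioFormatID constants defined in the SDK (which of these the hardware actually supports for recording varies by device and OS version):

kAudioFormatAppleIMA4      // IMA 4:1 ADPCM, a lightweight compressed format
kAudioFormatMPEG4AAC       // MPEG-4 AAC
kAudioFormatAppleLossless  // Apple Lossless (ALAC)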

mChannelsPerFrame is set to 1 indicating mono recording.

mBitsPerChannel is set to 16 for 16 bit recording.

mBytesPerPacket is 2 because each packet holds one frame, and a mono frame of 16-bit (2-byte) samples occupies 2 bytes.

The remaining settings will be discussed in later articles.
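
Note that for packed linear PCM these byte counts do not need to be hard-coded; they follow from the channel count and bit depth. Here is a minimal sketch of deriving them (the helper name FillPCMFormat is our own, and it uses the device's native endianness rather than the big-endian flag above):

// Fill an AudioStreamBasicDescription for packed, signed-integer,
// native-endian linear PCM. FillPCMFormat is a hypothetical helper.
static void FillPCMFormat(AudioStreamBasicDescription *fmt,
                          Float64 sampleRate,
                          UInt32 channels,
                          UInt32 bitsPerChannel)
{
    fmt->mSampleRate = sampleRate;
    fmt->mFormatID = kAudioFormatLinearPCM;
    fmt->mFramesPerPacket = 1;                      // linear PCM: one frame per packet
    fmt->mChannelsPerFrame = channels;
    fmt->mBitsPerChannel = bitsPerChannel;
    fmt->mBytesPerFrame = channels * (bitsPerChannel / 8);
    fmt->mBytesPerPacket = fmt->mBytesPerFrame * fmt->mFramesPerPacket;
    fmt->mReserved = 0;
    fmt->mFormatFlags = kLinearPCMFormatFlagIsSignedInteger |
                        kLinearPCMFormatFlagIsPacked;
}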

These parameters set up recording of PCM (raw audio) data at 8000 samples per second, mono, with 16 bits per sample. These settings are suitable for voice recording. At 2 bytes per sample, this setup will produce 8000 × 2 = 16,000 bytes of data per second.
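
With the format described, the audio input queue itself can be created with AudioQueueNewInput. A minimal sketch, assuming the HandleInputBuffer stub shown earlier; error handling is reduced to a status check, and buffer allocation and enqueuing are covered in the next article:

OSStatus status = AudioQueueNewInput(&recordState.dataFormat,
                                     HandleInputBuffer,    // invoked when a buffer fills
                                     &recordState,         // passed back as inUserData
                                     NULL,                  // NULL: use an internal thread
                                     kCFRunLoopCommonModes,
                                     0,                     // reserved, must be 0
                                     &recordState.queue);
if (status != noErr)
{
    // The format was rejected or the queue could not be created.
}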

The next article in this series will demonstrate enqueuing buffers on the audio input queue and starting the recording.


6 Responses to “Exploring iPhone Audio Part 1”

  1. [...] Last time we created the RecordState structure to keep track of the recording state. We also configured the recording parameters to record 8000 samples per second, 16 bit, mono audio. [...]

  2. peeInMyPantz says:

    What do I need to modify to make this code play an AMR file?

  3. Gilbert Mackall says:

    Can’t get the iPhone to accept this description

    aqData.mDataFormat.mSampleRate = 8000.0;
    aqData.mDataFormat.mFormatID = kAudioFormatLinearPCM;
    aqData.mDataFormat.mFormatFlags = kAudioFormatFlagIsFloat | kAudioFormatFlagIsBigEndian ;
    //aqData.mDataFormat.mFormatFlags = kAudioFormatFlagIsSignedInteger| kLinearPCMFormatFlagIsPacked;
    aqData.mDataFormat.mFramesPerPacket = 1;
    aqData.mDataFormat.mChannelsPerFrame = 1;
    aqData.mDataFormat.mBitsPerChannel = 32;
    aqData.mDataFormat.mBytesPerPacket = 4;
    aqData.mDataFormat.mBytesPerFrame = 4;

  4. nebos says:

    any way to control the audio input, or is the built-in mic the only option (is there an audio line input in the UDC?)

    any way to control the actual gain of said input to compensate for very loud sounds in the external environment?

  5. Josh says:

    Hi, has anyone tried to play two audio files at the same time? If you are wondering why I need this, it is to fade out the current song while a new one starts playing. I have tried different things with no success on the phone (it worked on the simulator). Any help would be greatly appreciated.

  6. Tom says:

    This is a great article, but it does fail to mention two things in setting up for this code:

    1. You have to add the AudioToolbox header files
    2. You have to add the Audio Toolbox Framework to your project (for the linker)
