I will briefly go over all the elements of my app:

I have an application that records audio to an AVAudioPCMBuffer. This buffer is then converted to NSData and then to [UInt8]. It is then streamed over an OutputStream. On another device, this data is received using an InputStream. Then it is converted to NSData, and back to an AVAudioPCMBuffer. This buffer is then played.
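To make that conversion chain concrete, here is a minimal, AVFoundation-free sketch of the Float32 → bytes → Float32 round trip the pipeline relies on (the sample values are arbitrary and illustrative, not my production code):

```swift
import Foundation

// Hypothetical samples standing in for one channel of an AVAudioPCMBuffer.
let samples: [Float] = [0.0, 0.5, -0.5, 1.0]

// Serialize: Float32 samples -> raw bytes (what my send path does).
let data = samples.withUnsafeBufferPointer { Data(buffer: $0) }
// 4 samples * 4 bytes per Float32 = 16 bytes on the wire.
assert(data.count == samples.count * MemoryLayout<Float>.size)

// Deserialize: raw bytes -> Float32 samples (what my receive path does).
let decoded: [Float] = data.withUnsafeBytes { raw in
    Array(raw.bindMemory(to: Float.self))
}
assert(decoded == samples) // lossless round trip
```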

The issue is that the audio is very jittery and you can't make out voices; you can only tell that the audio gets louder or quieter depending on whether the other person is talking.

When scheduling the buffer:

self.peerAudioPlayer.scheduleBuffer(audioBuffer, completionHandler: nil)

I tried delaying this audio for a few seconds before playing it, hoping that would make it clearer, but it did not help. My best guess is that the buffer I'm creating is somehow cutting off some of the audio. So I will show you my relevant code:

Here is how I record audio:

localInput?.installTap(onBus: 1, bufferSize: 4096, format: localInputFormat) { (buffer, when) -> Void in
    let data = self.audioBufferToNSData(PCMBuffer: buffer)
    let output = self.outputStream!.write(data.bytes.assumingMemoryBound(to: UInt8.self),
                                          maxLength: data.length)
}
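One thing I'm unsure about here: OutputStream.write can write fewer bytes than requested, and my tap only issues a single write. For reference, this is a hedged sketch of a loop that retries until the whole buffer goes out (writeAll is my own name, not an API), shown against an in-memory stream:

```swift
import Foundation

/// Writes every byte of `data` to `stream`, looping over partial writes.
/// Returns false if the stream reports an error. (Sketch only.)
func writeAll(_ data: Data, to stream: OutputStream) -> Bool {
    var offset = 0
    while offset < data.count {
        let written = data.withUnsafeBytes { raw -> Int in
            let base = raw.baseAddress!.advanced(by: offset)
                .assumingMemoryBound(to: UInt8.self)
            return stream.write(base, maxLength: data.count - offset)
        }
        if written <= 0 { return false } // error or stream closed
        offset += written
    }
    return true
}

// Usage against an in-memory stream instead of a network socket:
let stream = OutputStream.toMemory()
stream.open()
let payload = Data([1, 2, 3, 4, 5])
assert(writeAll(payload, to: stream))
let echoed = stream.property(forKey: .dataWrittenToMemoryStreamKey) as! Data
assert(echoed == payload)
stream.close()
```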

audioBufferToNSData is just a method that converts an AVAudioPCMBuffer to NSData; here it is:

func audioBufferToNSData(PCMBuffer: AVAudioPCMBuffer) -> NSData {
    let channelCount = 1 // given PCMBuffer channel count is 1
    let channels = UnsafeBufferPointer(start: PCMBuffer.floatChannelData, count: channelCount)
    let data = NSData(bytes: channels[0],
                      length: Int(PCMBuffer.frameCapacity * PCMBuffer.format.streamDescription.pointee.mBytesPerFrame))
    return data
}

I'm wondering if the issue could be in the method above. Possibly the way I calculate the length of the NSData object is cutting off part of the audio.
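To sanity-check that suspicion with plain numbers (hypothetical values, since I haven't logged the real ones): a tap buffer's frameCapacity can exceed its frameLength, and in that case sizing the NSData from capacity would send extra stale bytes rather than cut audio off:

```swift
import Foundation

// Hypothetical tap-callback values for a mono Float32 buffer.
let frameCapacity = 4704                      // what the buffer can hold
let frameLength   = 4410                      // what the tap actually filled
let bytesPerFrame = MemoryLayout<Float>.size  // 4 bytes per Float32 frame, 1 channel

let bytesFromCapacity = frameCapacity * bytesPerFrame // what my code sends
let bytesFromLength   = frameLength * bytesPerFrame   // what was actually recorded

// With these numbers I'd be sending 1176 uninitialized bytes per buffer,
// which is extra garbage in the stream, not truncated audio.
assert(bytesFromCapacity - bytesFromLength == 1176)
```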

On the receiving end I have this:

case Stream.Event.hasBytesAvailable:
    DispatchQueue.global().async {
        var tempBuffer: [UInt8] = .init(repeating: 0, count: 17640)
        let length = self.inputStream!.read(&tempBuffer, maxLength: tempBuffer.count)
        self.testBufferCount += length
        self.testBuffer.append(contentsOf: tempBuffer)
        if (self.testBufferCount >= 17640) {
            let data = NSData.init(bytes: &self.testBuffer, length: self.testBufferCount)
            let audioBuffer = self.dataToPCMBuffer(data: data)
            self.peerAudioPlayer.scheduleBuffer(audioBuffer, completionHandler: nil)
            self.testBuffer.removeAll()
            self.testBufferCount = 0
        }
    }

The reason I check for 17640 is that each chunk of data being sent is exactly 17640 bytes, so I need to receive all of it before I play it.
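For clarity, this is the framing logic I'm attempting, as a hedged, AVFoundation-free sketch (PacketAssembler and packetSize are my own names): reads can arrive in arbitrary-sized chunks, and I only want to hand off complete 17640-byte packets. If my arithmetic is right, 17640 bytes is 4410 Float32 frames, i.e. 0.1 s of mono 44.1 kHz audio.

```swift
import Foundation

let packetSize = 17640 // bytes; 4410 Float32 frames = 0.1 s at 44.1 kHz mono

/// Accumulates arbitrary-sized reads and emits only complete packets. (Sketch.)
struct PacketAssembler {
    private var pending: [UInt8] = []

    mutating func append(_ chunk: ArraySlice<UInt8>) -> [[UInt8]] {
        pending.append(contentsOf: chunk)
        var packets: [[UInt8]] = []
        while pending.count >= packetSize {
            packets.append(Array(pending.prefix(packetSize)))
            pending.removeFirst(packetSize)
        }
        return packets
    }
}

// Simulate two reads that split one packet unevenly.
var assembler = PacketAssembler()
let bytes = [UInt8](repeating: 7, count: packetSize + 100)
assert(assembler.append(bytes[0..<10000]).isEmpty)  // not enough bytes yet
let packets = assembler.append(bytes[10000...])     // completes one packet
assert(packets.count == 1 && packets[0].count == packetSize)
```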

Furthermore, the dataToPCMBuffer method just converts NSData to an AVAudioPCMBuffer so that it can be played. Here is that method:

func dataToPCMBuffer(data: NSData) -> AVAudioPCMBuffer {
    let audioFormat = AVAudioFormat(commonFormat: .pcmFormatFloat32,
                                    sampleRate: 44100,
                                    channels: 1,
                                    interleaved: false) // given NSData audio format
    let audioBuffer = AVAudioPCMBuffer(pcmFormat: audioFormat,
                                       frameCapacity: UInt32(data.length) / audioFormat.streamDescription.pointee.mBytesPerFrame)
    audioBuffer.frameLength = audioBuffer.frameCapacity
    let channels = UnsafeBufferPointer(start: audioBuffer.floatChannelData,
                                       count: Int(audioBuffer.format.channelCount))
    data.getBytes(UnsafeMutableRawPointer(channels[0]), length: data.length)
    return audioBuffer
}

Thank you in advance!