mirror of https://git.eden-emu.dev/eden-emu/eden.git
synced 2025-07-20 14:05:45 +00:00

Move dead submodules in-tree

Signed-off-by: swurl <swurl@swurl.xyz>

This commit is contained in:
parent c0cceff365
commit 6c655321e6

4081 changed files with 1185566 additions and 45 deletions
49
externals/oboe/docs/AndroidAudioHistory.md
vendored
Normal file
@@ -0,0 +1,49 @@

Android audio history
===

A list of important audio features, bugs, fixes and workarounds for various Android versions. [(List of all Android Versions)](https://developer.android.com/guide/topics/manifest/uses-sdk-element#ApiLevels)

### 11.0 R - API 30

- Bug in **AAudio** on RQ1A (Oboe works around this issue from version 1.5 onwards). A stream is normally disconnected when a **headset is plugged in** or because of other device changes. An `AAUDIO_ERROR_DISCONNECTED` error code should be passed to the error callback. But a bug in Shared MMAP streams causes `AAUDIO_ERROR_TIMEOUT` to be returned instead. So if your error callback checks only for `AAUDIO_ERROR_DISCONNECTED`, it may not respond properly. We recommend **always stopping and closing the stream** regardless of the error code. Oboe does this, so if you are using Oboe callbacks you are OK. This issue was not in the original R release; it was introduced in RQ1A, which is being delivered by OTA starting in November 2020, and will be fixed in a future update. Follow it on [this public Android issue](https://issuetracker.google.com/173928197).

- Fixed: A race condition in AudioFlinger could cause an assert in `releaseBuffer()` when a headset was plugged in or out. More details [here](https://github.com/google/oboe/wiki/TechNote_ReleaseBuffer).
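The stop-and-close recommendation above can be sketched as follows. The `Stream` type and error codes here are minimal stand-ins, not the real AAudio API; the point is only the policy of reacting the same way regardless of the specific error code:

```cpp
#include <cassert>

// Hypothetical stand-ins for the AAudio error codes and stream handle
// (the real API lives in <aaudio/AAudio.h>).
enum ErrorCode { ERROR_DISCONNECTED, ERROR_TIMEOUT };

struct Stream {
    bool stopped = false;
    bool closed = false;
    void requestStop() { stopped = true; }
    void close() { closed = true; }
};

// Error callback that stops and closes unconditionally, so the buggy
// ERROR_TIMEOUT reported on RQ1A is handled the same as a disconnect.
void onError(Stream &stream, ErrorCode /*error*/) {
    stream.requestStop();
    stream.close();
}
```

Because the callback never branches on the error code, it keeps working even when the framework reports the wrong one.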
### 10.0 Q - API 29

- Fixed: Setting the capacity of Legacy input streams to < 4096 can prevent use of the FAST path. https://github.com/google/oboe/issues/183. Also fixed in AAudio with ag/7116429.

- Added `InputPreset::VoicePerformance` for low latency recording.

- Regression bug: [AAudio] Headphone disconnect event not fired for MMAP streams. See the Pie item below. Still in the first Q release but fixed in some Q updates.

### 9.0 Pie - API 28 (August 6, 2018)

- AAudio adds support for `setUsage()`, `setSessionId()`, `setContentType()` and `setInputPreset()` on builders.

- Regression bug: [AAudio] Headphone disconnect event not fired for MMAP streams. Issue [#252](https://github.com/google/oboe/issues/252). Also see tech note [Disconnected Streams](https://github.com/google/oboe/wiki/TechNote_Disconnect).

- AAudio input streams with LOW_LATENCY will open a FAST path using INT16 and convert the data to FLOAT if needed. See: https://github.com/google/oboe/issues/276

### 8.1 Oreo MR1 - API 27

- Oboe uses AAudio by default.

- AAudio MMAP data path enabled on Pixel devices. PerformanceMode::Exclusive supported.

- Fixed: [AAudio] RefBase issue.

- Fixed: Requesting a stereo recording stream can result in sub-optimal latency.

### 8.0 Oreo - API 26 (August 21, 2017)

- [AAudio API introduced](https://developer.android.com/ndk/guides/audio/aaudio/aaudio)

- Bug: RefBase issue causes a crash after the stream is closed. This is why AAudio is not recommended for 8.0; Oboe will use OpenSL ES for 8.0 and earlier. https://github.com/google/oboe/issues/40

- Bug: Requesting a stereo recording stream can result in sub-optimal latency. [Details](https://issuetracker.google.com/issues/68666622)

### 7.1 Nougat MR1 - API 25

- OpenSL ES adds support for setting and querying PerformanceMode.

### 7.0 Nougat - API 24 (August 22, 2016)

- OpenSL ES method `acquireJavaProxy` added, which allows the Java AudioTrack object associated with playback to be obtained (which, in turn, exposes the underrun count).

### 6.0 Marshmallow - API 23 (October 5, 2015)

- Floating point recording supported, but it does not allow a FAST "low latency" path.

- [MIDI API introduced](https://developer.android.com/reference/android/media/midi/package-summary)

- Sound output is broken on the API 23 emulator.

### 5.0 Lollipop - API 21 (November 12, 2014)

- Floating point playback supported.
43
externals/oboe/docs/FAQ.md
vendored
Normal file
@@ -0,0 +1,43 @@

# Frequently Asked Questions (FAQ)

## Can I write audio data from Java/Kotlin to Oboe?

Oboe is a native library written in C++ which uses the Android NDK. To move data from Java to C++ you can use [JNI](https://developer.android.com/training/articles/perf-jni).

If you're generating audio data in Java or Kotlin you should consider whether the reduced latency which Oboe gives you (particularly on high-end devices) is worth the extra complexity of passing data via JNI. An alternative is to use [Java AudioTrack](https://developer.android.com/reference/android/media/AudioTrack). This can be created with low latency using the AudioTrack.Builder method [`setPerformanceMode(AudioTrack.PERFORMANCE_MODE_LOW_LATENCY)`](https://developer.android.com/reference/android/media/AudioTrack#PERFORMANCE_MODE_LOW_LATENCY).

You can dynamically tune the latency of the stream, just like in Oboe, using [`setBufferSizeInFrames(int)`](https://developer.android.com/reference/android/media/AudioTrack.html#setBufferSizeInFrames(int)).
You can also use blocking writes with the Java AudioTrack and still get a low latency stream.
Oboe, by contrast, requires a data callback to get a low latency stream, and that does not work well with Java.

Note that [`AudioTrack.PERFORMANCE_MODE_LOW_LATENCY`](https://developer.android.com/reference/android/media/AudioTrack#PERFORMANCE_MODE_LOW_LATENCY) was added in API 26. For API 24 or 25, use [`AudioAttributes.FLAG_LOW_LATENCY`](https://developer.android.com/reference/kotlin/android/media/AudioAttributes#flag_low_latency). That flag is deprecated but will still work with later APIs.

## Can I use Oboe to play compressed audio files, such as MP3 or AAC?

Oboe only works with PCM data. It does not include any extraction or decoding classes. However, the [RhythmGame sample](https://github.com/google/oboe/tree/main/samples/RhythmGame) includes extractors for both NDK and FFmpeg.

For more information on using FFmpeg in your app [check out this article](https://medium.com/@donturner/using-ffmpeg-for-faster-audio-decoding-967894e94e71).

## Android Studio doesn't find the Oboe symbols, how can I fix this?

Start by ensuring that your project builds successfully. The main thing to do is ensure that the Oboe include paths are set correctly in your project's `CMakeLists.txt`. [Full instructions here](https://github.com/google/oboe/blob/main/docs/GettingStarted.md#2-update-cmakeliststxt).

If that doesn't fix it, try the following:

1) Invalidate the Android Studio cache by going to File->Invalidate Caches / Restart
2) Delete the contents of `$HOME/Library/Caches/AndroidStudio<version>`

We have had several reports of this happening and are keen to understand the root cause. If this happens to you please file an issue with your Android Studio version and we'll investigate further.
## I requested a stream with `PerformanceMode::LowLatency`, but didn't get it. Why not?

Usually if you call `builder.setPerformanceMode(PerformanceMode::LowLatency)` and don't specify other stream properties you will get a `LowLatency` stream. The most common reasons for not receiving one are:

- You are opening an output stream and did not specify a **data callback**.
- You requested a **sample rate** which does not match the audio device's native sample rate. For playback streams, this means the audio data you write into the stream must be resampled before it's sent to the audio device. For recording streams, the audio data must be resampled before you can read it. In both cases the resampling process (performed by the Android audio framework) adds latency, so providing a `LowLatency` stream is not possible. To avoid the resampler on API 26 and below you can specify a default value for the sample rate [as detailed here](https://github.com/google/oboe/blob/main/docs/GettingStarted.md#obtaining-optimal-latency). Or you can enable sample rate conversion by calling [AudioStreamBuilder::setSampleRateConversionQuality()](https://google.github.io/oboe/classoboe_1_1_audio_stream_builder.html#a0c98d21da654da6d197b004d29d8499c) in Oboe, which allows the lower level code to run at the optimal rate and provide lower latency.
- If you request **AudioFormat::Float on an input** stream before Android 9.0 then you will **not** get a FAST track. You need to either request AudioFormat::Int16 or enable format conversion by calling [AudioStreamBuilder::setFormatConversionAllowed()](https://google.github.io/oboe/classoboe_1_1_audio_stream_builder.html#aa30150d2d0b3c925b545646962dffca0) in Oboe.
- The audio **device** does not support `LowLatency` streams, for example Bluetooth.
- You requested a **channel count** which is not supported natively by the audio device. On most devices and Android API levels it is possible to obtain a `LowLatency` stream for both mono and stereo; however, there are a few exceptions, some of which are listed [here](https://github.com/google/oboe/blob/main/docs/AndroidAudioHistory.md).
- The **maximum number** of `LowLatency` streams has been reached. This could be by your app, or by other apps. It is often caused by opening multiple playback streams for different "tracks". To avoid this, open a single audio stream and perform your own mixing in the app.
- You are on Android 7.0 or below and are receiving `PerformanceMode::None`. The ability to query the performance mode of a stream was added in Android 7.1 (Nougat MR1). Low latency streams (aka FAST tracks) _are available_ on Android 7.0 and below but there is no programmatic way of knowing whether yours is one. [Question on StackOverflow](https://stackoverflow.com/questions/56828501/does-opensl-es-support-performancemodelowlatency/5683499)
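The single-stream mixing advice above can be sketched as a plain summing mixer. This is illustrative code only (not part of Oboe): each "track" is a float buffer, samples are summed and then clamped to the valid range:

```cpp
#include <algorithm>
#include <vector>

// Mix several "track" buffers into one output buffer by summing samples and
// clamping to [-1.0, 1.0], so only one low-latency stream is needed.
std::vector<float> mixTracks(const std::vector<std::vector<float>> &tracks,
                             size_t numSamples) {
    std::vector<float> out(numSamples, 0.0f);
    for (const auto &track : tracks) {
        for (size_t i = 0; i < std::min(numSamples, track.size()); ++i) {
            out[i] += track[i];
        }
    }
    for (float &s : out) s = std::clamp(s, -1.0f, 1.0f);  // avoid clipping overflow
    return out;
}
```

A real app would typically run this inside the output stream's data callback, but the summing-and-clamping idea is the same.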
## My question isn't listed, where can I ask it?

Please ask questions on [Stack Overflow](https://stackoverflow.com/questions/ask) with the [Oboe tag](https://stackoverflow.com/tags/oboe) or in the GitHub Issues tab.
578
externals/oboe/docs/FullGuide.md
vendored
Normal file
@@ -0,0 +1,578 @@

# Full Guide To Oboe

Oboe is a C++ library which makes it easy to build high-performance audio apps on Android. Apps communicate with Oboe by reading and writing data to streams.

## Audio streams

Oboe moves audio data between your app and the audio inputs and outputs on your Android device. Your app passes data in and out using a callback function or by reading from and writing to *audio streams*, represented by the class `AudioStream`. The read/write calls can be blocking or non-blocking.

A stream is defined by the following:

* The *audio device* that is the source or sink for the data in the stream.
* The *sharing mode* that determines whether a stream has exclusive access to an audio device that might otherwise be shared among multiple streams.
* The *format* of the audio data in the stream.

### Audio device

Each stream is attached to a single audio device.

An audio device is a hardware interface or virtual endpoint that acts as a source or sink for a continuous stream of digital audio data. Don't confuse an *audio device* (a built-in mic or Bluetooth headset) with the *Android device* (the phone or watch) that is running your app.

On API 23 and above you can use the `AudioManager` method [getDevices()](https://developer.android.com/reference/android/media/AudioManager.html#getDevices(int)) to discover the audio devices that are available on your Android device. The method returns information about the [type](https://developer.android.com/reference/android/media/AudioDeviceInfo.html) of each device.

Each audio device has a unique ID on the Android device. You can use the ID to bind an audio stream to a specific audio device. However, in most cases you can let Oboe choose the default primary device rather than specifying one yourself.

The audio device attached to a stream determines whether the stream is for input or output. A stream can only move data in one direction. When you define a stream you also set its direction. When you open a stream Android checks to ensure that the audio device and stream direction agree.

### Sharing mode

A stream has a sharing mode:

* `SharingMode::Exclusive` (available on API 26+) means the stream has exclusive access to an endpoint on its audio device; the endpoint cannot be used by any other audio stream. If the exclusive endpoint is already in use, it might not be possible for the stream to obtain access to it. Exclusive streams provide the lowest possible latency by bypassing the mixer stage, but they are also more likely to get disconnected. You should close exclusive streams as soon as you no longer need them, so that other apps can access that endpoint. Not all audio devices provide exclusive endpoints. System sounds and sounds from other apps can still be heard when an exclusive stream is in use as they use a different endpoint.



* `SharingMode::Shared` allows Oboe streams to share an endpoint. The operating system will mix all the shared streams assigned to the same endpoint on the audio device.



You can explicitly request the sharing mode when you create a stream, although you are not guaranteed to receive that mode. By default, the sharing mode is `Shared`.

### Audio format

The data passed through a stream has the usual digital audio attributes, which you must specify when you define a stream. These are as follows:

* Sample format
* Samples per frame
* Sample rate

Oboe permits these sample formats:

| AudioFormat | C data type | Notes |
| :------------ | :---------- | :---- |
| I16 | int16_t | common 16-bit samples, [Q0.15 format](https://source.android.com/devices/audio/data_formats#androidFormats) |
| Float | float | -1.0 to +1.0 |
| I24 | N/A | 24-bit samples packed into 3 bytes, [Q0.23 format](https://source.android.com/devices/audio/data_formats#androidFormats). Added in API 31 |
| I32 | int32_t | common 32-bit samples, [Q0.31 format](https://source.android.com/devices/audio/data_formats#androidFormats). Added in API 31 |
| IEC61937 | N/A | compressed audio wrapped in IEC61937 for HDMI or S/PDIF passthrough. Added in API 34 |
| MP3 | N/A | compressed audio in MP3 format. Added in API 36 |
| AAC_LC | N/A | compressed audio in AAC LC format. Added in API 36 |
| AAC_HE_V1 | N/A | compressed audio in AAC HE V1 format. Added in API 36 |
| AAC_HE_V2 | N/A | compressed audio in AAC HE V2 format. Added in API 36 |
| AAC_ELD | N/A | compressed audio in AAC ELD format. Added in API 36 |
| AAC_XHE | N/A | compressed audio in AAC XHE format. Added in API 36 |
| OPUS | N/A | compressed audio in OPUS format. Added in API 36 |

Oboe might perform sample conversion on its own. For example, if an app is writing AudioFormat::Float data but the HAL uses AudioFormat::I16, Oboe might convert the samples automatically. Conversion can happen in either direction. If your app processes audio input, it is wise to verify the input format and be prepared to convert data if necessary, as in this example:

    AudioFormat dataFormat = stream->getDataFormat();
    //... later
    if (dataFormat == AudioFormat::I16) {
        convertFloatToPcm16(...)
    }
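The `convertFloatToPcm16(...)` call above is left elided in the guide. A minimal version of such a conversion (scaling float samples to Q0.15 with clamping) could look like the following; this is an illustrative sketch, not Oboe's internal converter:

```cpp
#include <algorithm>
#include <cstddef>
#include <cstdint>

// Convert float samples in [-1.0, 1.0] to 16-bit PCM (Q0.15),
// clamping out-of-range input to avoid integer overflow.
void convertFloatToPcm16(const float *input, int16_t *output, size_t numSamples) {
    for (size_t i = 0; i < numSamples; ++i) {
        float s = std::clamp(input[i], -1.0f, 1.0f);
        output[i] = static_cast<int16_t>(s * 32767.0f);
    }
}
```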
## Creating an audio stream

The Oboe library follows a [builder design pattern](https://en.wikipedia.org/wiki/Builder_pattern) and provides the class `AudioStreamBuilder`.

### Set the audio stream configuration using an AudioStreamBuilder.

Use the builder functions that correspond to the stream parameters. These optional set functions are available:

    AudioStreamBuilder streamBuilder;

    streamBuilder.setDeviceId(deviceId);
    streamBuilder.setDirection(direction);
    streamBuilder.setSharingMode(shareMode);
    streamBuilder.setSampleRate(sampleRate);
    streamBuilder.setChannelCount(channelCount);
    streamBuilder.setFormat(format);
    streamBuilder.setPerformanceMode(perfMode);

Note that these methods do not report errors, such as an undefined constant or a value out of range. The values will be checked when the stream is opened.

If you do not specify the deviceId, the default is the primary output device.
If you do not specify the stream direction, the default is an output stream.
For all parameters, you can explicitly set a value, or let the system assign the optimal value by not specifying the parameter at all or setting it to `kUnspecified`.

To be safe, check the state of the audio stream after you create it, as explained in step 3, below.

### Open the Stream

Declare a **shared pointer** for the stream. Make sure it is declared with the appropriate scope. The best place is as a member variable in a managing class or as a global. Avoid declaring it as a local variable because the stream may get deleted when the function returns.

    std::shared_ptr<oboe::AudioStream> mStream;

After you've configured the `AudioStreamBuilder`, call `openStream()` to open the stream:

    Result result = streamBuilder.openStream(mStream);
    if (result != Result::OK){
        __android_log_print(ANDROID_LOG_ERROR,
                            "AudioEngine",
                            "Error opening stream %s",
                            convertToText(result));
    }

### Verifying stream configuration and additional properties

You should verify the stream's configuration after opening it.

The following properties are guaranteed to be set. However, if these properties are unspecified, a default value will still be set, and should be queried by the appropriate accessor.

* framesPerDataCallback
* sampleRate
* channelCount
* format
* direction

The following properties may be changed by the underlying stream construction *even if explicitly set* and therefore should always be queried by the appropriate accessor. The property settings will depend on device capabilities.

* bufferCapacityInFrames
* sharingMode (exclusive provides lowest latency)
* performanceMode

The following properties are only set by the underlying stream. They cannot be set by the application, but should be queried by the appropriate accessor.

* framesPerBurst

The following properties have unusual behavior:

* deviceId is respected when the underlying API is AAudio (API level >= 28), but not when it is OpenSL ES. It can be set regardless, but *will not* throw an error if an OpenSL ES stream is used. The default device will be used, rather than whatever is specified.

* mAudioApi is only a property of the builder; however, AudioStream::getAudioApi() can be used to query the underlying API which the stream uses. The property set in the builder is not guaranteed, and in general, the API should be chosen by Oboe for the best performance and stability. Since Oboe is designed to be as uniform across both APIs as possible, this property should not generally be needed.

* mBufferSizeInFrames can only be set on an already open stream (as opposed to a builder), since it depends on run-time behavior. The actual size used may not be what was requested. Oboe or the underlying API will limit the size between zero and the buffer capacity. It may also be limited further to reduce glitching on particular devices. This feature is not supported when using a callback with OpenSL ES.

The following properties are helpful for older devices to achieve optimal results.

* `setChannelConversionAllowed()` enables channel conversions. This is false by default.
* `setFormatConversionAllowed()` enables format conversions. This is false by default.
* `setSampleRateConversionQuality()` enables sample rate conversions. This defaults to SampleRateConversionQuality::Medium.

Many of the stream's properties may vary (whether or not you set them) depending on the capabilities of the audio device and the Android device on which it's running. If you need to know these values then you must query them using the accessor after the stream has been opened. Additionally, the underlying parameters a stream is granted are useful to know if they have been left unspecified. As a matter of good defensive programming, you should check the stream's configuration before using it.

There are functions to retrieve the stream setting that corresponds to each builder setting:

| AudioStreamBuilder set methods | AudioStream get methods |
| :------------------------ | :----------------- |
| `setDataCallback()` | `getDataCallback()` |
| `setErrorCallback()` | `getErrorCallback()` |
| `setDirection()` | `getDirection()` |
| `setSharingMode()` | `getSharingMode()` |
| `setPerformanceMode()` | `getPerformanceMode()` |
| `setSampleRate()` | `getSampleRate()` |
| `setChannelCount()` | `getChannelCount()` |
| `setFormat()` | `getFormat()` |
| `setBufferCapacityInFrames()` | `getBufferCapacityInFrames()` |
| `setFramesPerDataCallback()` | `getFramesPerDataCallback()` |
| -- | `getFramesPerBurst()` |
| `setDeviceId()` (not respected on OpenSLES) | `getDeviceId()` |
| `setAudioApi()` (mainly for debugging) | `getAudioApi()` |
| `setChannelConversionAllowed()` | `isChannelConversionAllowed()` |
| `setFormatConversionAllowed()` | `isFormatConversionAllowed()` |
| `setSampleRateConversionQuality()` | `getSampleRateConversionQuality()` |
### AAudio specific AudioStreamBuilder fields

Some AudioStreamBuilder fields are only applied to AAudio.

The following AudioStreamBuilder fields were added in API 28 to specify additional information about the AudioStream to the device. Currently, they have little effect on the stream, but setting them helps applications interact better with other services.

For more information see: [Usage/ContentTypes](https://source.android.com/devices/audio/attributes).
The InputPreset may be used by the device to process the input stream (such as gain control). By default it is set to VoiceRecognition, which is optimized for low latency.

* `setUsage(oboe::Usage usage)` - The purpose for creating the stream.
* `setContentType(oboe::ContentType contentType)` - The type of content carried by the stream.
* `setInputPreset(oboe::InputPreset inputPreset)` - The recording configuration for an audio input.
* `setSessionId(oboe::SessionId sessionId)` - Allocate a SessionID to connect to the Java AudioEffects API.

In API 29, `setAllowedCapturePolicy(oboe::AllowedCapturePolicy allowedCapturePolicy)` was added. This specifies whether this stream's audio may or may not be captured by other apps or the system.

In API 30, `setPrivacySensitiveMode(oboe::PrivacySensitiveMode privacySensitiveMode)` was added. Concurrent capture is not permitted for privacy sensitive input streams.

In API 31, the following APIs were added:
* `setPackageName(std::string packageName)` - Declare the name of the package creating the stream. The default, if you do not call this function, is a random package in the calling uid.
* `setAttributionTag(std::string attributionTag)` - Declare the attribution tag of the context creating the stream. Attribution can be used in complex apps to logically separate parts of the app.

In API 32, the following APIs were added:
* `setIsContentSpatialized(bool isContentSpatialized)` - Marks that the content is already spatialized to prevent double-processing.
* `setSpatializationBehavior(oboe::SpatializationBehavior spatializationBehavior)` - Marks what the default spatialization behavior should be.
* `setChannelMask(oboe::ChannelMask)` - Requests a specific channel mask. The number of channels may be different from `setChannelCount()`. Whichever of the two functions is called last takes precedence if both are called.

In API 34, the following APIs were added to streams to get properties of the hardware.
* `getHardwareChannelCount()`
* `getHardwareSampleRate()`
* `getHardwareFormat()`
## Using an audio stream

### State transitions

An Oboe stream is usually in one of five stable states (the error state, Disconnected, is described at the end of this section):

* Open
* Started
* Paused
* Flushed
* Stopped

Data only flows through a stream when the stream is in the Started state. To move a stream between states, use one of the functions that request a state transition:

    Result result;
    result = stream->requestStart();
    result = stream->requestStop();
    result = stream->requestPause();
    result = stream->requestFlush();

Note that you can only request pause or flush on an output stream.

These functions are asynchronous, and the state change doesn't happen immediately. When you request a state change, the stream moves to one of the corresponding transient states:

* Starting
* Pausing
* Flushing
* Stopping
* Closing

The state diagram below shows the stable states as rounded rectangles, and the transient states as dotted rectangles.
Though it's not shown, you can call `close()` from any state.


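The request → transient → stable flow described above can be modeled as a small state machine. This toy model (not Oboe source, and with the Closed/Disconnected states omitted) only illustrates that a request moves the stream into a transient state, which the audio system later resolves into a stable one:

```cpp
// Toy model of the stream lifecycle: a request moves the stream to a
// transient state, and the audio system later completes the transition.
enum class StreamState { Open, Starting, Started, Pausing, Paused,
                         Flushing, Flushed, Stopping, Stopped };

struct ToyStream {
    StreamState state = StreamState::Open;

    void requestStart() { state = StreamState::Starting; }  // asynchronous
    void requestPause() { state = StreamState::Pausing; }   // output streams only

    // In the real API the framework performs this step in the background.
    void completeTransition() {
        switch (state) {
            case StreamState::Starting: state = StreamState::Started; break;
            case StreamState::Pausing:  state = StreamState::Paused;  break;
            case StreamState::Flushing: state = StreamState::Flushed; break;
            case StreamState::Stopping: state = StreamState::Stopped; break;
            default: break;  // already in a stable state
        }
    }
};
```

This is also why `waitForStateChange()` (described next) waits for the state to *leave* the transient state rather than to *reach* a specific stable one.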
Oboe doesn't provide callbacks to alert you to state changes. One special function, `AudioStream::waitForStateChange()`, can be used to wait for a state change.
Note that most apps will not need to call `waitForStateChange()` and can just request state changes whenever they are needed.

The function does not detect a state change on its own, and does not wait for a specific state. It waits until the current state is *different* from `inputState`, which you specify.

For example, after requesting to pause, a stream should immediately enter the transient state Pausing, and arrive sometime later at the Paused state - though there's no guarantee it will. Since you can't wait for the Paused state, use `waitForStateChange()` to wait for *any state other than Pausing*. Here's how that's done:

```
StreamState inputState = StreamState::Pausing;
StreamState nextState = StreamState::Uninitialized;
int64_t timeoutNanos = 100 * kNanosPerMillisecond;
result = stream->requestPause();
result = stream->waitForStateChange(inputState, &nextState, timeoutNanos);
```

If the stream's state is not Pausing (the `inputState`, which we assumed was the current state at call time), the function returns immediately. Otherwise, it blocks until the state is no longer Pausing or the timeout expires. When the function returns, the parameter `nextState` shows the current state of the stream.

You can use this same technique after calling request start, stop, or flush, using the corresponding transient state as the inputState. Do not call `waitForStateChange()` after calling `AudioStream::close()` since the underlying stream resources will be deleted as soon as it closes. And do not call `close()` while `waitForStateChange()` is running in another thread.

### Reading and writing to an audio stream

There are two ways to move data in or out of a stream.
1) Read from or write directly to the stream.
2) Specify a data callback object that will get called when the stream is ready.

The callback technique offers the lowest latency performance because the callback code can run in a high priority thread. Also, attempting to open a low latency output stream without an audio callback (with the intent to use writes) may result in a non low latency stream.

The read/write technique may be easier when you do not need low latency. Or, when doing both input and output, it is common to use a callback for output and then just do a non-blocking read from the input stream. Then you have both the input and output data available in one high priority thread.

After the stream is started you can read or write to it using the methods `AudioStream::read(buffer, numFrames, timeoutNanos)` and `AudioStream::write(buffer, numFrames, timeoutNanos)`.

For a blocking read or write that transfers the specified number of frames, set timeoutNanos greater than zero. For a non-blocking call, set timeoutNanos to zero. In this case the result is the actual number of frames transferred.

When you read input, you should verify that the correct number of frames was read. If not, the buffer might contain unknown data that could cause an audio glitch. You can pad the buffer with zeros to create a silent dropout:

    Result result = mStream->read(audioData, numFrames, timeout);
    if (result < 0) {
        // Error!
    }
    if (result != numFrames) {
        // pad the buffer with zeros
        memset(static_cast<sample_type*>(audioData) + result * samplesPerFrame, 0,
               (numFrames - result) * mStream->getBytesPerFrame());
    }

You can prime the stream's buffer before starting the stream by writing data or silence into it. This must be done in a non-blocking call with timeoutNanos set to zero.

The data in the buffer must match the data format returned by `mStream->getDataFormat()`.
### Closing an audio stream
|
||||
|
||||
When you are finished using a stream, close it:
|
||||
|
||||
stream->close();
|
||||
|
||||
Do not close a stream while it is being written to or read from another thread as this will cause your app to crash. After you close a stream you should not call any of its methods except for quering it properties.
|
||||
|
||||
### Disconnected audio stream

An audio stream can become disconnected at any time if one of these events happens:

* The associated audio device is no longer connected (for example, when headphones are unplugged).
* An error occurs internally.
* An audio device is no longer the primary audio device.

When a stream is disconnected, it has the state "Disconnected" and calls to `write()` or other functions will return `Result::ErrorDisconnected`. When a stream is disconnected, all you can do is close it.

If you need to be informed when an audio device is disconnected, write a class which extends `AudioStreamErrorCallback` and then register your class using `builder.setErrorCallback(yourCallbackClass)`. It is recommended to pass a `shared_ptr`. If you register a callback, then it will automatically close the stream in a separate thread if the stream is disconnected.
Note that error callbacks will only be called when a data callback has been specified and the stream is started. If you are not using a data callback then the `read()`, `write()` and `requestStart()` methods will return errors if the stream is disconnected.
Your error callback can implement the following methods (called in a separate thread):

* `onErrorBeforeClose(stream, error)` - called when the stream has been disconnected but not yet closed, so you can still reference the underlying stream (e.g. `getXRunCount()`). You can also inform any other threads that may be calling the stream to stop doing so. Do not delete the stream or modify its stream state in this callback.
* `onErrorAfterClose(stream, error)` - called when the stream has been stopped and closed by Oboe, so the stream cannot be used and `getState()` will return `StreamState::Closed`. During this callback, stream properties (those requested by the builder) can be queried, as well as frames written and read. The stream can be deleted at the end of this method (as long as it is not referenced in other threads). Methods that reference the underlying stream should not be called (e.g. `getTimestamp()`, `getXRunCount()`, `read()`, `write()`, etc.). Opening a separate stream is also a valid use of this callback, especially if the error received is `Error::Disconnected`. However, it is important to note that the new audio device may have vastly different properties than the stream that was disconnected.

See the SoundBoard sample for an example of `setErrorCallback`.
## Optimizing performance

You can optimize the performance of an audio application by using special high-priority threads.

### Using a high priority data callback

If your app reads or writes audio data from an ordinary thread, it may be preempted or experience timing jitter. This can cause audio glitches. Using larger buffers might guard against such glitches, but a large buffer also introduces longer audio latency. For applications that require low latency, an audio stream can use an asynchronous callback function to transfer data to and from your app. The callback runs in a high-priority thread that has better performance.

Your code can access the callback mechanism by implementing the virtual class `AudioStreamDataCallback`. The stream periodically executes `onAudioReady()` (the callback function) to acquire the data for its next burst.

The total number of samples that you need to fill is `numFrames * numChannels`.
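That interleaved layout can be sketched independently of the stream API. The helper below is illustrative (not part of Oboe): frames are stored back to back, each frame holding one sample per channel:

```cpp
#include <cassert>

// Index of a single sample within an interleaved audio buffer.
int sampleIndex(int frameIndex, int channelIndex, int numChannels) {
    return frameIndex * numChannels + channelIndex;
}
```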
    class AudioEngine : public AudioStreamDataCallback {
    public:
        DataCallbackResult onAudioReady(
                AudioStream *oboeStream,
                void *audioData,
                int32_t numFrames) override {
            // Fill the output buffer with random white noise.
            const int numChannels = oboeStream->getChannelCount();
            // This code assumes the format is AudioFormat::Float.
            float *output = (float *) audioData;
            for (int frameIndex = 0; frameIndex < numFrames; frameIndex++) {
                for (int channelIndex = 0; channelIndex < numChannels; channelIndex++) {
                    float noise = (float)(drand48() - 0.5);
                    *output++ = noise;
                }
            }
            return DataCallbackResult::Continue;
        }

        bool start() {
            ...
            // register the callback
            streamBuilder.setDataCallback(this);
        }

    private:
        // application data goes here
    };

Note that the callback must be registered on the stream with `setDataCallback`. Any application-specific data can be included within the class itself.

The callback function should not perform a read or write on the stream that invoked it. If the callback belongs to an input stream, your code should process the data that is supplied in the `audioData` buffer (specified as the second argument). If the callback belongs to an output stream, your code should place data into the buffer.

It is possible to process more than one stream in the callback. You can use one stream as the master, and pass pointers to other streams in the class's private data. Register a callback for the master stream, then use non-blocking I/O on the other streams. Here is an example of a round-trip callback that passes an input stream to an output stream. The master calling stream is the output stream; the input stream is included in the class.

The callback does a non-blocking read from the input stream, placing the data into the buffer of the output stream.

    class AudioEngine : public AudioStreamDataCallback {
    public:

        DataCallbackResult onAudioReady(
                AudioStream *oboeStream,
                void *audioData,
                int32_t numFrames) override {
            const int64_t timeoutNanos = 0; // for a non-blocking read
            auto result = recordingStream->read(audioData, numFrames, timeoutNanos);
            // result has type ResultWithValue<int32_t>, which for convenience is coerced
            // to a Result type when compared with another Result.
            if (result == Result::OK) {
                if (result.value() < numFrames) {
                    // replace the missing data with silence
                    memset(static_cast<sample_type*>(audioData) + result.value() * samplesPerFrame, 0,
                           (numFrames - result.value()) * oboeStream->getBytesPerFrame());
                }
                return DataCallbackResult::Continue;
            }
            return DataCallbackResult::Stop;
        }

        bool start() {
            ...
            streamBuilder.setDataCallback(this);
        }

        void setRecordingStream(AudioStream *stream) {
            recordingStream = stream;
        }

    private:
        AudioStream *recordingStream;
    };

Note that this example assumes the input and output streams have the same number of channels, format and sample rate. The formats of the streams can be mismatched, as long as the code handles the translation properly.
#### Data Callback - Do's and Don'ts
You should never perform an operation which could block inside `onAudioReady`. Examples of blocking operations include:

- allocating memory using, for example, `malloc()` or `new`
- file operations such as opening, closing, reading or writing
- network operations such as streaming
- using mutexes or other synchronization primitives
- sleeping
- stopping or closing the stream
- calling `read()` or `write()` on the stream which invoked it

The following methods are OK to call:

- `AudioStream::get*()`
- `oboe::convertResultToText()`
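Since mutexes are off-limits in the callback, state shared with other threads is usually communicated through atomics instead. The sketch below (a hypothetical `PlaybackGate` class, not part of the Oboe API) shows the pattern: a UI or control thread flips a lock-free flag, and the audio callback polls it without ever blocking:

```cpp
#include <atomic>
#include <cassert>

// A lock-free on/off switch. A control thread calls setPlaying();
// a real-time audio callback calls isPlaying() and never blocks.
class PlaybackGate {
public:
    void setPlaying(bool playing) {
        mPlaying.store(playing, std::memory_order_release);
    }
    bool isPlaying() const {
        return mPlaying.load(std::memory_order_acquire);
    }
private:
    std::atomic<bool> mPlaying{false};
};
```

Inside `onAudioReady` you would check `isPlaying()` and render silence when it returns false, rather than stopping or closing the stream from the callback thread.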
### Setting performance mode

Every `AudioStream` has a *performance mode* which has a large effect on your app's behavior. There are three modes:

* `PerformanceMode::None` is the default mode. It uses a basic stream that balances latency and power savings.
* `PerformanceMode::LowLatency` uses smaller buffers and an optimized data path for reduced latency.
* `PerformanceMode::PowerSaving` uses larger internal buffers and a data path that trades off latency for lower power.

You can select the performance mode by calling `setPerformanceMode()`, and discover the current mode by calling `getPerformanceMode()`.

If low latency is more important than power savings in your application, use `PerformanceMode::LowLatency`. This is useful for apps that are very interactive, such as games or keyboard synthesizers.

If saving power is more important than low latency in your application, use `PerformanceMode::PowerSaving`. This is typical for apps that play back previously generated music, such as streaming audio or MIDI file players.

In the current version of Oboe, in order to achieve the lowest possible latency you must use the `PerformanceMode::LowLatency` performance mode along with a high-priority data callback. Follow this example:

```
// Create a callback object
MyOboeStreamCallback myCallback;

// Create a stream builder
AudioStreamBuilder builder;
builder.setDataCallback(&myCallback);
builder.setPerformanceMode(PerformanceMode::LowLatency);
```

## Thread safety

The Oboe API is not completely [thread safe](https://en.wikipedia.org/wiki/Thread_safety). You cannot call some of the Oboe functions concurrently from more than one thread at a time. This is because Oboe avoids using mutexes, which can cause thread preemption and glitches.

To be safe, don't call `waitForStateChange()` or read or write to the same stream from two different threads. Similarly, don't close a stream in one thread while reading or writing to it in another thread.

Calls that return stream settings, like `AudioStream::getSampleRate()` and `AudioStream::getChannelCount()`, are thread safe.

These calls are also thread safe:

* `convertToText()`
* `AudioStream::get*()`, except for `getTimestamp()` and `getState()`

<b>Note:</b> When a stream uses an error callback, it's safe to read/write from the callback thread while also closing the stream from the thread in which it is running.

## Code samples

Code samples are available in the [samples folder](../samples).

## Known Issues

The following methods are defined, but will return `Result::ErrorUnimplemented` for OpenSLES streams:

* `getFramesRead()`
* `getFramesWritten()`
* `getTimestamp()`

Additionally, `setDeviceId()` will not be respected by OpenSLES streams.
376
externals/oboe/docs/GettingStarted.md
vendored
Normal file
# Adding Oboe to your project

Oboe is a C++ library, so your Android Studio project will need to [support native C++ code](https://developer.android.com/studio/projects/add-native-code).

There are two ways to use Oboe in your Android Studio project:

1) **Use the Oboe pre-built library binaries and headers.** Use this approach if you just want to use a stable version of the Oboe library in your project.

or

2) **Build Oboe from source.** Use this approach if you would like to debug or make changes to the Oboe source code and contribute back to the project.

## Option 1) Using pre-built binaries and headers

Oboe is distributed as a [prefab](https://github.com/google/prefab) package via [Google Maven](https://maven.google.com/web/index.html) (search for "oboe"). [Prefab support was added](https://android-developers.googleblog.com/2020/02/native-dependencies-in-android-studio-40.html) in [Android Studio 4.0](https://developer.android.com/studio), so you'll need to be using that version of Android Studio or above.

Add the oboe dependency to your app's `build.gradle` file. Replace "X.X.X" with the [latest stable version](https://github.com/google/oboe/releases/) of Oboe:

    dependencies {
        implementation 'com.google.oboe:oboe:X.X.X'
    }

For `build.gradle.kts` add parentheses:

    implementation("com.google.oboe:oboe:X.X.X")

Also enable prefab by adding:

    android {
        buildFeatures {
            prefab true
        }
    }

For `build.gradle.kts` add an equal sign:

    prefab = true

Include and link to oboe by updating your `CMakeLists.txt`:

    find_package (oboe REQUIRED CONFIG)
    target_link_libraries(native-lib oboe::oboe) # You may have other libraries here such as `log`.

Here's a complete example `CMakeLists.txt` file:

    cmake_minimum_required(VERSION 3.4.1)

    # Build our own native library
    add_library (native-lib SHARED native-lib.cpp)

    # Find the Oboe package
    find_package (oboe REQUIRED CONFIG)

    # Specify the libraries which our native library is dependent on, including Oboe
    target_link_libraries(native-lib log oboe::oboe)

Configure your app to use the shared STL by updating your `app/build.gradle`:

    android {
        defaultConfig {
            externalNativeBuild {
                cmake {
                    arguments "-DANDROID_STL=c++_shared"
                }
            }
        }
    }

For `app/build.gradle.kts` add parentheses:

    arguments("-DANDROID_STL=c++_shared")

## Option 2) Building from source

### 1. Clone the github repository
Start by cloning the [latest stable release](https://github.com/google/oboe/releases/) of the Oboe repository, for example:

    git clone -b 1.6-stable https://github.com/google/oboe

**Make a note of the path which you cloned oboe into - you will need it shortly.**

If you use git as your version control system, consider adding Oboe as a [submodule](https://gist.github.com/gitaarik/8735255) (underneath your cpp directory):

```
git submodule add https://github.com/google/oboe
```

This makes it easier to integrate updates to Oboe into your app, as well as to contribute to the Oboe project.
### 2. Update CMakeLists.txt
Open your app's `CMakeLists.txt`. This can be found under `External Build Files` in the Android project view. If you don't have a `CMakeLists.txt` you will need to [add C++ support to your project](https://developer.android.com/studio/projects/add-native-code).

Now add the following commands to the end of `CMakeLists.txt`. **Remember to update `**PATH TO OBOE**` with your local Oboe path from the previous step**:

    # Set the path to the Oboe directory.
    set (OBOE_DIR ***PATH TO OBOE***)

    # Add the Oboe library as a subdirectory in your project.
    # add_subdirectory tells CMake to look in this directory to
    # compile oboe source files using oboe's CMake file.
    # ./oboe specifies where the compiled binaries will be stored
    add_subdirectory (${OBOE_DIR} ./oboe)

    # Specify the path to the Oboe header files.
    # This allows targets compiled with this CMake (application code)
    # to see public Oboe headers, in order to access its API.
    include_directories (${OBOE_DIR}/include)

In the same file find the [`target_link_libraries`](https://cmake.org/cmake/help/latest/command/target_link_libraries.html) command. Add `oboe` to the list of libraries which your app's library depends on. For example:

    target_link_libraries(native-lib oboe)

Here's a complete example `CMakeLists.txt` file:

    cmake_minimum_required(VERSION 3.4.1)

    # Build our own native library
    add_library (native-lib SHARED native-lib.cpp)

    # Build the Oboe library
    set (OBOE_DIR ./oboe)
    add_subdirectory (${OBOE_DIR} ./oboe)

    # Make the Oboe public headers available to our app
    include_directories (${OBOE_DIR}/include)

    # Specify the libraries which our native library is dependent on, including Oboe
    target_link_libraries (native-lib log oboe)

Now go to `Build->Refresh Linked C++ Projects` to have Android Studio index the Oboe library.

Verify that your project builds correctly. If you have any issues building please [report them here](issues/new).
# Using Oboe
Once you've added Oboe to your project you can start using Oboe's features. The simplest, and probably most common, thing you'll do in Oboe is create an audio stream.

## Creating an audio stream
Include the Oboe header:

    #include <oboe/Oboe.h>

Streams are built using an `AudioStreamBuilder`. Create one like this:

    oboe::AudioStreamBuilder builder;

Use the builder's set methods to set properties on the stream (you can read more about these properties in the [full guide](FullGuide.md)):

    builder.setDirection(oboe::Direction::Output);
    builder.setPerformanceMode(oboe::PerformanceMode::LowLatency);
    builder.setSharingMode(oboe::SharingMode::Exclusive);
    builder.setFormat(oboe::AudioFormat::Float);
    builder.setChannelCount(oboe::ChannelCount::Mono);

The builder's set methods return a pointer to the builder, so they can easily be chained:

```
oboe::AudioStreamBuilder builder;
builder.setPerformanceMode(oboe::PerformanceMode::LowLatency)
  ->setSharingMode(oboe::SharingMode::Exclusive)
  ->setDataCallback(myCallback)
  ->setFormat(oboe::AudioFormat::Float);
```

Define an `AudioStreamDataCallback` class to receive callbacks whenever the stream requires new data.

    class MyCallback : public oboe::AudioStreamDataCallback {
    public:
        oboe::DataCallbackResult
        onAudioReady(oboe::AudioStream *audioStream, void *audioData, int32_t numFrames) override {

            // We requested AudioFormat::Float, so if the stream opens
            // we know we got the Float format.
            // If you do not specify a format then you should check what format
            // the stream has and cast to the appropriate type.
            auto *outputData = static_cast<float *>(audioData);

            // Generate random numbers (white noise) centered around zero.
            const float amplitude = 0.2f;
            for (int i = 0; i < numFrames; ++i){
                outputData[i] = ((float)drand48() - 0.5f) * 2 * amplitude;
            }

            return oboe::DataCallbackResult::Continue;
        }
    };

You can find examples of how to play sound using digital synthesis and pre-recorded audio in the [code samples](../samples).
Declare your callback somewhere that it won't get deleted while you are using it.

    MyCallback myCallback;

Supply this callback class to the builder:

    builder.setDataCallback(&myCallback);

Declare a shared pointer for the stream. Make sure it is declared with the appropriate scope. The best place is as a member variable in a managing class or as a global. Avoid declaring it as a local variable because the stream may get deleted when the function returns.

    std::shared_ptr<oboe::AudioStream> mStream;

Open the stream:

    oboe::Result result = builder.openStream(mStream);

Check the result to make sure the stream was opened successfully. Oboe has a convenience method for converting its types into human-readable strings called `oboe::convertToText`:

    if (result != oboe::Result::OK) {
        LOGE("Failed to create stream. Error: %s", oboe::convertToText(result));
    }

Note that this sample code uses the [logging macros from here](https://github.com/googlesamples/android-audio-high-performance/blob/master/debug-utils/logging_macros.h).
## Playing audio
Check the properties of the created stream. If you did not specify a channelCount, sampleRate, or format then you need to query the stream to see what you got. The **format** property will dictate the `audioData` type in the `AudioStreamDataCallback::onAudioReady` callback. If you did specify any of those three properties then you will get what you requested.

    oboe::AudioFormat format = mStream->getFormat();
    LOGI("AudioStream format is %s", oboe::convertToText(format));

Now start the stream:

    mStream->requestStart();

At this point you should start receiving callbacks.

To stop receiving callbacks call:

    mStream->requestStop();

## Closing the stream
It is important to close your stream when you're not using it to avoid hogging audio resources which other apps could use. This is particularly true when using `SharingMode::Exclusive` because you might prevent other apps from obtaining a low latency audio stream.

Streams should be explicitly closed when the app is no longer playing audio:

    mStream->close();

`close()` is a blocking call which also stops the stream.

For apps which only play or record audio when they are in the foreground, this is usually done when [`Activity.onPause()`](https://developer.android.com/guide/components/activities/activity-lifecycle#onpause) is called.
## Reconfiguring streams
After closing, in order to change the configuration of the stream, simply call `openStream` again. The existing stream is deleted and a new stream is built, populating the `mStream` variable.
```
// Modify the builder with some additional properties at runtime.
builder.setDeviceId(MY_DEVICE_ID);
// Re-open the stream with some additional config.
// The old AudioStream is automatically deleted.
builder.openStream(mStream);
```
## Example

The following class is a complete implementation of an audio player that renders a sine wave.
```
#include <oboe/Oboe.h>
#include <math.h>
#include <mutex>
using namespace oboe;

class OboeSinePlayer: public oboe::AudioStreamDataCallback {
public:

    virtual ~OboeSinePlayer() = default;

    // Call this from Activity onResume()
    int32_t startAudio() {
        std::lock_guard<std::mutex> lock(mLock);
        oboe::AudioStreamBuilder builder;
        // The builder set methods can be chained for convenience.
        Result result = builder.setSharingMode(oboe::SharingMode::Exclusive)
                ->setPerformanceMode(oboe::PerformanceMode::LowLatency)
                ->setChannelCount(kChannelCount)
                ->setSampleRate(kSampleRate)
                ->setSampleRateConversionQuality(oboe::SampleRateConversionQuality::Medium)
                ->setFormat(oboe::AudioFormat::Float)
                ->setDataCallback(this)
                ->openStream(mStream);
        if (result != Result::OK) return (int32_t) result;

        // Typically, start the stream after querying some stream information,
        // as well as some input from the user.
        result = mStream->requestStart();
        return (int32_t) result;
    }

    // Call this from Activity onPause()
    void stopAudio() {
        // Stop, close and delete in case not already closed.
        std::lock_guard<std::mutex> lock(mLock);
        if (mStream) {
            mStream->stop();
            mStream->close();
            mStream.reset();
        }
    }

    oboe::DataCallbackResult onAudioReady(oboe::AudioStream *oboeStream, void *audioData, int32_t numFrames) override {
        float *floatData = (float *) audioData;
        for (int i = 0; i < numFrames; ++i) {
            float sampleValue = kAmplitude * sinf(mPhase);
            for (int j = 0; j < kChannelCount; j++) {
                floatData[i * kChannelCount + j] = sampleValue;
            }
            mPhase += mPhaseIncrement;
            if (mPhase >= kTwoPi) mPhase -= kTwoPi;
        }
        return oboe::DataCallbackResult::Continue;
    }

private:
    std::mutex mLock;
    std::shared_ptr<oboe::AudioStream> mStream;

    // Stream params
    static int constexpr kChannelCount = 2;
    static int constexpr kSampleRate = 48000;
    // Wave params; these could be instance variables in order to modify them at runtime.
    static float constexpr kAmplitude = 0.5f;
    static float constexpr kFrequency = 440;
    static float constexpr kPI = M_PI;
    static float constexpr kTwoPi = kPI * 2;
    static double constexpr mPhaseIncrement = kFrequency * kTwoPi / (double) kSampleRate;
    // Keeps track of where the wave is
    float mPhase = 0.0;
};
```
Note that this implementation computes sine values at run-time for simplicity, rather than pre-computing them. Additionally, best practice is to implement a separate data callback class, rather than managing the stream and defining its data callback in the same class.
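The phase arithmetic in `onAudioReady` above can be exercised on its own. A self-contained sketch of the same accumulate-and-wrap step (the `advancePhase` helper is illustrative, shown with the example's 440 Hz at 48000 Hz):

```cpp
#include <cassert>
#include <cmath>

// Advance a sine oscillator's phase by one frame and wrap it at 2*pi,
// so the phase never grows unbounded and loses float precision.
float advancePhase(float phase, float frequency, float sampleRate) {
    const float kTwoPi = 2.0f * static_cast<float>(M_PI);
    phase += frequency * kTwoPi / sampleRate;
    if (phase >= kTwoPi) phase -= kTwoPi;
    return phase;
}
```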
For more examples on how to use Oboe look in the [samples](https://github.com/google/oboe/tree/main/samples) folder.

## Obtaining optimal latency
One of the goals of the Oboe library is to provide low latency audio streams on the widest range of hardware configurations. When a stream is opened using AAudio, the optimal sample rate will be chosen unless the app requests a specific rate. The framesPerBurst is also provided by AAudio.

But OpenSL ES cannot determine those values, so applications should query them using Java and then pass them to Oboe. They will be used for OpenSL ES streams on older devices.

Here's a code sample showing how to set these default values.

*MainActivity.java*

    if (Build.VERSION.SDK_INT >= Build.VERSION_CODES.JELLY_BEAN_MR1){
        AudioManager myAudioMgr = (AudioManager) context.getSystemService(Context.AUDIO_SERVICE);
        String sampleRateStr = myAudioMgr.getProperty(AudioManager.PROPERTY_OUTPUT_SAMPLE_RATE);
        int defaultSampleRate = Integer.parseInt(sampleRateStr);
        String framesPerBurstStr = myAudioMgr.getProperty(AudioManager.PROPERTY_OUTPUT_FRAMES_PER_BUFFER);
        int defaultFramesPerBurst = Integer.parseInt(framesPerBurstStr);

        native_setDefaultStreamValues(defaultSampleRate, defaultFramesPerBurst);
    }

*jni-bridge.cpp*

    JNIEXPORT void JNICALL
    Java_com_google_sample_oboe_hellooboe_MainActivity_native_1setDefaultStreamValues(JNIEnv *env,
                                                                                      jclass type,
                                                                                      jint sampleRate,
                                                                                      jint framesPerBurst) {
        oboe::DefaultStreamValues::SampleRate = (int32_t) sampleRate;
        oboe::DefaultStreamValues::FramesPerBurst = (int32_t) framesPerBurst;
    }

Note that the values from Java are for built-in audio devices. Peripheral devices, such as Bluetooth, may need larger framesPerBurst.
# Further information
- [Code samples](https://github.com/google/oboe/tree/main/samples)
- [Full guide to Oboe](FullGuide.md)
169
externals/oboe/docs/OpenSLESMigration.md
vendored
Normal file
OpenSLES Migration Guide
===

# Introduction

This guide will show you how to migrate your code from [OpenSL ES for Android](https://developer.android.com/ndk/guides/audio/opensl/opensl-for-android) (just OpenSL from now on) to Oboe.

To familiarise yourself with Oboe, please read the [Getting Started guide](https://github.com/google/oboe/blob/main/docs/GettingStarted.md) and ensure that Oboe has been added as a dependency in your project.

# Concepts

At a high level, OpenSL and Oboe have some similarities. They both create objects which communicate with an audio device capable of playing or recording audio samples, and they both use a callback mechanism to read data from or write data to that audio device.

This is where the similarities end. Oboe has been designed to be a simpler, easier to use API than OpenSL. It aims to reduce the amount of boilerplate code and guesswork associated with recording and playing audio.
# Key differences

## Object mappings

OpenSL uses an audio engine object, created using `slCreateEngine`, to create other objects. Oboe's equivalent object is `AudioStreamBuilder`, although it will only create an `AudioStream`.

OpenSL uses audio player and audio recorder objects to communicate with audio devices. In Oboe an `AudioStream` is used.

In OpenSL the audio callback mechanism is a user-defined function which is called each time a buffer is enqueued. In Oboe you construct an `AudioStreamDataCallback` object, and its `onAudioReady` method is called each time audio data is ready to be read or written.

Here's a table which summarizes the object mappings:

<table>
  <tr>
   <td><strong>OpenSL</strong>
   </td>
   <td><strong>Oboe </strong>(all classes are in the <code>oboe</code> namespace)
   </td>
  </tr>
  <tr>
   <td>Audio engine (an <code>SLObjectItf</code>)
   </td>
   <td><code>AudioStreamBuilder</code>
   </td>
  </tr>
  <tr>
   <td>Audio player
   </td>
   <td><code>AudioStream</code> configured for output
   </td>
  </tr>
  <tr>
   <td>Audio recorder
   </td>
   <td><code>AudioStream</code> configured for input
   </td>
  </tr>
  <tr>
   <td>Callback function
   </td>
   <td><code>AudioStreamDataCallback::onAudioReady</code>
   </td>
  </tr>
</table>

## Buffers and callbacks

In OpenSL your app must create and manage a queue of buffers. Each time a buffer is dequeued, the callback function is called and your app must enqueue a new buffer.

In Oboe, rather than owning and enqueuing buffers, you are given direct access to the `AudioStream`'s buffer through the `audioData` parameter of `onAudioReady`. This is a container array which you can read audio data from when recording, or write data into when playing. The `numFrames` parameter tells you how many frames to read or write. Here's the method signature of `onAudioReady`:

```
DataCallbackResult onAudioReady(
    AudioStream *oboeStream,
    void *audioData,
    int32_t numFrames
)
```

You supply your implementation of `onAudioReady` when building the audio stream by constructing an `AudioStreamDataCallback` object. [Here's an example.](https://github.com/google/oboe/blob/main/docs/GettingStarted.md#creating-an-audio-stream)
### Buffer sizes

In OpenSL you cannot specify the size of the internal buffers of the audio player/recorder, because your app supplies those buffers and they can be of arbitrary size. You can only specify the _number of buffers_ through the `SLDataLocator_AndroidSimpleBufferQueue.numBuffers` field.

By contrast, Oboe uses the information it has about the current audio device to configure its buffer size. It determines the optimal number of audio frames to read or write in a single callback. This is known as a _burst_, and usually represents the minimum possible buffer size. Typical values are 96, 128, 192 and 240 frames.

An audio stream's burst size, given by `AudioStream::getFramesPerBurst()`, is important because it is used when configuring the buffer size. Here's an example which uses two bursts for the buffer size, which usually represents a good tradeoff between latency and glitch protection:

```
audioStream.setBufferSizeInFrames(audioStream.getFramesPerBurst() * 2);
```

**Note:** on older devices Oboe uses OpenSL ES under the hood, which does not provide the same information about audio devices, so Oboe still needs to know [sensible default values for the burst to be used with OpenSL](https://github.com/google/oboe/blob/main/docs/GettingStarted.md#obtaining-optimal-latency).

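The latency cost of a given buffer size is easy to estimate from the burst size and sample rate. Here is a small sketch of that arithmetic (the function name is ours, not Oboe's):

```cpp
#include <cstdint>

// Estimated latency, in milliseconds, contributed by a buffer of
// numBursts * framesPerBurst frames. Illustrative only: total output
// latency also includes device and mixer delays beyond this buffer.
double bufferLatencyMs(int32_t framesPerBurst, int32_t numBursts, int32_t sampleRateHz) {
    return (static_cast<double>(framesPerBurst) * numBursts * 1000.0) / sampleRateHz;
}
```

For example, two bursts of 192 frames at 48000 Hz contribute 8 ms of buffering, which is why a two-burst buffer is usually a reasonable latency/glitch tradeoff.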
## Audio stream properties

In OpenSL you must explicitly specify various properties, including the sample rate and audio format, when opening an audio player or audio recorder.

In Oboe, you do not need to specify any properties to open a stream. For example, this will open a valid output `AudioStream` with sensible default values:

```
AudioStreamBuilder builder;
builder.openStream(myStream);
```

However, you may want to specify some properties. These are set using the `AudioStreamBuilder` ([example](https://github.com/google/oboe/blob/main/docs/FullGuide.md#set-the-audio-stream-configuration-using-an-audiostreambuilder)).

## Stream disconnection

OpenSL has no mechanism, other than stopping callbacks, to indicate that an audio device has been disconnected, for example when headphones are unplugged.

In Oboe, you can be notified of stream disconnection by overriding one of the `onError` methods in `AudioStreamErrorCallback`. This allows you to clean up any resources associated with the audio stream and create a new stream with optimal properties for the current audio device ([more info](https://github.com/google/oboe/blob/main/docs/FullGuide.md#disconnected-audio-stream)).

# Unsupported features

## Formats

Oboe audio streams only accept [PCM](https://en.wikipedia.org/wiki/Pulse-code_modulation) data as floats or signed 16-bit integers. Additional formats, including 8-bit unsigned, 24-bit packed, 8.24 and 32-bit, are not supported.

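Since only these two sample formats are available, apps sometimes need to convert between them. A minimal sketch of one common convention follows (the function name and the 1/32768 scale factor are our choices, not an Oboe API):

```cpp
#include <cstddef>
#include <cstdint>

// Convert signed 16-bit PCM samples to floats in [-1.0, 1.0).
// Dividing by 32768 is one common convention; others divide by 32767.
void int16ToFloat(const int16_t *in, float *out, size_t numSamples) {
    for (size_t i = 0; i < numSamples; ++i) {
        out[i] = static_cast<float>(in[i]) / 32768.0f;
    }
}
```

Note that `AudioStreamBuilder::setFormat()` lets you request the format you prefer when opening the stream, so manual conversion is often unnecessary.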
Compressed audio, such as MP3, is not supported, for a number of reasons, chiefly:

* The OpenSL ES implementation has performance and reliability issues.
* It keeps the Oboe API and the underlying implementation simple.

Extraction and decoding can be done either through the NDK [Media APIs](https://developer.android.com/ndk/reference/group/media) or by using a third-party library like [FFmpeg](https://ffmpeg.org/). An example of both these approaches can be seen in the [RhythmGame sample](https://github.com/google/oboe/tree/main/samples/RhythmGame).

## Miscellaneous features

Oboe does **not** support the following features:

* Channel masks - only [indexed channel masks](https://developer.android.com/reference/kotlin/android/media/AudioFormat#channel-index-masks) are supported.
* Playing audio content from a file pathname or [URI](https://en.wikipedia.org/wiki/Uniform_Resource_Identifier).
* Notification callbacks for position updates.
* Platform output effects on API 27 and below. [They are supported from API 28 and above.](https://github.com/google/oboe/wiki/TechNote_Effects)

# Summary

* Replace your audio player or recorder with an `AudioStream` created using an `AudioStreamBuilder`.
* Use your value for `numBuffers` to set the audio stream's buffer size as a multiple of the burst size. For example: `audioStream.setBufferSizeInFrames(audioStream.getFramesPerBurst() * numBuffers)`.
* Create an `AudioStreamDataCallback` object and move your OpenSL callback code inside the `onAudioReady` method.
* Handle stream disconnect events by creating an `AudioStreamErrorCallback` object and overriding one of its `onError` methods.
* Pass sensible default sample rate and buffer size values to Oboe from `AudioManager` [using this method](https://github.com/google/oboe/blob/main/docs/GettingStarted.md#obtaining-optimal-latency) so that your app is still performant on older devices.

For more information please read the [Full Guide to Oboe](https://github.com/google/oboe/blob/main/docs/FullGuide.md).
11
externals/oboe/docs/PrivacyPolicy.md
vendored
Normal file

@@ -0,0 +1,11 @@
[Home](README.md)

# Oboe Privacy Policy

Oboe is a library that simply passes audio between the application and the native Android APIs.

Oboe does not collect any user data or information.

Oboe does not create Cookies.

Oboe is considered "kid safe".
11
externals/oboe/docs/README.md
vendored
Normal file

@@ -0,0 +1,11 @@
Oboe documentation
===

- [Android Audio History](AndroidAudioHistory.md)
- [API reference](https://google.github.io/oboe/)
- [Apps using Oboe](https://github.com/google/oboe/wiki/AppsUsingOboe)
- [FAQs](FAQ.md)
- [Full Guide to Oboe](FullGuide.md)
- [Getting Started with Oboe](GettingStarted.md)
- [Privacy Policy](PrivacyPolicy.md)
- [Releases](https://github.com/google/oboe/releases)
- [Wiki](https://github.com/google/oboe/wiki)
BIN
externals/oboe/docs/images/cmakelists-location-in-as.png
vendored
Normal file
Binary file not shown.
After Width: | Height: | Size: 51 KiB

BIN
externals/oboe/docs/images/getting-started-video.jpg
vendored
Normal file
Binary file not shown.
After Width: | Height: | Size: 151 KiB

BIN
externals/oboe/docs/images/oboe-lifecycle.png
vendored
Normal file
Binary file not shown.
After Width: | Height: | Size: 23 KiB

BIN
externals/oboe/docs/images/oboe-sharing-mode-exclusive.jpg
vendored
Normal file
Binary file not shown.
After Width: | Height: | Size: 10 KiB

BIN
externals/oboe/docs/images/oboe-sharing-mode-shared.jpg
vendored
Normal file
Binary file not shown.
After Width: | Height: | Size: 15 KiB
5
externals/oboe/docs/index.md
vendored
Normal file

@@ -0,0 +1,5 @@
---
layout: default
title: Home
---
Oboe is an audio library for Android.