ODIN Unreal Engine FAQs - Needs Rework

This file holds all questions and answers that need to be reworked.

These are questions that users have asked the AI Assistant and posted in our Discord channel. The answers provided here are the original responses from the ODIN AI Assistant.

Please go through each of these questions, improve the answer, transfer the question and answer to the faq.md file, and then remove the entry from this file.

Hello! Is there any way to change OdinSynth component’s Source Effect Chain at runtime (for example, muffling when under water)?

Sorry, I don’t know how to help with that. You can visit the ODIN documentation for more info.

How to implement push to talk instead of open mic?

To implement Push-To-Talk (PTT) instead of an open mic in your Unreal Engine project with ODIN, you will follow these steps:

  1. Create an Audio Capture object to get input from the microphone.

  2. Construct a Local Media object to handle the local audio media stream.

  3. Add the Audio Capture object to the Local Media by using the Add Media To Room node.

  4. Save the Audio Capture object to a variable.

  5. Instead of capturing audio from the microphone continuously, start the audio capture when the PTT button is pressed and stop it when the button is released. This way, the microphone only sends audio data while the PTT button is held.

Here is a snippet from the ODIN documentation that explains the last step, which is relevant to implementing PTT:

“Since you need to start capturing from the audio device, we save the Audio Capture object in a variable and use that later to activate the microphone. By stopping the audio capture you can implement mute very easily later or something like push to talk.”

In practical terms, you would have to do the following in Blueprints:

  • Bind a key or button press to an event that triggers the Start Audio Capture on the audio capture device.
  • Bind the corresponding key or button release to an event that triggers the Stop Audio Capture on the audio capture device.

By using these events with the bound key or button, you effectively implement Push-To-Talk functionality.

The steps above should give you a clear understanding of what needs to be done to implement PTT using the ODIN Unreal Engine SDK. The exact Blueprint setup, including images of the nodes mentioned above, can be found in the ODIN documentation.
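Assuming the Audio Capture object created in step 1 is saved in a UAudioCapture* member called AudioCapture on your player character (the class AMyPlayerCharacter, its header, and the "PushToTalk" action mapping are illustrative names for this sketch, not part of the ODIN SDK), a minimal C++ version of the two bindings could look like this:

#include "MyPlayerCharacter.h"         // hypothetical header declaring AMyPlayerCharacter and its AudioCapture member
#include "AudioCapture.h"              // UAudioCapture (requires the AudioCapture module dependency)
#include "Components/InputComponent.h"

void AMyPlayerCharacter::SetupPlayerInputComponent(UInputComponent* PlayerInputComponent)
{
    Super::SetupPlayerInputComponent(PlayerInputComponent);

    // "PushToTalk" is a hypothetical action mapping defined in the project's input settings.
    PlayerInputComponent->BindAction("PushToTalk", IE_Pressed, this, &AMyPlayerCharacter::OnPushToTalkPressed);
    PlayerInputComponent->BindAction("PushToTalk", IE_Released, this, &AMyPlayerCharacter::OnPushToTalkReleased);
}

void AMyPlayerCharacter::OnPushToTalkPressed()
{
    // Start sending microphone audio while the button is held.
    if (AudioCapture)
    {
        AudioCapture->StartCapturingAudio();
    }
}

void AMyPlayerCharacter::OnPushToTalkReleased()
{
    // Stop sending microphone audio when the button is released.
    if (AudioCapture)
    {
        AudioCapture->StopCapturingAudio();
    }
}

The Start Capturing Audio and Stop Capturing Audio calls are the C++ counterparts of the start/stop audio capture nodes used in the Blueprint setup described above.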

How to implement multiple rooms for one player to be in? Will I need separate audio captures for every one? Or is it fine to just have multiple media streams for each room from the same capture?

ODIN natively supports multi-room setups: every user can join as many rooms as they want at the same time. The documentation does not call for separate audio captures per room, so a single Audio Capture object should be sufficient; you then create a separate media stream from that shared capture for each room the user is connected to.

For further details or specific implementation guidelines in the Unreal Engine SDK, please refer to the ODIN documentation.

Using voice chat, I can hear myself, but it is not echo, as it also happens when I am alone on the server?

It sounds like you are hearing your own voice played back while using voice chat. Although the documentation does not mention a dedicated 'echo' setting or this exact scenario, there are some audio processing settings that might help.

For instance, enabling Voice Activity Detection (VAD) and setting up Volume Gate correctly helps to ensure only audio with a high enough volume (likely to be the speaker’s voice and not an echo or background noise) is transmitted. Check and adjust the following values if you have these settings enabled:

  • Attack Loudness (dBFS): Root mean square power when the volume gate should engage.
  • Release Loudness (dBFS): Root mean square power when the volume gate should disengage.

Generally, enabling features like Noise Suppression, Gain Controller, and possibly the High Pass Filter can help make the transmitted audio cleaner and may reduce the chance of echoes being transmitted.

If these do not solve the issue, please make sure that your microphone and speaker settings are configured correctly in your system to minimize echo. This typically involves reducing the volume of the speakers or using headphones, and ensuring that the microphone is not too close to the speakers.

If you need further assistance with setting values for these features or implementing additional solutions, consider contacting support for more specific guidance.

I followed the documentation on how to set up Enable 3D Audio. However, the attenuation is not working. Why is that?

If 3D Audio attenuation is not working in your setup, there are several factors to consider. The documentation outlines a few key steps when enabling 3D Audio: the correct assignment of Attenuation Settings to the Odin Synth Component and the correct positioning of these components in your scene. Here are some potential areas to check:

  1. Attenuation Settings: Confirm that the Odin Synth Components have proper Attenuation Settings assigned to them. These settings control how audio is affected by distance and other spatial attributes. If they are not set correctly, you might not experience the 3D Audio as expected.

  2. Odin Synth Component Positioning: The Odin Synth Components should be correctly attached to the Pawns that represent the respective players. This is important for ensuring that the audio positioning matches the in-game location of the players.

  3. Player Identifier and Replication: The system needs to keep track of which Odin Peer is associated with which player. This often involves replicating a unique identifier for each player, their actors, and their Odin Peer IDs across the server and clients. Check your Player Character blueprint or associated component to see if the logic has been moved there as recommended.

  4. Game Instance Map: Ensure that you have a map in your Default Game Instance blueprint that keeps track of the actors, with a GUID type for the keys and your Default Player Character class for the values. This map is used to track the correct assignment of Odin Synth Components to Player Character objects (see the sketch below).

  5. Execution Order: It is also crucial that the Start Audio Capture node is only executed after successfully creating the audio capture with Create Audio Capture, constructing the media with Create Media, and adding it to the room with Add Media to Room.

Please verify these aspects of your setup, and also consider the overall flow of how the identifier is propagated in the game session, as these are the factors that could affect 3D Audio functionality with respect to attenuation.
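To make point 4 concrete, here is a minimal C++ sketch of such a map in a custom Game Instance (the class name UMyGameInstance and the property name PlayersByGuid are illustrative assumptions for this example, not ODIN SDK types):

#pragma once

#include "CoreMinimal.h"
#include "Engine/GameInstance.h"
#include "GameFramework/Character.h"
#include "MyGameInstance.generated.h"

UCLASS()
class UMyGameInstance : public UGameInstance
{
    GENERATED_BODY()

public:
    // Filled when a player character spawns and its GUID has been replicated;
    // looked up later to route a remote peer's Odin media to the correct character.
    UPROPERTY(BlueprintReadWrite, Category = "Odin")
    TMap<FGuid, ACharacter*> PlayersByGuid;
};

The same map can of course be kept purely in Blueprints, as the documentation describes; the C++ version is only meant to show the key and value types involved.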

How to prevent audio feedback in an ODIN implementation?

To prevent audio feedback in an ODIN implementation, you can use the following settings:

  1. Voice Activity Detection: This system captures a few milliseconds of audio data and analyzes it to determine if the user is speaking. An advanced AI model is used to achieve this, and it typically gives a probability between 0.0 (no voice detected) and 1.0 (voice detected). You should have this feature enabled to prevent transmission of unwanted noise or silence.

  2. Noise Suppression: When enabled, the noise suppressor will remove distracting background noise from the input audio signal. It is possible to control the aggressiveness of the suppression. Increasing the level of suppression will reduce the noise level but at the risk of higher speech distortion.

For additional strategies on configuring audio settings to prevent feedback, consider referring to the ODIN documentation for more tips and best practices.

What are attack and release probabilities?

“Attack Probability” and “Release Probability” are settings related to Voice Activity Detection (VAD) in ODIN, which help determine when the system should start and stop transmitting voice.

  • Attack Probability: The voice probability value at which the VAD should engage. When the detected probability rises to or above this threshold, the system concludes that the audio signal contains speech and starts transmitting.

  • Release Probability: The voice probability value at which the VAD should disengage. When the detected probability drops to or below this threshold, the system concludes that the signal no longer contains speech and stops transmitting. The release value is typically set lower than the attack value so the gate does not rapidly switch on and off (hysteresis).

For example, good values to start with might be 0.9 for the attack probability, indicating that if the AI is 90% sure that it’s detecting a voice, it will engage and begin transmission. For the release probability, a value such as 0.8 might be suitable, indicating that when the AI’s certainty drops to 80% or below, the VAD will disengage and stop transmission.
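As a purely conceptual illustration (this is not ODIN SDK code), the attack/release behaviour amounts to a hysteresis on the voice probability reported by the VAD:

// Returns whether the VAD-style gate should transmit for the current audio frame.
// The gate only opens when the probability reaches the attack threshold and only
// closes again once the probability drops to or below the release threshold.
bool ShouldTransmit(bool bCurrentlyTransmitting, float VoiceProbability)
{
    const float AttackProbability  = 0.9f; // engage at >= 90% confidence that this is speech
    const float ReleaseProbability = 0.8f; // disengage once confidence drops to 80% or below

    if (!bCurrentlyTransmitting)
    {
        return VoiceProbability >= AttackProbability;
    }
    return VoiceProbability > ReleaseProbability;
}

Because the release threshold sits below the attack threshold, short dips in confidence do not immediately cut off an ongoing transmission.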

What about attack and release loudness?

Attack Loudness and Release Loudness are both related to the Volume Gate feature:

  • Attack Loudness (dBFS): This value specifies the root mean square power (in decibels relative to full scale, or dBFS) at which the volume gate should engage and start transmitting the voice. This means that the microphone is allowed to transmit when the loudness of the audio signal reaches this threshold.

  • Release Loudness (dBFS): This value specifies the root mean square power (in dBFS) at which the volume gate should disengage and stop transmitting the voice. This happens when the loudness of the audio signal drops below this threshold.

A good starting point for these settings could be -40 dBFS for release and -30 dBFS for attack. These thresholds determine when the user’s microphone should be enabled based on the volume of their voice, thereby preventing transmissions that are too quiet or might be background noise.

In short, the volume gate works as a hysteresis on loudness: the microphone is enabled once the signal reaches the attack loudness and disabled again once it drops below the release loudness, so transmissions that are too quiet or consist only of background noise are suppressed. The recommended starting values of -30 dBFS (attack) and -40 dBFS (release) can be adjusted to your environment and microphone sensitivity so that only the intended voice is captured and transmitted.

Why -30 dBFS?

The value of -30 dBFS as a starting point for the attack loudness threshold is chosen based on general knowledge about audio levels and the typical loudness of human speech. dBFS stands for decibels relative to full scale, which is a unit of measurement for audio levels in digital systems, where 0 dBFS represents the maximum possible digital level.

When setting an attack loudness threshold, the goal is to find a balance where the threshold is low enough to pick up human speech but high enough to ignore most background noise. At -30 dBFS, the threshold is set to a level that should capture voice which is loud enough to be considered intentional speech, rather than accidental sounds or distant background noise.

By setting the threshold at -30 dBFS for attack loudness, you are instructing the voice communication system to only transmit audio when the detected voice signal is relatively clear and strong, minimizing the chance of transmitting unintended sounds.

The specific value can be adjusted further based on user experience, the environment in which the communication system is used, and the sensitivity of the microphones being used. It’s a starting point suggested based on typical scenarios, but each implementation might require tweaking these values for optimal performance.
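For intuition about what these dBFS numbers mean, here is a small, self-contained sketch (not ODIN SDK code) that computes the RMS level of a buffer of normalized samples and converts it to dBFS, the value that is compared against the attack and release thresholds:

#include <cmath>
#include <cstdint>

// Computes the RMS level of a buffer of samples in [-1.0, 1.0] and converts it to
// dBFS, where 0 dBFS is a full-scale signal and quieter signals are negative.
float ComputeRmsDbfs(const float* Samples, int32_t NumSamples)
{
    double SumOfSquares = 0.0;
    for (int32_t i = 0; i < NumSamples; ++i)
    {
        SumOfSquares += static_cast<double>(Samples[i]) * Samples[i];
    }
    const double Rms = std::sqrt(SumOfSquares / NumSamples);

    // Add a tiny offset so a silent buffer does not produce log10(0).
    return 20.0f * static_cast<float>(std::log10(Rms + 1e-12));
}

A level of -30 dBFS corresponds to an RMS of roughly 3% of full scale (10^(-30/20) ≈ 0.032), which deliberate speech at typical microphone gain usually exceeds while most distant background noise does not.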

How to detect if any player is speaking?

To detect if any player is speaking, you can use the Voice Activity Detection (VAD) system provided by ODIN. VAD captures a few milliseconds of audio data and analyzes it using an advanced AI model to determine the presence of speech. It gives a probability between 0.0 and 1.0, where 0.0 indicates no voice detected and 1.0 indicates absolute certainty in voice detection.

To make use of VAD in ODIN, you can enable Voice Activity Detection and define the probability required to start (attack probability) and stop (release probability) transmitting. You can set these probabilities to values where you might consider the AI to be sufficiently certain that it has detected voice - for instance, 0.9 for attack probability (90% sure it’s a voice) and 0.8 for release probability.

This system works well in most scenarios, but may require additional filtering in environments with a lot of background noise or multiple people talking near the player. In such cases, a Volume Gate filter can be applied, which uses loudness thresholds to enable or disable the microphone based on the volume of the voice signal.

By implementing these settings, you can accurately detect when players in the game are speaking and manage voice communication accordingly.

Is there an event I can listen to in order to detect when a remote client speaks?

To detect when any remote client starts speaking, you should listen for the On Media Added event. This event is triggered whenever a user connected to the room activates their microphone. After joining a room, you will receive the On Media Added events for all active users in that room.

This allows you to know when a remote client has added media, such as starting an audio stream from their microphone, indicating that they might start speaking. It is essential to have your event handling set up before joining a room so that you can capture these events as they occur.

For more information on handling events related to media and voice activity in ODIN, you can consult the Unreal Engine SDK documentation provided by 4Players ODIN.

What is best practice to handle mute and un-mute audio capture?

The best practice to handle mute and un-mute audio capture in an ODIN implementation with Unreal Engine is to utilize the Audio Capture object. By starting and stopping the audio capture using this object, you can effectively manage muting and un-muting the microphone.

To implement mute functionality, you can stop the audio capture by calling a function or event that interrupts the audio stream from the user’s microphone to the ODIN servers. For un-muting, you would start the audio capture again, re-enabling the microphone and allowing the user’s voice to be captured and transmitted to other users in the room.

This can be set up to support different types of voice communication, such as push-to-talk or toggle mute, where the user controls when their microphone is active. Note that muting this way is not just about ceasing audio transmission: by actually stopping the audio capture you ensure that no audio data is captured or leaves the user’s device at all.

Here is a simplified sequence diagram, borrowed from the documentation, that illustrates stopping a media stream:

sequenceDiagram
  participant Local
  participant Server
  participant Remote

  Local ->>+ Server: StopMedia

In this case, Local represents the user’s client, while Server represents the ODIN server, and StopMedia is the action of disabling audio capture, thus muting the user. You would have a corresponding StartMedia action to unmute and re-enable audio capture.

Being able to control this capture is powerful for managing voice chat and preventing unintentional broadcasting of background noise, ensuring privacy and a better user experience for all participants in the voice chat.
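Assuming, as in the push-to-talk example above, that the Audio Capture object from the setup is stored in a UAudioCapture* member called AudioCapture (the class, member, and function names are illustrative for this sketch, not ODIN SDK API), a toggle-mute handler could look like this:

#include "MyPlayerCharacter.h" // hypothetical header declaring AMyPlayerCharacter, bMuted and AudioCapture
#include "AudioCapture.h"      // UAudioCapture

// Flips the mute state and starts or stops the microphone capture accordingly.
// While the capture is stopped, no audio data is captured or transmitted at all.
void AMyPlayerCharacter::ToggleMute()
{
    bMuted = !bMuted;

    if (!AudioCapture)
    {
        return;
    }

    if (bMuted)
    {
        AudioCapture->StopCapturingAudio();
    }
    else
    {
        AudioCapture->StartCapturingAudio();
    }
}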

How do I call StartMedia?

Sorry, I don’t know how to help with that. You can visit the ODIN documentation for more info.

What is OdinMediaSoundGenerator and how i can use that?

The OdinMediaSoundGenerator most likely corresponds to the Odin Synth Component in the ODIN Unreal Engine SDK. This component generates audio output from incoming media streams in your ODIN application.

To use the Odin Synth Component, you can add it to your player asset (preferably close to where the player’s head would be in the game) so that it can play incoming audio at the player’s location. You can add this component to your asset at design time or create and attach it to your player actor at runtime using the Add Odin Synth Component node in your blueprint.

Here is the basic usage of the Odin Synth Component:

  1. Add the Odin Synth Component to your player asset or use the Add Odin Synth Component node to create and attach it at runtime.
  2. Get the component in a blueprint using the Get Component By Class node, specifying Odin Synth Component.
  3. Assign media to the Odin Synth Component. This component will take care of generating the audio output from the media stream.

The Odin Synth Component then handles the playback of audio streams for each player, allowing for spatialized audio and other audio effects based on the game’s requirements.
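A minimal C++ sketch of steps 1 and 2, assuming the Unreal SDK class is named UOdinSynthComponent (check the plugin headers for the exact class name and for the media-assignment function; PlayerActor is an illustrative variable for the actor representing the remote player):

UOdinSynthComponent* Synth = PlayerActor->FindComponentByClass<UOdinSynthComponent>();
if (!Synth)
{
    // Not added at design time, so create and attach the component at runtime
    // (the C++ counterpart of the "Add Odin Synth Component" node).
    Synth = NewObject<UOdinSynthComponent>(PlayerActor);
    Synth->RegisterComponent();
    Synth->AttachToComponent(PlayerActor->GetRootComponent(),
                             FAttachmentTransformRules::KeepRelativeTransform);
}
// Step 3: assign the incoming playback media to this component; the exact
// assignment call is listed in the ODIN Unreal SDK reference.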

For a detailed example and usage documentation, refer to the ODIN documentation.

How do I know whether a player is speaking or not?

Sorry, I don’t know how to help with that. You can visit the ODIN documentation for more info.

In what ways can I process voice audio?

In the Unreal Engine SDK, you can process voice audio through a variety of features provided by ODIN. Here are some of the ways you can process voice audio:

  1. Voice Activity Detection: This AI-powered system analyzes audio data to determine if the user is speaking, and you can set probabilities for when the AI should start and stop transmitting.

  2. Volume Gate: This feature allows you to set a volume threshold that must be met before the microphone is enabled, enabling you to prevent quiet background noises from being transmitted.

  3. Attack Loudness (dBFS) & Release Loudness (dBFS): These settings define the root mean square power levels when the volume gate should engage or disengage.

  4. High Pass Filter: When enabled, it removes low-frequency content from the input audio signal.

  5. Pre Amplifier: Boosts the signal of sensitive microphones, making weak audio signals louder.

  6. Noise Suppression: Removes distracting background noise from the input audio signal with adjustable suppression levels.

  7. Transient Suppression: Detects and attenuates keyboard clicks.

  8. Gain Controller: Automatically brings the input signal to an appropriate range, amplifying low-volume signals, and limiting high-volume ones.

These features are part of the ODIN APM (Audio Processing Module) settings, which can be customized to suit your voice processing needs. To use these settings, you construct an APM settings object in Unreal Engine using the ODIN SDK functions.

Can I integrate unreal engine metasounds?

Yes, you can integrate Unreal Engine MetaSounds with ODIN since ODIN is agnostic of the audio engine. You can use any audio engine you like, including the native Unreal Audio Engine or third-party engines. To accommodate MetaSounds or any other audio system, you simply need to adjust the project settings and the attenuation settings of the ODIN Synth Component accordingly.

Can I use metasound DSP to affect real time voice output from odin?

Sorry, I don’t know how to help with that. You can visit the ODIN documentation for more info.

How to find out if a user is already in a room or not?

ODIN does not require any bookkeeping to determine whether a user is already in a room: a room is created automatically when the first user connects to it and removed again when the last user leaves. When a user joins a room, ODIN triggers events to notify the other peers in that room.

In Unreal Engine, when a user joins a room, the following events are triggered:

  • Local: On Room Joined event
  • Remote: On Peer Joined event

If a user joins a room, you can handle the On Room Joined event to determine if the user has successfully joined the room. Similarly, the On Peer Joined event will trigger for all other peers already in the room, informing them that a new peer has joined. By handling these events, you can update the UI or manage the list of peers to reflect who is currently in the room.

For more details on joining a room and event handling in the ODIN Unreal Engine SDK, please refer to the Join Room function documentation.

How do I know if I successfully left a room?

In the ODIN Unreal Engine SDK, other peers in the room are notified through the On Peer Left event whenever a peer leaves. Note that this event is not called for the local player, i.e. the one who actually left the room. To know whether you (the local player) have successfully left a room, handle the leaving process on the client side and assume the operation succeeded once the call has been made, since there is no dedicated event informing the local player that they have left a room.

For handling events when other peers leave the room, you can refer to the On Peer Left event in the ODIN documentation.

How do I leave a room?

To leave a room in ODIN, you call the LeaveRoom method on the OdinHandler instance. Note that OdinHandler is part of the ODIN Unity SDK, so the following simplified example is written in C#:

class Radio {
    // Identifier of the room the player is currently connected to (null if none).
    private string _currentRadio = null;

    public void OnRadioChannelChanged(int channel) {
        // Leave the current room before switching channels.
        if (_currentRadio != null) {
            OdinHandler.Instance.LeaveRoom(_currentRadio);
        }
        // Remember the new room identifier after leaving the previous one.
        _currentRadio = "Radio_Channel_" + channel.ToString();
        // Code to join the new room would follow here
    }
}

In this example, when changing the radio channel, the player first leaves the current room by passing the room identifier to the LeaveRoom method. After leaving the room, you can then connect to another room or perform other actions as needed.

In Unreal Engine, the same concept applies, but leaving a room is exposed through the Unreal SDK’s functions and Blueprint nodes rather than through OdinHandler. Check the ODIN Unreal Engine SDK documentation for the specific Blueprint node used to leave a room.

How do I leave a room with Odin Unreal SDK?

Sorry, I don’t know how to help with that. You can visit the ODIN documentation for more info.

Hello, I followed this tutorial (https://youtu.be/MfZsbYhtUlU), but spatial audio does not seem to work: even when I go far away, I can clearly hear my voice. I added attenuation too, but it still doesn’t seem to work.

When working with spatial audio and observing issues such as missing attenuation (volume not diminishing with distance), it is essential to ensure that the audio setup within the Unreal Engine environment is configured correctly. If you have followed a tutorial and spatial audio does not work as expected, here are some general steps you can verify in your project:

  1. Check Attenuation Settings: Make sure the attenuation settings are correctly applied to the audio source. The attenuation should define how sound diminishes over distance.

  2. Verify Spatialization: Ensure that spatialization is enabled for the audio component. Spatialization is what makes the audio sound like it’s coming from a specific location in 3D space.

  3. Correct Audio Component Configuration: Double-check that the audio component is correctly configured to use spatial audio features, including the correct audio asset that supports spatialized playback.

  4. Listener Position and Orientation: Confirm that the listener’s position and orientation in the game (usually the player’s camera or character) are being updated correctly. The perceived audio is relative to the listener’s position.

  5. Update to Latest SDK: Sometimes, updating to the latest version of the SDK can resolve unknown issues that may have been fixed in a newer release.

  6. Follow Official Documentation: While tutorials are helpful, always cross-reference with the official ODIN documentation to ensure that all steps are executed according to the latest guidelines.

  7. Debugging: Use diagnostic tools or debug output to check if the spatial parameters are changing as expected when the distance between the audio source and the listener changes.

  8. Engine Integration: Ensure that ODIN is correctly integrated with Unreal Engine’s audio system if there are unique requirements or settings to be made for spatial audio to work.

If after checking all these points the issue persists, I would recommend consulting the official documentation for specific information on troubleshooting spatial audio issues, as well as any community or support channels that may provide additional guidance.
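As a quick sanity check for points 1 and 2 above, and assuming the Odin Synth Component derives from Unreal’s USynthComponent (the actor parameter and function name are illustrative), you can log whether an attenuation asset is assigned and spatialization is allowed:

#include "Components/SynthComponent.h"
#include "GameFramework/Actor.h"

// Logs the spatialization-relevant state of the first synth component on the given actor.
void LogSynthSpatializationState(AActor* PlayerActor)
{
    if (USynthComponent* Synth = PlayerActor->FindComponentByClass<USynthComponent>())
    {
        UE_LOG(LogTemp, Log, TEXT("Attenuation asset assigned: %s, spatialization allowed: %s"),
               Synth->AttenuationSettings ? TEXT("yes") : TEXT("no"),
               Synth->bAllowSpatialization ? TEXT("yes") : TEXT("no"));
    }
}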

For detailed, step-by-step guidance on configuring spatial audio within your project, please refer to the official ODIN Unreal SDK documentation.