Spatial LibriSpeech: An Augmented Dataset for Spatial Audio Learning

We present Spatial LibriSpeech, a spatial audio dataset with over 570 hours of 19-channel audio, first-order ambisonics, and optional distractor noise. Spatial LibriSpeech is designed for machine learning model training, and it includes labels for source position, speaking direction, room acoustics and geometry. Spatial LibriSpeech is generated by augmenting LibriSpeech samples with >220k simulated acoustic conditions across >8k synthetic rooms. To demonstrate the utility of our dataset, we train models on four fundamental spatial audio tasks, resulting in a median absolute error of 6.60° on 3D source localization, 0.43m on distance, 90.66ms on T30, and 2.74dB on direct-to-reverberant ratio estimation. We show that the same models transfer to widely-used evaluation datasets, obtaining, for instance, a median absolute error of 12.43° on 3D source localization on TUT Sound Events 2018, and 157.32ms on T30 estimation on ACE Challenge.
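To make the reported localization metric concrete, here is a minimal sketch of how a median absolute angular error (as in the 6.60° figure above) is typically computed, assuming predicted and ground-truth source directions are given as 3D vectors. The function name and array shapes are illustrative assumptions, not part of the dataset's tooling:

```python
import numpy as np

def angular_error_deg(pred: np.ndarray, true: np.ndarray) -> np.ndarray:
    """Per-sample angle in degrees between rows of two (N, 3) direction arrays."""
    # Normalize to unit vectors so the dot product equals the cosine of the angle.
    pred = pred / np.linalg.norm(pred, axis=1, keepdims=True)
    true = true / np.linalg.norm(true, axis=1, keepdims=True)
    # Clip guards against values like 1.0000000002 from floating-point error.
    cos = np.clip(np.sum(pred * true, axis=1), -1.0, 1.0)
    return np.degrees(np.arccos(cos))

# Median absolute error over a (toy) batch of localization predictions.
pred = np.array([[1.0, 0.0, 0.0], [0.0, 1.0, 0.0]])
true = np.array([[1.0, 0.1, 0.0], [0.0, 1.0, 0.0]])
median_abs_err = np.median(angular_error_deg(pred, true))
```

The median (rather than the mean) makes the summary statistic robust to the occasional large localization failure, which is why it is the headline number for this kind of task.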


