Title: Surround sound in a sensory immersive motion capture simulation environment
Abstract: A wearable computing device of a listener entity can receive 3D motion data of a virtual representation of the listener entity, 3D motion data of a virtual representation of a sound emitter entity, and audio data. The audio data may be associated with an audio event triggered by the sound emitter entity in a capture volume. The wearable computing device of the listener entity can process the 3D motion data of the virtual representation of the listener entity, the 3D motion data of the virtual representation of the sound emitter entity, and the audio data to generate multi-channel audio output data customized to the perspective of the virtual representation of the listener entity. The multi-channel audio output data may be associated with the audio event. The multi-channel audio output data can be communicated to the listener entity through a surround sound audio output device.
Publication Number: US8825187(B1)  Publication Date: 2014.09.02
Application Number: US201213421444  Filing Date: 2012.03.15
Applicant: Motion Reality, Inc.  Inventors: Hamrick Cameron Travis; Madsen Nels Howard; McLaughlin Thomas Michael
Classification: G06F17/00  Main Classification: G06F17/00
Agency: King & Spalding  Agent: King & Spalding
Main Claim: 1. A computer program product tangibly embodied in a non-transitory storage medium and comprising instructions that when executed by a processor perform a method, the method comprising: receiving, by a wearable computing device of a first entity, audio data that is generated responsive to a second entity triggering an audio event in a capture volume; receiving, by the wearable computing device of the first entity, three-dimensional (3D) motion data of a virtual representation of the first entity in a simulated virtual environment, wherein the 3D motion data of the virtual representation of the first entity is calculated based on 3D motion data of the first entity in the capture volume; receiving, by the wearable computing device of the first entity, 3D motion data of a virtual representation of the second entity in the simulated virtual environment; and processing the audio data, the 3D motion data of the virtual representation of the first entity, and the 3D motion data of the virtual representation of the second entity to generate multi-channel audio output data customized to a perspective of the virtual representation of the first entity in the simulated virtual environment, wherein the multi-channel audio output data is associated with the audio event, and wherein generating the multi-channel audio output data comprises: updating, at a sound library module of the wearable computing device, the 3D motion data of the virtual representation of the first entity in the simulated virtual environment; updating, at the sound library module, the 3D motion data of the virtual representation of the second entity in the simulated virtual environment; calculating, by the sound library module, a distance between the virtual representation of the first entity and the virtual representation of the second entity in the simulated virtual environment; and calculating, by the sound library module, a direction of the virtual representation of the second entity in reference to the virtual representation of the first entity in the simulated virtual environment, wherein the direction and the distance are calculated based on at least one of the updated 3D motion data of the virtual representation of the first entity and the updated 3D motion data of the virtual representation of the second entity.
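The claimed processing reduces to computing, from the two virtual representations' 3D motion data, the distance and relative direction of the emitter with respect to the listener, and deriving per-channel gains from them. The sketch below illustrates that geometry with a simple inverse-distance attenuation and constant-power stereo pan; the function name, the gain model, and the two-channel output are illustrative assumptions, not the patented sound library implementation.

```python
import math

def spatialize(listener_pos, listener_yaw, emitter_pos, ref_distance=1.0):
    """Distance, relative direction, and simple left/right gains for an
    emitter as heard by a listener. Illustrative sketch only: the gain
    model and names are assumptions, not the patented implementation."""
    dx = emitter_pos[0] - listener_pos[0]
    dy = emitter_pos[1] - listener_pos[1]
    dz = emitter_pos[2] - listener_pos[2]
    distance = math.sqrt(dx * dx + dy * dy + dz * dz)

    # Direction of the emitter relative to the listener's facing (yaw),
    # in the horizontal plane: 0 = straight ahead, +pi/2 = to the right.
    azimuth = math.atan2(dx, dy) - listener_yaw
    azimuth = math.atan2(math.sin(azimuth), math.cos(azimuth))  # wrap to [-pi, pi]

    # Inverse-distance attenuation, clamped inside the reference distance.
    attenuation = ref_distance / max(distance, ref_distance)

    # Constant-power pan as a minimal stand-in for multi-channel output.
    pan = math.sin(azimuth)  # -1 = full left, +1 = full right
    left = attenuation * math.cos((pan + 1) * math.pi / 4)
    right = attenuation * math.sin((pan + 1) * math.pi / 4)
    return distance, azimuth, (left, right)
```

Re-running this per motion-capture frame with the updated 3D motion data of both virtual representations yields gains that track the entities as they move, which is the per-listener customization the claim describes.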
Address: Marietta, GA, US