DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Terminal Disclaimer
The terminal disclaimer filed on 11/24/25 disclaiming the terminal portion of any patent granted on this application which would extend beyond the expiration date of the full statutory term has been reviewed and is accepted. The terminal disclaimer has been recorded.
Response to Amendment
The applicant’s amended claim(s), which focus on “user’s position and accompanying both forward and backward rule as field processor process the sound field in relation to the spatial domain having a forward transform rule for transform the sound field representation from an audio signal domain into the transform domain and a backward transform rule for transforming a transformed sound field representation from the spatial transform domain to the audio signal domain”, have been further considered and are rejected over a new ground of rejection.
Allowable Subject Matter
Claim(s) 5-6 are objected to as being dependent upon a rejected base claim, but would be allowable if rewritten in independent form including all of the limitations of the base claim and any intervening claims.
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claim(s) 1-4, 7-10, 14, 18-19, 21, 24-28, 30, 33-34 is/are rejected under 35 U.S.C. 103 as being unpatentable over Benattar (US 11,330,388 B2) and You (US 9,794,688 B2).
Regarding claim 1, Benattar discloses an apparatus for processing a sound field representation related to a defined reference point or a defined listening orientation for the sound field representation, comprising: a sound field processor for processing the sound field representation using a deviation of a target listening position from the defined reference point or of a target listening orientation from the defined listening orientation, to acquire a processed sound field description, wherein the processed sound field description, when rendered, provides an impression of the sound field representation at the target listening position being different from the defined reference point or for the target listening orientation being different from the defined listening orientation (fig.3-4 (306); col.10 line 55-60; col.11 line 15-40/sound processor to render impression of sound at target listening position or orientation), and wherein the sound field processor is configured to process the sound field representation so that the deviation or the spatial filter is applied to the sound field representation in relation to a spatial transform domain having associated therewith a forward transform rule for transforming the sound field representation from an audio signal domain into the transform domain (fig.3 (306); col.8 line 50-67; col.10 line 50-61; col.14 line 30-55/the sound representation relates to the listening orientation and the Fourier forward rule).
However, Benattar never mentions both a forward and a backward rule, i.e., a sound field processor that processes the sound field in relation to the spatial domain having a forward transform rule for transforming the sound field representation from an audio signal domain into the transform domain and a backward transform rule for transforming a transformed sound field representation from the spatial transform domain to the audio signal domain.
You, however, discloses a processor that processes the sound field in relation to the spatial domain, having a forward transform rule for transforming the sound field representation from an audio signal domain into the transform domain and a backward transform rule for transforming a transformed sound field representation from the spatial transform domain to the audio signal domain (You-fig.1 (12-36); col.4 line 24-27 & col.9 line 15-30). Thus, one of ordinary skill in the art could have modified Benattar by adding this aspect, namely a processor that processes the sound field in relation to the spatial domain with a forward transform rule for transforming the sound field representation from an audio signal domain into the transform domain and a backward transform rule for transforming a transformed sound field representation from the spatial transform domain to the audio signal domain, so as to convert the composite signal back to the time domain for playback.
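For illustration only (not part of the record of either reference), the forward/backward transform pair at issue can be sketched numerically: a forward rule maps the audio-signal-domain representation into a transform domain, and a matching backward rule converts it back for playback. The unitary DFT-style basis below is an assumption chosen purely to make the round trip concrete.

```python
import numpy as np

# Illustrative forward/backward transform pair (assumption: a simple
# unitary basis stands in for the spatial transform of the references).
rng = np.random.default_rng(0)
n = 4  # number of audio channels in the sound field representation

# Forward transform rule: audio signal domain -> spatial transform domain.
# An orthonormal DFT matrix serves here as a stand-in basis.
F = np.fft.fft(np.eye(n)) / np.sqrt(n)

# Backward transform rule: spatial transform domain -> audio signal domain.
B = F.conj().T  # inverse of a unitary forward rule

x = rng.standard_normal(n)   # sound field representation (one frame)
y = F @ x                    # transformed sound field representation
x_back = np.real(B @ y)      # converted back for playback

# The round trip recovers the original audio-domain signal.
assert np.allclose(x, x_back)
```

The point of the sketch is only that the backward rule undoes the forward rule, i.e., the composite signal is returned to its original domain for playback.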
The apparatus of claim 1, further comprising a detector for detecting the deviation of the target listening position from the defined reference point or for detecting the deviation of the target listening orientation from the defined listening orientation or for detecting the target listening position and for determining the deviation of the target listening position from the defined reference point or for detecting the target listening orientation and for determining the deviation of the target listening orientation from the defined listening orientation (Ben- col.10 line 55-60; col.11 line 15-40).
The apparatus of claim 1, wherein the sound field representation comprises a plurality of audio signals in an audio signal domain different from the spatial transform domain, wherein the sound field processor is configured to generate the processed sound field description in the audio signal domain different from the spatial transform domain (Ben-col.7 line 48-55/the spatial in three variable domain and other audio frequency domain).
(Currently Amended) The apparatus according to claim 1, wherein the sound field processor is configured to process the sound field representation using the forward transform rule for the spatial transform, the forward transform rule being related to a set of virtual speakers at a set of the virtual positions of the virtual speakers, and using the backward transform rule for the spatial transform using a set of modified virtual positions of the virtual speakers derived from the set of the virtual positions of the virtual speakers using the deviation, or wherein the sound field processor is configured to process the sound field representation using the forward transform rule for the spatial transform, the forward transform rule being related to a set of virtual speakers at a set of virtual positions of the virtual speakers, using [[the]] a spatial filter within the spatial transform domain; and using the backward transform rule for the spatial transform using a set of modified virtual positions of the virtual speakers derived from the set of virtual positions of the virtual speakers using the deviation (You-fig.1 (12-36); col.4 line 24-27 & col.9 line 15-30).
7. (Currently Amended) The apparatus according to claim 1, wherein the sound field processor is configured to forward transform the sound field representation from the audio signal domain into a spatial transform domain using the forward transform rule to acquire virtual speaker signals for virtual speakers at pre-defined virtual positions of virtual speakers related to the defined reference point or the defined listening orientation, and to backward transform the virtual speaker signals into the audio signal domain using the backward transform rule based on the modified virtual positions of the virtual speakers related to the target listening position (You-fig.1 (12-36); col.4 line 24-27 & col.9 line 15-30).
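As a hedged sketch of the mapping just cited (assuming, purely for illustration, a 2-D first-order sound field and a uniform circular layout of virtual speakers, neither of which is taken from Benattar or You), applying the deviation amounts to evaluating the backward transform at modified virtual speaker positions:

```python
import numpy as np

# Illustrative assumptions: 2-D first-order field (w, x, y) and a uniform
# circular virtual speaker layout; not the actual processing of the references.
J = 8
angles = 2 * np.pi * np.arange(J) / J   # pre-defined virtual positions

def forward(w, x, y, theta):
    # Forward rule: decompose the first-order sound field into
    # virtual speaker signals at directions theta.
    return w + x * np.cos(theta) + y * np.sin(theta)

def backward(s, theta):
    # Backward rule: re-encode virtual speaker signals into the
    # audio signal domain, evaluated at (possibly modified) theta.
    w = s.mean()
    x = 2 * (s * np.cos(theta)).mean()
    y = 2 * (s * np.sin(theta)).mean()
    return w, x, y

dev = np.pi / 2                          # target orientation deviation
s = forward(1.0, 1.0, 0.0, angles)       # source toward 0 rad
# Backward transform at modified virtual positions applies the deviation.
w2, x2, y2 = backward(s, angles - dev)
```

Rotating the evaluation angles by the deviation leaves the omnidirectional component untouched and re-aims the directional components, which is the claimed effect of backward-transforming at modified positions.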
8. (Currently Amended) The apparatus according to claim 1, wherein the sound field processor is configured to calculate the forward transform rule and [[the]] a first spatial filter and to combine the forward transform rule and the spatial filter to acquire a partial transformation definition, to apply the partial transformation definition to the sound field representation to acquire filtered virtual speaker signals, and to backward transform the filtered virtual speaker signals using the backward transform rule based on modified virtual positions of virtual speakers related to the target listening position or the target listening orientation or based on virtual positions of the virtual speakers related to the defined reference point or defined listening orientation, or wherein the sound field processor is configured to calculate [[the]] a second spatial filter and the backward transform rule based on the modified virtual positions of the virtual speakers related to the target listening position or the target listening orientation or the virtual positions of the virtual speakers related to the defined reference point or listening orientation, to combine the spatial filter and the backward transform rule to acquire a partial transformation definition, to forward transform the sound field representation from the audio signal domain into the spatial transform domain to acquire virtual speaker signals for virtual speakers at predefined virtual positions of the virtual speakers, and to apply the partial transformation definition to the virtual speaker signals (You-fig.1 (12-36); col.4 line 24-27 & col.9 line 15-30).
9. The apparatus according to claim 1, wherein at least one of the forward transform rule, the backward transform rule, a transformation definition or a partial transformation definition or a pre-calculated transformation definition comprises a matrix, or wherein the audio signal domain is a time domain or a time-frequency domain (You-fig.1 (12-36); col.4 line 24-27 & col.9 line 15-30).
10. The apparatus according to claim 1, wherein the sound field representation comprises a plurality of Ambisonics signals, and wherein the sound field processor is configured to calculate the forward transform rule using a plain wave decomposition and virtual positions of virtual speakers related to the defined listening position or the defined listening orientation, or wherein the sound field representation comprises a plurality of loudspeaker channels for a defined loudspeaker setup comprising a sweet spot, wherein the sweet spot represents the defined reference position, and wherein the sound field processor is configured to calculate the forward transform rule using an upmix rule or a downmix rule of the loudspeaker channels into a virtual loudspeaker setup comprising virtual speakers at virtual positions related to the sweet spot, or wherein the sound field representation comprises a plurality of real or virtual microphone signals related to an array center as the defined reference position, and wherein the sound field processor is configured to calculate the forward transform rule as beamforming weights representing a beamforming operation for each virtual position of a virtual speaker of the virtual speakers on the plurality of microphone signals, or wherein the sound field representation comprises an audio object representation comprising a plurality of audio objects comprising associated position information, and wherein the sound field processor is configured to calculate the forward transform rule representing a panning operation for panning the audio objects to the virtual speakers at the virtual speaker positions related to the defined reference position using the position information for the audio objects (You-fig.1 (12-36); col.4 line 24-27 & col.7 line 55-67; col.9 line 15-30/the forward transform representing panning for the audio object).
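The panning operation recited for the audio object representation can be illustrated with a generic constant-power pan between two adjacent virtual speakers; this law is an assumption for illustration, not the specific rule of either reference.

```python
import numpy as np

# Constant-power panning of an audio object between two adjacent virtual
# speakers (illustrative only; the references' exact panning law is not
# reproduced here).
def pan_gains(obj_angle, spk_left, spk_right):
    # Map the object position to a pan parameter p in [0, 1].
    p = (obj_angle - spk_left) / (spk_right - spk_left)
    p = np.clip(p, 0.0, 1.0)
    # Constant-power law keeps g_left**2 + g_right**2 == 1.
    return np.cos(p * np.pi / 2), np.sin(p * np.pi / 2)

gl, gr = pan_gains(0.25, 0.0, 1.0)
assert np.isclose(gl**2 + gr**2, 1.0)
```

An object a quarter of the way between the speakers receives the larger gain on the nearer (left) speaker while total power is preserved.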
14. The apparatus according to claim 1, wherein the processed sound field description comprises a plurality of Ambisonics signals, and wherein the sound field processor is configured to calculate the backwards transform rule using a harmonic decomposition representing a weighted sum over all virtual speaker signals evaluated at the modified speaker positions or related to the target orientation, or wherein the processed sound field description comprises a plurality of loudspeaker channels for a defined output loudspeaker setup, wherein the sound field processor is configured to calculate the backwards transform rule using a loudspeaker format conversion matrix derived from the modified virtual speaker positions or related to the target orientation using the position of the virtual loudspeakers in the defined output loudspeaker setup, or wherein the processed sound field description comprises a binaural output, wherein the sound field processor is configured to calculate the binaural output signals using head-related transfer functions associated with the modified virtual speaker positions or using a loudspeaker format conversion rule related to a defined intermediate output loudspeaker setup and head-related transfer functions related to the defined output loudspeaker setup (Ben-col.6 line 30-43 & 55-67).
18. The apparatus according to claim 1, wherein the sound field processor is configured to convert the sound field description into a virtual loudspeaker related representation associated with a first set of virtual loudspeaker positions, wherein the first set of virtual loudspeaker positions is associated with the defined reference point, transform the first set of virtual loudspeaker positions into a modified set of virtual loudspeaker positions, wherein the modified set of virtual loudspeaker positions is associated with the target listening position, and convert the virtual loudspeaker related representation into the processed sound field description associated with the modified set of virtual loudspeaker positions, wherein the sound field processor is configured to calculate the modified set of virtual loudspeaker positions using the detected deviation (Ben-col.10 line 25-60/the virtual is based on define listening orientation).
19. (Original) The apparatus according to claim 4, wherein the set of the virtual positions of the virtual speakers is associated with the defined listening orientation, and wherein the set of the modified virtual positions of the virtual speakers is associated with the target listening orientation, and wherein the target listening orientation is calculated from the detected deviation and the defined listening orientation (fig.3-4 (306); col.10 line 55-60; col.11 line 15-40/sound processor to render impression of sound for virtual speakers according to target listening position or orientation).
21. The apparatus according to claim 1, wherein the sound field processor comprises: a time-spectrum converter for converting the sound field representation into a time- frequency domain representation (Ben-col.7 line 1-20; col.7 line 5-15 & col.10 line 30-60).
24. A method of processing a sound field representation related to a defined reference point or a defined listening orientation for the sound field representation, comprising: detecting a deviation of a target listening position from the defined reference point or of a target listening orientation from the defined listening orientation; and processing the sound field representation using the deviation to acquire a processed sound field description, wherein the processed sound field description, when rendered, provides an impression of the sound field representation at the target listening position being different from the defined reference point or for the target listening orientation being different from the defined listening orientation (fig.3-4 (306); col.10 line 55-60; col.11 line 15-40), wherein the deviation is applied to the sound field representation in relation to a spatial transform domain having associated therewith a forward transform rule and a backward transform rule (You-fig.1 (12-36); col.4 line 24-27 & col.7 line 55-67; col.9 line 15-30/the forward transform representing panning for the audio object).
25. A non-transitory digital storage medium having a computer program stored thereon to perform the method of processing a sound field representation related to a defined reference point or a defined listening orientation for the sound field representation, comprising: detecting a deviation of a target listening position from the defined reference point or of a target listening orientation from the defined listening orientation; and processing the sound field representation using the deviation to acquire a processed sound field description, wherein the processed sound field description, when rendered, provides an impression of the sound field representation at the target listening position being different from the defined reference point or for the target listening orientation being different from the defined listening orientation (Ben-fig.3-4 (306); col.10 line 55-60; col.11 line 15-40), wherein the processed sound field description, when rendered, provides an impression of a spatially filtered sound field description, wherein the deviation or the spatial filter is applied to the sound field representation in relation to a spatial transform domain having associated therewith a forward transform rule and a backward transform rule, when said computer program is run by a computer (You-fig.1 (12-36); col.4 line 24-27 & col.7 line 55-67; col.9 line 15-30/the forward transform representing panning for the audio object).
Regarding claim 26, Benattar discloses an apparatus for processing a sound field representation related to a defined reference point or a defined listening orientation for the sound field representation, comprising: a sound field processor configured for processing the sound field representation using a spatial filter to acquire a processed sound field description, wherein the processed sound field description, when rendered, provides an impression of a spatially filtered sound field description, wherein the sound field processor is configured to process the sound field representation so that the spatial filter is applied to the sound field representation in relation to a spatial transform domain having associated therewith a forward transform rule configured for transforming the sound field representation from an audio signal domain into the spatial transform domain (col.10 line 55-60; col.11 line 15-40; col.14 line 45-55).
However, Benattar never mentions the filter providing the spatial transform domain having associated therewith a forward transform rule configured for transforming the sound field representation from an audio signal domain into the spatial transform domain and a backward transform rule configured for transforming a transformed sound field representation from the spatial transform domain into the audio signal domain.
You, however, discloses a spatial transform domain having associated therewith a forward transform rule configured for transforming the sound field representation from an audio signal domain into the spatial transform domain and a backward transform rule configured for transforming a transformed sound field representation from the spatial transform domain into the audio signal domain (You-fig.1 (12-36); col.4 line 24-27 & col.9 line 15-30). Thus, one of ordinary skill in the art could have modified the prior art by adding this aspect, namely a spatial transform domain having associated therewith a forward transform rule configured for transforming the sound field representation from an audio signal domain into the spatial transform domain and a backward transform rule configured for transforming a transformed sound field representation from the spatial transform domain into the audio signal domain, so as to convert the composite signal back to the time domain for playback.
Similarly, claim(s) 33-34, which in substance recite features similar to those of claim 26, have been analyzed and are rejected accordingly.
27. (New) The apparatus according to claim 26, wherein the sound field processor is configured to process the sound field representation using the forward transform rule for the spatial transform, the forward transform rule being related to a set of virtual speakers at a set of virtual positions of virtual speakers, using the spatial filter within the spatial transform domain, and using the backward transform rule for the spatial transform using the set of the virtual positions of the virtual speakers, or detecting a deviation of a target listening position from the defined reference point or of a target listening orientation from the defined listening orientation, wherein the sound field processor is configured to process the sound field representation using the forward transform rule for the spatial transform, the forward transform rule being related to a set of virtual speakers at a set of virtual positions of the virtual speakers, using a spatial filter within the spatial transform domain; and using the backward transform rule for the spatial transform using a set of modified virtual positions of the virtual speakers derived from the set of virtual positions of the virtual speakers using the deviation (You-fig.1 (12-36); col.4 line 24-27 & col.7 line 55-67; col.9 line 15-30/the forward transform representing panning for the audio object).
28. (New) The apparatus according to claim 26, to apply the spatial filter to the virtual speaker signals to acquire filtered virtual speaker signals, and to backward transform the filtered virtual speaker signals using the backward transform rule based on the modified virtual positions of the virtual speakers related to the target listening positions or the target listening orientation or the virtual positions of the virtual speakers related to the defined reference position or listening orientation (You-fig.1 (12-36); col.4 line 24-27 & col.7 line 55-67; col.9 line 15-30/the backward transform representing panning for the audio object).
30. (New) The apparatus according to claim 26, wherein the sound field processor is configured to calculate the spatial filter as a set of window coefficients depending on virtual positions of virtual speakers used in the forward transform rule and additionally depending on at least one of the defined reference position, the defined listening orientation, the target listening position, and the target listening orientation (col.10 line 55-60; col.11 line 15-40; col.14 line 45-55/the filter is based on orientation and position and virtual speakers).
Claim(s) 20 is/are rejected under 35 U.S.C. 103 as being unpatentable over Benattar (US 11,330,388 B2) and You (US 9,794,688 B2) and Satongar et al. (US 10,652,687 B2).
20. (Original) The apparatus according to claim 4. However, the art never mentions wherein the set of the virtual positions of the virtual speakers is associated with the defined listening position and the defined listening orientation, wherein the defined listening position corresponds to a first projection point and projection orientation of an associated video resulting in a first projection of the associated video on a display area representing a projection surface, and wherein the set of the modified virtual positions of the virtual speakers is associated with a second projection point and a second projection orientation of the associated video resulting in a second projection of the associated video on the display area corresponding to the projection surface.
However, Satongar et al. disclose the similar concept in which the virtual positions of the virtual speakers are associated with the defined listening position and the defined listening orientation, wherein the defined listening position corresponds to a first projection point and projection orientation of an associated video resulting in a first projection of the associated video on a display area representing a projection surface, and wherein the set of the modified virtual positions of the virtual speakers is associated with a second projection point and a second projection orientation of the associated video resulting in a second projection of the associated video on the display area corresponding to the projection surface (fig.4 (400); col.6 line 5-55). Thus, one of ordinary skill in the art could have modified the art by adding this aspect, in which the virtual positions of the virtual speakers are associated with the defined listening position and the defined listening orientation as set forth above, so as to maintain the virtual sound effect dependent on the user’s position.
Claim(s) 11-12 is/are rejected under 35 U.S.C. 103 as being unpatentable over Benattar (US 11,330,388 B2) and You (US 9,794,688 B2) and Visser et al. (US 8,965,546 B2).
11. The apparatus according to claim 1. However, the art never specifies wherein the sound field processor is configured for processing the sound field representation using the deviation, and wherein the sound field processor is configured for processing the sound field representation using a spatial filter to acquire a processed sound field description, so that the spatial filter is applied to the sound field representation in relation to the spatial transform domain.
However, it shall be observed that Visser et al. disclose a similar aspect in which the sound field processor is configured for processing the sound field representation using the deviation, and in which the sound field processor is configured for processing the sound field representation using a spatial filter to acquire a processed sound field description, so that the spatial filter is applied to the sound field representation in relation to the spatial transform domain (fig.13A; col.9 line 55-67; col.12 line 5-17). Thus, one of ordinary skill in the art could have modified the art by adding this noted aspect, so as to provide the spatialized sound according to the user’s direction.
12. (Currently Amended) The apparatus according to claim [[1]]11, wherein the sound field processor is configured to calculate the spatial filter as a set of window coefficients depending on virtual positions of virtual speakers used in the forward transform rule and additionally depending on at least one of the defined reference position, the defined listening orientation, the target listening position, and the target listening orientation (You-fig.1 (12-36); col.4 line 24-27 & col.9 line 15-30), or wherein the sound field processor is configured to calculate the spatial filter as a set of non-negative real valued gain values, so that a spatial sound is emphasized towards a look direction indicated by the target listening orientation, or wherein the sound field processor is configured to calculate the spatial filter as a spatial window.
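A minimal sketch of a spatial filter as a set of non-negative window coefficients over virtual speaker directions (the first-order cardioid-style window here is an illustrative assumption, not the claimed or cited filter):

```python
import numpy as np

# Illustrative spatial window: non-negative gains over virtual speaker
# directions, emphasizing a target look direction. The cardioid-style
# form is chosen purely as an example.
def spatial_window(speaker_angles, look_direction, alpha=0.5):
    # 0 <= alpha <= 1 blends omnidirectional and directional parts,
    # so every gain stays non-negative.
    return (1 - alpha) + alpha * np.cos(speaker_angles - look_direction)

angles = 2 * np.pi * np.arange(8) / 8
g = spatial_window(angles, look_direction=0.0)
assert np.all(g >= 0) and g.argmax() == 0   # maximum gain at look direction
```

The gains depend on the virtual speaker positions and on the look direction, i.e., on exactly the quantities the claim recites as inputs to the window coefficients.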
Claim(s) 13 is/are rejected under 35 U.S.C. 103 as being unpatentable over Benattar (US 11,330,388 B2) and You (US 9,794,688 B2) and Visser et al. (US 8,965,546 B2) and Lee et al. (US 10,939,039 B2).
13. The apparatus according to claim 1. However, the combination lacks wherein the sound field processor is configured to calculate the spatial filter as a common first-order spatial window directed towards a target look direction, or as a common first-order spatial window being attenuated or amplified according to a distance between the target listening position and a corresponding virtual loudspeaker position, or as a rectangular spatial window becoming narrower in case of a zooming-in operation or becoming broader in case of a zooming-out operation, or as a window that attenuates sound sources at a side when a corresponding audio object disappears from a zoomed video image.
But it shall be noted that Lee et al. disclose the similar concept in which the sound field processor is configured to calculate the spatial filter as a common first-order spatial window directed towards a target look direction, or as a common first-order spatial window being attenuated or amplified according to a distance between the target listening position and a corresponding virtual loudspeaker position, or as a rectangular spatial window becoming narrower in case of a zooming-in operation or becoming broader in case of a zooming-out operation, or as a window that attenuates sound sources at a side when a corresponding audio object disappears from a zoomed video image (fig.5-8; col.12 line 25-65). Thus, one of ordinary skill in the art could have modified the art by adding the noted sound field processor configured to calculate the spatial filter with a window that attenuates sound sources at a side when a corresponding audio object disappears from a zoomed video image, so as to enhance the sound quality related to the sound image.
Claim(s) 15 is/are rejected under 35 U.S.C. 103 as being unpatentable over Benattar (US 11,330,388 B2) and You (US 9,794,688 B2) and Park (US 7,428,310 B2).
15. The apparatus according to claim 1. However, the prior art never specifies wherein the apparatus comprises a memory comprising stored sets of pre-calculated coefficients associated with different predefined deviations, and wherein the sound field processor is configured to search, among the different predefined deviations, for the predefined deviation being closest to the detected deviation, to retrieve, from the memory, the pre-calculated set of coefficients associated with the closest predetermined deviation, and to forward the retrieved pre-calculated set of coefficients to the sound field processor.
But Park discloses a similar aspect including an apparatus comprising a memory comprising stored sets of pre-calculated coefficients associated with different predefined deviations, wherein the sound field processor is configured to search, among the different predefined deviations, for the predefined deviation being closest to the detected deviation, to retrieve, from the memory, the pre-calculated set of coefficients associated with the closest predetermined deviation, and to forward the retrieved pre-calculated set of coefficients to the sound field processor (fig.1 (10/60); fig.3; col.40 line 60 & col.5 line 40). Thus, one of ordinary skill in the art could have modified the apparatus by adding the noted memory and search as set forth above, so as to adjust the output device based on the particular grid which contains the deviation parameter.
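The stored-coefficient look-up described above can be sketched as a nearest-neighbor search over predefined deviations (all values and names below are made up for illustration):

```python
import numpy as np

# Illustrative memory of pre-calculated coefficient sets, one per
# predefined deviation (values and set names are hypothetical).
predefined_deviations = np.array([0.0, 15.0, 30.0, 45.0])   # degrees
coefficient_sets = {0.0: "set_a", 15.0: "set_b",
                    30.0: "set_c", 45.0: "set_d"}

def retrieve_coefficients(detected_deviation):
    # Search for the predefined deviation closest to the detected one,
    # then retrieve the associated pre-calculated set from the "memory".
    idx = np.abs(predefined_deviations - detected_deviation).argmin()
    closest = predefined_deviations[idx]
    return coefficient_sets[float(closest)]

assert retrieve_coefficients(17.2) == "set_b"
```

A detected deviation of 17.2 degrees is closest to the predefined 15-degree entry, so that entry's coefficient set is forwarded to the sound field processor.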
Claim(s) 16-17 is/are rejected under 35 U.S.C. 103 as being unpatentable over Benattar (US 11,330,388 B2) and You (US 9,794,688 B2) and Freeman (US 8,401,210 B2).
16. The apparatus according to claim 2. The prior art above lacks the limitation wherein the sound field representation is associated with a three dimensional video or spherical video and the defined reference point is a center of the three dimensional video or the spherical video, wherein the detector is configured to detect a user input indicating an actual viewing point being different from the center, the actual viewing point being identical to the target listening position, and wherein the detector is configured to derive the detected deviation from the user input, or wherein the detector is configured to detect a user input indicating an actual viewing orientation being different from the defined listening orientation directed to the center, the actual viewing orientation being identical to the target listening orientation, and wherein the detector is configured to derive the detected deviation from the user input.
However, the prior art herein discloses such a sound field representation associated with a three dimensional video or spherical video, with the defined reference point being a center of the three dimensional video or the spherical video, wherein the detector is configured to detect a user input indicating an actual viewing point being different from the center, the actual viewing point being identical to the target listening position, and wherein the detector is configured to derive the detected deviation from the user input, or wherein the detector is configured to detect a user input indicating an actual viewing orientation being different from the defined listening orientation directed to the center, the actual viewing orientation being identical to the target listening orientation, and wherein the detector is configured to derive the detected deviation from the user input (fig.1; 4; col.4 line 35-67). Thus, one of ordinary skill in the art could have modified the art by adding the noted sound field representation associated with a three-dimensional video or spherical video, the defined reference point being a center of the three-dimensional video or the spherical video, wherein the detector is configured to detect a user input indicating an actual viewing point different from the center, for dynamically changing the audio by tracking the position of the user.
17. The apparatus according to claim 1. However, the prior art lacks the specific limitation wherein the sound field representation is associated with a three dimensional video or spherical video and the defined reference point is a center of the three dimensional video or the spherical video, wherein the sound field processor is configured to process the sound field representation so that the processed sound field representation represents a standard or little planet projection or a transition between the standard or the little planet projection of at least one sound object comprised by the sound field description with respect to a display area for the three dimensional video or the spherical video, the display area being defined by the user input and a defined viewing direction.
However, the prior art herein discloses such a sound field representation associated with a three dimensional video or spherical video, with the defined reference point being a center of the three dimensional video or the spherical video, wherein the sound field processor is configured to process the sound field representation so that the processed sound field representation represents a standard or little planet projection or a transition between the standard or the little planet projection of at least one sound object comprised by the sound field description with respect to a display area for the three dimensional video or the spherical video, the display area being defined by the user input and a defined viewing direction (fig.1; 4; col.4 line 35-67). Thus, one of ordinary skill in the art could have modified the art by adding the noted sound field representation and processing, in which the processed sound field representation represents a standard or little planet projection, or a transition between them, with respect to a display area defined by the user input and a defined viewing direction, for dynamically changing the audio by tracking the position of the user as configured by the user.
Claim(s) 22 is/are rejected under 35 U.S.C. 103 as being unpatentable over Benattar (US 11,330,388 B2) and You (US 9,794,688 B2) and Morrell et al. (US 9,536,531 B2).
22. The apparatus according to claim 1. However, the prior art lacks the specific limitation wherein the sound field representation is an Ambisonics signal comprising an input order, wherein the processed sound field description is an Ambisonics signal comprising an output order, and wherein the sound field processor is configured to calculate the processed sound field description so that the output order is equal to the input order, or wherein the sound field processor is configured to acquire a processing matrix associated with the deviation and to apply the processing matrix to the sound field representation, and wherein the sound field representation comprises at least two sound field components, and wherein the processing matrix is a NxN matrix, where N is equal to two or is greater than two, or wherein the sound field processor is configured for processing the sound field representation so that a loudness of a sound object or a spatial region represented by the processed sound field description is greater than a loudness of the sound object or the spatial region represented by the sound field representation, when the target listening position is closer to the sound object or the spatial region than the defined reference point.
However, Morrell et al. disclose processing sound wherein the sound field representation is an Ambisonics signal comprising an input order, the processed sound field description is an Ambisonics signal comprising an output order, and the sound field processor is configured to calculate the processed sound field description so that the output order is equal to the input order (col.8 line 40-67). Thus, one of ordinary skill in the art could have modified the prior art by adding the noted Ambisonics processing, in which the output order is equal to the input order, so as to render surround sound independent of input locations.
Claim(s) 23 is/are rejected under 35 U.S.C. 103 as being unpatentable over Benattar (US 11,330,388 B2) and You (US 9,794,688 B2) and Magariyachi et al. (US 10,582,329 B2).
23. The apparatus according to claim 1. However, the prior art lacks the aspect wherein the sound field processor is configured to determine, for each virtual speaker, a separate direction with respect to the defined reference point; perform an inverse spherical harmonic decomposition with the sound field representation by evaluating spherical harmonic functions at the determined directions; determine modified directions from the virtual loudspeaker positions to the target listening position; and perform a spherical harmonic decomposition using the spherical harmonic functions evaluated at the modified virtual loudspeaker positions.
However, Magariyachi et al. disclose a similar inverse spherical harmonic decomposition related to a sound field representation, wherein the sound field processor is configured to determine, for each virtual speaker, a separate direction with respect to the defined reference point; perform an inverse spherical harmonic decomposition with the sound field representation by evaluating spherical harmonic functions at the determined directions; determine modified directions from the virtual loudspeaker positions to the target listening position; and perform a spherical harmonic decomposition using the spherical harmonic functions evaluated at the modified virtual loudspeaker positions (fig.15; 17; 20; col.2 line 40-55; col.6 line 30-65). Thus, one of ordinary skill in the art could have modified the prior art by adding the noted inverse spherical harmonic decomposition, in which the sound field processor determines a separate direction for each virtual speaker, so as to supply the drive signal that produces the output of the virtual speakers.
Claim(s) 29, 31 is/are rejected under 35 U.S.C. 103 as being unpatentable over Benattar (US 11,330,388 B2) and You (US 9,794,688 B2) and Zhang et al. (US 9,288,577 B2).
29. (New) The apparatus according to claim 26. However, the prior art does not specify wherein the spatial filter comprises a matrix.
However, Zhang et al. disclose a similar spatial filter comprising a matrix (fig.6-7 (130); col.6 line 20-35). Thus, one of ordinary skill in the art could have modified the art by adding the noted spatial filter comprising a matrix so as to determine the direction of arrival of sound and thus reduce unwanted signals.
31. (New) The apparatus according to claim 26. However, the prior art does not mention wherein the sound field processor is configured to calculate the spatial filter as a set of non-negative real valued gain values, so that a spatial sound is emphasized towards a look direction indicated by the target listening orientation, or wherein the sound field processor is configured to calculate the spatial filter as a spatial window.
However, Zhang et al. disclose a similar sound field processor configured to calculate the spatial filter as a set of non-negative real valued gain values, so that a spatial sound is emphasized (fig.6-7 (130); col.6 line 20-35). Thus, one of ordinary skill in the art could have modified the art by adding the noted processor configured to calculate the spatial filter as a set of non-negative real valued gain values, emphasizing a spatial sound in accordance with the user orientation, so as to determine the direction of arrival of sound and thus reduce unwanted signals.
Claim(s) 32 is/are rejected under 35 U.S.C. 103 as being unpatentable over Benattar (US 11,330,388 B2) and You (US 9,794,688 B2) and Franck (US 10,187,741 B1).
32. (New) The apparatus according to claim 1. However, the prior art does not mention wherein the sound field processor is configured to calculate the spatial filter as a common first-order spatial window directed towards a target look direction or as a common first-order spatial window being attenuated or amplified according to a distance between the target listening position and a corresponding virtual speaker position, or as a rectangular spatial window becoming narrower in case of a zooming-in operation or becoming broader in case of a zooming-out operation, or as a window that attenuates sound sources at a side when a corresponding audio object disappears from a zoomed video image.
However, Franck discloses a similar concept in which a processor is configured to calculate the spatial filter as a common first-order spatial window directed towards a target look direction, or as a common first-order spatial window being attenuated or amplified according to a distance between the target listening position and a corresponding virtual speaker position, or as a rectangular spatial window becoming narrower in case of a zooming-in operation or becoming broader in case of a zooming-out operation, or as a window that attenuates sound sources at a side when a corresponding audio object disappears from a zoomed video image (fig.2 & fig.5A; col.9 line 20-40). Thus, one of ordinary skill in the art could have modified the art by adding the noted processor configured to calculate the spatial filter as a common first-order spatial window directed towards a target look direction, so as to provide the desired sound effect and avoid noise and interference.
Conclusion
Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to DISLER PAUL whose telephone number is (571)270-1187. The examiner can normally be reached 9:00-6:00 M-F.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Chin, Vivian can be reached on (571) 272-7848. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/DISLER PAUL/Primary Examiner, Art Unit 2654