DETAILED ACTION
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
This Office Action is in response to the claim amendment filed on December 9, 2025, in which claims 12-17 were amended.
By virtue of this communication, claims 1-17 are currently pending in this Office Action.
With respect to the non-statutory obviousness-type double patenting rejection of claims 1-17 as being unpatentable over conflicting claims 1-5 and 8-11 of U.S. Patent No. 11,967,329 B2, as set forth in the previous Office Action: the Terminal Disclaimer filed on December 9, 2025 has been approved, and the argument (see paragraph 4 of page 11 of the Remarks filed on December 9, 2025) has been fully considered and found persuasive. Therefore, that non-statutory double patenting rejection of claims 1-17, as set forth in the previous Office Action, has been withdrawn.
With respect to the rejection of claims 12-17 under 35 U.S.C. § 101, as set forth in the previous Office Action: the Applicant’s amendment and argument (see paragraph 2 of page 7 of the Remarks filed on December 9, 2025) have been fully considered, and the argument is persuasive. Therefore, the rejection of claims 12-17 under 35 U.S.C. § 101, as set forth in the previous Office Action, has been withdrawn.
The Office appreciates the explanation of the amendment and the analysis of the prior art; however, although the claims are interpreted in light of the specification, limitations from the specification are not read into the claims. See In re Van Geuns, 988 F.2d 1181, 26 USPQ2d 1057 (Fed. Cir. 1993) and MPEP 2145.
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102 of this title, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 1, 4, 8, 12, and 15 are rejected under 35 U.S.C. 103 as being unpatentable over Xiang et al. (US 20140355796 A1, hereinafter Xiang) in view of ISOPart3 (“Information Technology, High Efficiency Coding and Media Delivery in Heterogeneous Environments, Part 3: 3D Audio”, ISO/IEC JTC 1/SC 29, 23008-3:201x(E), October 12, 2016, IDS, p.797) and Ehara et al. (US 20190246236 A1, hereinafter Ehara).
Claim 1: Xiang teaches an audio decoding device (title and abstract, ln 1-15; an audio signal decoder in fig. 1) in an extended reality (XR) headset (bitstream 120 into the audio playback device in fig. 7; the audio playback device can be a headset having headphones, para 64, realizing generated virtual loudspeaker feeds, para 66, i.e., a VR headset as the XR headset), the audio decoding device comprising:
a memory (a buffer, para 129; the audio playback device 100 in fig. 7 can be a headset, para 64, and the extraction unit 104 is an audio decoder, para 62, so a storage storing the received bitstream 120 for decoding and extracting SHCs is inherent) configured to store at least a portion of a coded audio bitstream (a portion of the bitstream 120 is processed by the audio decoder to extract the audio signal representations, as discussed above); and one or more processors (one or more processors, para 113) configured to:
decode, based on the coded audio bitstream, a representation of a soundfield having multiple degrees of freedom (extracted or decoded SHC from the extraction unit 104 in fig. 7, para 62, representing a soundfield, para 23; the decoded SHC or HOA 122 in fig. 7 has at least 3D coordinates, para 29);
decode, based on the coded audio bitstream, metadata (metadata encoded at an audio encoder and containing locations of the objects, para 22, and the object locations [equation image omitted], para 28);
render, by a multiple-degree-of-freedom audio renderer (via the binaural rendering unit 102 in fig. 7, using BRIRs and conditioned BRIRs to render the HOA or SHC, para 67, the HOA or SHC representing a soundfield with at least 3D coordinates, para 29), selectively using reverb (via the BRIR conditioning unit 106, providing HRTF, early reflection, and residual room segments in fig. 7, para 66) and using a particular room reverb coefficient set (the BRIR output from filters 108 in fig. 7), speaker feeds from the soundfield (from the SHC or HOA coefficients 122 as input, to outputs 136A/136B in fig. 7, para 64), wherein the XR headset includes a plurality of speakers driven via the rendered speaker feeds (headphones in the headset, para 64).
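For illustration only (not Xiang's actual implementation; the split points, names, and data below are assumptions), the segmented-BRIR characterization above can be sketched as follows: if the HRTF, early-reflection, and residual-room segments are the time-aligned, zero-padded pieces of the full BRIR, the per-segment renders sum to the full render by linearity of convolution.

    import numpy as np

    def split_brir(brir, n_direct, n_early):
        """Split a BRIR into direct/HRTF, early-reflection, and residual-room
        segments, zero-padded so each segment keeps its time alignment."""
        n = len(brir)
        direct = np.concatenate([brir[:n_direct], np.zeros(n - n_direct)])
        early = np.concatenate([np.zeros(n_direct), brir[n_direct:n_early],
                                np.zeros(n - n_early)])
        late = np.concatenate([np.zeros(n_early), brir[n_early:]])
        return direct, early, late

    sig = np.random.default_rng(1).standard_normal(128)
    brir = np.random.default_rng(2).standard_normal(96)
    segments = split_brir(brir, n_direct=8, n_early=32)
    # By linearity of convolution, the segment renders sum to the full render.
    full = np.convolve(sig, brir)
    assert np.allclose(full, sum(np.convolve(sig, s) for s in segments))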
However, Xiang does not explicitly teach that the soundfield has multiple degrees of freedom, or that the disclosed one or more processors are also configured to perform the following (see the illustrative sketch after these limitations):
decode, based on the coded audio bitstream, a first syntax element indicating whether reverb is enabled or disabled;
responsive to the first syntax element indicating that reverb is enabled:
decode, based on the coded audio bitstream, a plurality of room reverb coefficient sets for a room, each of the room reverb coefficient sets corresponding to a different candidate position in the room; and
select, based on data generated by one or more sensors of the XR headset, a particular room reverb coefficient set from the plurality of room reverb coefficient sets that corresponds to a position of the XR headset.
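For clarity, a minimal sketch of the claimed decode-and-select flow recited above (illustrative only; it is not drawn from any cited reference, and the payload layout and nearest-neighbor selection are assumptions):

    import math

    def select_reverb_set(decoded, headset_pos):
        """Gate on the first syntax element, then pick the room reverb
        coefficient set whose candidate position is nearest the sensed
        headset position."""
        if not decoded["reverb_enabled"]:          # first syntax element
            return None                            # reverb disabled: render dry
        return min(decoded["reverb_sets"],         # nearest candidate position
                   key=lambda s: math.dist(s["pos"], headset_pos))["coeffs"]

    # Usage with an illustrative decoded payload and sensed headset position:
    coeffs = select_reverb_set(
        {"reverb_enabled": 1,
         "reverb_sets": [{"pos": (0, 0, 0), "coeffs": [1.0, 0.4]},   # center
                         {"pos": (3, 0, 0), "coeffs": [1.0, 0.7]}]}, # corner
        headset_pos=(2.4, 0.1, 0.0))
    assert coeffs == [1.0, 0.7]   # the headset is nearest the corner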
ISOPart3 teaches in an analogous field of endeavor by disclosing an audio decoding device (e.g., the decoder described in section 4.1, Decoder block diagram, starting at p.2) and a coded audio bitstream (the bitstream input to the MPEG-H 3D Audio Core Decoder carrying at least SAOC and HOA side info, i.e., decoded metadata, fig. 1, p.3), wherein the soundfield has multiple degrees of freedom (MPEG-H 3D audio core encoder in fig. C.1, p.651, and decoder in fig. 1, p.3; e.g., loudness compensation in MPEG-H 3DA, section 15.5 Loudness Compensation after Gain Interactivity, p.494; 3DoF audio rendering is inherently included in the MPEG-H 3D Audio standard), and one or more processors (element metadata preprocessor, section 18.1 Element Metadata Preprocessing, p.528, and section 18.11 Diffuseness Rendering, p.547) configured to (see the sketch after these limitations):
decode, based on the coded audio bitstream, a first syntax element (flagHRIR in the syntax of the frequency-domain binaural renderer parameters FdBinauralRendererParam, p.502) indicating whether reverb is enabled or disabled (the sparse reverberator is off if flagHrir=1 and on if flagHrir=0, table 257, p.506);
responsive to the first syntax element indicating that reverb is enabled:
decode, based on the coded audio bitstream, a plurality of room reverb coefficient sets for a room (represented by the filter coefficient sets bsFirCoefLeft[pos][i] and bsFirCoefRight[pos][i] in table 250, syntax of BinauralFirData, p.502, where pos indexes virtual loudspeaker positions from 0 to nBrirPairs-1, p.506, and i indexes the filter coefficients representing BRIR coefficients, table 250, p.502; and RT60[k], where k indexes late reverberation analysis bands, p.507), each of the room reverb coefficient sets corresponding to a different candidate position in the room (bsFirCoefLeft[pos][i] and bsFirCoefRight[pos][i] for a specific loudspeaker position pos, the coefficient set running from i=0 to bsNumCoefs, table 250, p.502); and
select, based on data generated by one or more sensors (a tracking device to track the user’s head and the scene displacement angles, section 17.9.1 Introduction, p.515, represented by the useTrackingMode flag in the BinauralRendering interface, table 249, section 17.4.2 Syntax of Binaural Renderer Interface, p.501, and defined in section 17.4.3 Semantics, p.504) of the XR headset (represented by φ_virtual,left and φ_virtual,right, obtained by adding the angle displacement φ_offset to the original φ_orig of the OAM from the metadata, p.530-531), coefficients (gains and angles are offset from and updated relative to the originals defined by the OAM of the metadata, section 18.1 Element Metadata Preprocessing, para 3 of p.531), for the benefit of achieving a virtual-space soundfield over headphones (WIRE interactivity, p.511, and section 17.4.1 Introduction, p.500-501).
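A schematic paraphrase of the cited syntax, using the field names from tables 250 and 257 (the toy reader, field ordering, and widths below are assumptions and do not reproduce the standard's actual bit layout):

    class Reader:
        """Toy value reader standing in for a real bitstream parser."""
        def __init__(self, values): self.values = list(values)
        def next(self): return self.values.pop(0)

    def parse_binaural_fir_data(r, nBrirPairs):
        # One left/right FIR coefficient set per virtual loudspeaker position
        # pos = 0 .. nBrirPairs-1, each i = 0 .. bsNumCoefs-1 (cf. table 250).
        bsNumCoefs = r.next()
        left = [[r.next() for _ in range(bsNumCoefs)] for _ in range(nBrirPairs)]
        right = [[r.next() for _ in range(bsNumCoefs)] for _ in range(nBrirPairs)]
        return left, right

    def sparse_reverberator_on(flagHrir):
        # Cited table 257 semantics: off if flagHrir == 1, on if flagHrir == 0.
        return flagHrir == 0

    left, right = parse_binaural_fir_data(Reader([2, 0.9, 0.1, 0.8, 0.2]),
                                          nBrirPairs=1)
    assert left == [[0.9, 0.1]] and right == [[0.8, 0.2]]
    assert sparse_reverberator_on(0) and not sparse_reverberator_on(1)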
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have applied (1) the soundfield having multiple degrees of freedom; (2) decoding, based on the coded audio bitstream, the first syntax element indicating whether reverb is enabled or disabled; (3) responsive to the first syntax element indicating that reverb is enabled, decoding, based on the coded audio bitstream, the plurality of room reverb coefficient sets for the room, each corresponding to a different candidate position in the room; and (4) selecting, based on data generated by one or more sensors of the XR headset, coefficients, as taught by ISOPart3, to the soundfield and the metadata decoding of the audio decoding device, as taught by Xiang, for the benefits discussed above.
However, the combination of Xiang and ISOPart3 does not explicitly teach selecting, based on the disclosed data generated by one or more sensors of the XR headset, a particular room reverb coefficient set, from the plurality of room reverb coefficient sets, that corresponds to a position of the XR headset.
Ehara teaches in an analogous field of endeavor by disclosing an audio decoder (title and abstract, ln 1-12; method steps implemented in the binaural renderer core in fig. 7, the signal including metadata from the decoded bitstream, para 26) in an extended reality (XR) headset (an HMD in a VR application, para 33, such as virtual/augmented reality headphones, para 4, i.e., an extended reality headset), wherein, based on data generated by one or more sensors of the XR headset (instant head-relative source positions based on at least user head tracking data from a head-tracking-enabled head-mounted device, para 10, via head-relative source position computation 310 in fig. 3), a particular room reverb coefficient set is selected (via BRIR selection in the binaural renderer core, through the summation across sources 701 and diffuse block processing 703, based on at least the calculated instant head-relative source positions when head tracking is enabled to measure the instant user head facing direction or position, para 31, 38, fig. 7) from the plurality of room reverb coefficient sets (from the parameterized BRIR frames in the direct block and remaining diffuse blocks, drawn from a BRIR database or interpolation of the BRIRs, figs. 3, 7) that corresponds to a position of the XR headset (head rotation and movement via the enabled head tracking, including user head rotation/movement, figs. 3, 7, para 31, 38; the selected BRIR frames in direct blocks are applied to element 701 and the selected BRIR diffuse blocks are applied to diffuse block processing (late reverberation processing) 703 through the BRIR selection block and downmix 702 in fig. 7, para 59), for the benefit of improving immersive audio scene perception (providing high spatial resolution at low computational complexity, para 8, 27, by convolution of the separated and selected BRIR direct and diffuse portions frame by frame according to the moving source, para 35).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have applied selecting, based on the data generated by one or more sensors of the XR headset, the particular room reverb coefficient set, from the plurality of room reverb coefficient sets, that corresponds to a position of the XR headset, as taught by Ehara, to the selection based on data generated by one or more sensors of the XR headset, as taught by the combination of Xiang and ISOPart3, for the benefits discussed above.
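A simplified 2D sketch of the BRIR-selection behavior attributed to Ehara above (names are hypothetical; nearest-angle lookup stands in for Ehara's actual selection and interpolation, and the y-axis is taken as the facing direction per the characterization above):

    import math

    def head_relative_angle(source_xy, head_xy, head_yaw_deg):
        """Source direction relative to the user's facing direction, in
        degrees; head rotation shifts the head-relative angle."""
        dx, dy = source_xy[0] - head_xy[0], source_xy[1] - head_xy[1]
        return (math.degrees(math.atan2(dx, dy)) - head_yaw_deg) % 360.0

    def select_brir(brir_db, source_xy, head_xy, head_yaw_deg):
        # brir_db maps a measured angle (degrees) to parameterized BRIR
        # frames; choose the nearest measured angle (interpolation omitted).
        ang = head_relative_angle(source_xy, head_xy, head_yaw_deg)
        def circ_dist(a): return min(abs(a - ang), 360.0 - abs(a - ang))
        return brir_db[min(brir_db, key=circ_dist)]

    # A source straight ahead stays "ahead" only until the head turns:
    db = {0.0: "BRIR@0", 90.0: "BRIR@90"}
    assert select_brir(db, (0, 1), (0, 0), head_yaw_deg=0.0) == "BRIR@0"
    assert select_brir(db, (0, 1), (0, 0), head_yaw_deg=-80.0) == "BRIR@90"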
Claim 8 recites an audio encoding device that operates in the opposite direction of the audio decoding device recited in claim 1 (Xiang, an audio encoder represented by content creator 22 in fig. 3; ISOPart3, SAOC encoder and object metadata codec, p.5, and 3D audio encoder in fig. C.1, p.651; Ehara, BRIRs associated with the production scene, such information being in the decoded bitstream to be received, para 26), and is thus rejected according to claim 1 above.
Claim 12 has been analyzed and rejected according to claim 1 above.
Claim 4: the combination of Xiang, ISOPart3, and Ehara further teaches wherein the XR headset further comprises a display worn by a wearer of the XR headset (Xiang, virtual speakers realized by a headset having headphones, para 64; ISOPart3, a screen defined by hasNonStandardScreenSize with certain assembled angles, p.490; Ehara, an HMD in a virtual reality (VR) application, para 33), except for explicitly teaching video outputted by the display to the wearer of the XR headset.
The Official Notice applied in the previous Office Action is taken as admitted prior art because the applicant failed to traverse the Office’s assertion, and it is restated here: video outputted by the display of an HMD to the wearer of the XR headset is notoriously well known in the art for providing an immersive, personal cinematic experience by filling the user’s entire field of view, creating a strong sense of spatial presence, and fully engaging the user in the video’s environment.
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have applied video outputted by the display to the wearer of the XR headset, as well known in the art, to the display of the XR headset, as taught by the combination of Xiang, ISOPart3, and Ehara, for the benefits discussed above.
Claim 15 has been analyzed and rejected according to claims 12 and 4 above.
Claims 2-3, 9-10, and 13-14 are rejected under 35 U.S.C. 103 as being unpatentable over Xiang (above) in view of ISOPart3 (above) and Ehara (above), and further in view of Kraemer et al. (US 20130202129 A1, hereinafter Kraemer).
Claim 2: the combination of Xiang, ISOPart3, and Ehara further teaches, according to claim 1 above, wherein the one or more processors are further configured to decode, based on the coded audio bitstream, other syntax elements (ISOPart3, e.g., a center frequency of the late reverberation analysis bands f_c,ana, and QTDL gains g_ki,m,real and g_ki,m,imag in the complex domain, section 13.2.3.1 Introduction, p.432), except for explicitly disclosing that the other syntax elements also include a second syntax element indicating whether doppler is enabled or disabled, wherein the one or more processors are further configured to render the speaker feeds selectively using doppler based on the second syntax element indicating whether doppler is enabled or disabled.
Kraemer teaches in an analogous field of endeavor by disclosing an audio decoding device (title and abstract, ln 1-15; an audio signal decoder in the user system 140 of figs. 1A/1B, para 33), wherein a second syntax element is disclosed (the index element ENABLE_DOPPLER in table 1, para 62) and one or more processors are disclosed (multiple processors or processor cores, para 109) configured to decode, based on the coded audio bitstream (through the streaming module 120A in figs. 1A/1B, transmitted from the encoder in 110A to the decoder in 140 in figs. 1A/1B), the second syntax element indicating whether doppler is enabled or disabled (the object metadata includes the object attributes listed in table 1, para 62; element 112A encodes the audio objects together with associated attribute metadata, para 29; the renderer 142A of the user system 140 decodes the encoded audio objects for rendering via one or more loudspeakers, para 33), wherein the one or more processors are further configured to render the speaker feeds selectively using doppler based on the second syntax element indicating whether doppler is enabled or disabled (the index ENABLE_DOPPLER in table 1 determines whether the doppler process is performed, para 62; audio signals 244 to the speakers 250 in fig. 2), for the benefit of enhancing 3D sound rendering in a more accurate manner by considering attributes of sound sources such as location, velocity, directivity, etc. (para 3-4).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have applied the one or more processors in the audio decoding device being configured to decode, based on the coded audio bitstream, the second syntax element indicating whether doppler is enabled or disabled, and to render the speaker feeds selectively using doppler based on that second syntax element, as taught by Kraemer, to the decoding of the other syntax elements in the audio decoding device, as taught by the combination of Xiang, ISOPart3, and Ehara, for the benefits discussed above.
Claim 3: the combination of Xiang, ISOPart3, Ehara, and Kraemer further teaches, according to claim 2 above, wherein the one or more processors are further configured to:
decode, based on the coded audio bitstream (Xiang, as discussed above, and Kraemer, decoding the encoded object metadata including the object attributes in table 1, para 62), a third syntax element indicating whether occlusion is enabled or disabled (Kraemer, the attribute ENABLE_OBSTRUCTION in table 1 continued, para 62), wherein the one or more processors are further configured to render the speaker feeds selectively using occlusion based on the third syntax element indicating whether occlusion is enabled or disabled (Kraemer, the obstruction listed in tables 2/3, depending on how the sound sources are occluded or blocked, para 62).
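A minimal sketch of the flag-gated rendering described for claims 2 and 3 (the attribute names follow the citations above; the gating flow and the placeholder effect callables are illustrative assumptions, not Kraemer's implementation):

    def render_object(samples, attrs, apply_doppler, apply_obstruction):
        """Gate optional effects on per-object metadata flags, in the
        spirit of the ENABLE_DOPPLER / ENABLE_OBSTRUCTION attributes."""
        out = samples
        if attrs.get("ENABLE_DOPPLER"):       # second syntax element (claim 2)
            out = apply_doppler(out, attrs)
        if attrs.get("ENABLE_OBSTRUCTION"):   # third syntax element (claim 3)
            out = apply_obstruction(out, attrs)
        return out

    # Illustrative use: identity effects, just to show the gating.
    same = render_object([0.1, 0.2],
                         {"ENABLE_DOPPLER": 0, "ENABLE_OBSTRUCTION": 0},
                         lambda s, a: s, lambda s, a: s)
    assert same == [0.1, 0.2]   # both flags off: signal passes through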
Claim 9 has been analyzed and rejected according to claims 8, 2 above.
Claim 10 has been analyzed and rejected according to claims 9, 3 above.
Claim 13 has been analyzed and rejected according to claims 12, 2 above.
Claim 14 has been analyzed and rejected according to claims 13, 3 above.
Claims 5-7, 11, and 16-17 are rejected under 35 U.S.C. 103 as being unpatentable over Xiang (above) in view of ISOPart3 (above) and Ehara (above), and further in view of Terentiv et al. (WO 2019197404 A1, equivalent to US 20210168550 A1, which is applied hereinafter; hereinafter Terentiv).
Claim 5: the combination of Xiang, ISOPart3, and Ehara further teaches that the soundfield has multiple degrees of direction (Xiang, soundfields in different directions defined by different orders in HOA and FOA in fig. 1; ISOPart3, HOA and objects encoded by a 3D audio encoder in fig. C.1, p.651, and decoded by the 3D audio decoder in fig. 1, p.3).
However, the combination of Xiang, ISOPart3, and Ehara does not explicitly teach wherein the soundfield has six degrees of freedom, and wherein the multiple-degree-of-freedom audio renderer comprises a six-degree-of-freedom audio renderer.
Terentiv teaches in an analogous field of endeavor by disclosing an audio decoding device (title and abstract, ln 1-10, and the right side of fig. 1, including the decoder 104, 3D audio renderer 105, proponent renderer extensions 107, etc.), wherein a soundfield is disclosed (the soundfield in fig. 2) having six degrees of freedom (a 6DoF acoustic scene realized in fig. 2 and in the defined MPEG-I data structure in fig. 3; MPEG-H 3DA includes a 3DoF soundfield, para 4), and wherein a multiple-degree-of-freedom audio renderer comprises a six-degree-of-freedom audio renderer (a six-degree-of-freedom audio renderer in figs. 4A/4B, and the decoding method in fig. 9: receive the 3DoF and 6DoF bitstream at S901 and, while the 6DoF renderer is activated, render 6DoF audio based on the approximated/restored audio signals and listener position at S907, etc., in fig. 9), for the benefit of improving the efficiency of a rich soundfield scene in a flexible manner (using a 3DoF audio signal to achieve a 6DoF soundfield, reducing the bitrate requirement, para 69, and providing backwards compatibility with 3DoF audio decoding and rendering, para 136).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have applied the soundfield having six degrees of freedom (6DoF), wherein the multiple-degree-of-freedom audio renderer comprises a six-degree-of-freedom audio renderer, as taught by Terentiv, to the soundfield and the coded audio bitstream in the audio decoding device, as taught by the combination of Xiang, ISOPart3, and Ehara, for the benefits discussed above.
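The 3DoF/6DoF distinction at issue can be illustrated with a small sketch (an assumption-laden simplification with hypothetical names, not Terentiv's implementation): only in the 6DoF path does the listener's translation alter a source's apparent position, whereas a 3DoF path responds to orientation only.

    def apparent_source_position(src_pos, listener_pos, six_dof_active):
        """In a 6DoF scene the listener's translation changes a source's
        apparent position; in a 3DoF scene only orientation matters, so
        translation is ignored (cf. the S904 vs. S907 paths above)."""
        if six_dof_active:
            return tuple(s - l for s, l in zip(src_pos, listener_pos))
        return tuple(src_pos)  # 3DoF: walking toward the source changes nothing

    # Walking 2 m toward a source at (0, 3, 0):
    assert apparent_source_position((0, 3, 0), (0, 2, 0), True) == (0, 1, 0)
    assert apparent_source_position((0, 3, 0), (0, 2, 0), False) == (0, 3, 0)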
Claim 6 has been analyzed and rejected according to claims 5 and 1 above, and the combination of Xiang, ISOPart3, Ehara, and Terentiv further teaches that the soundfield has three degrees of freedom, and wherein the multiple-degree-of-freedom audio renderer comprises a three-degree-of-freedom audio renderer (Xiang, soundfields in different directions defined by different orders in HOA and FOA in fig. 1; ISOPart3, HOA and objects encoded by a 3D audio encoder in fig. C.1, p.651, and decoded by the 3D audio decoder in fig. 1, p.3; Terentiv, the 3DoF audio signal rendered at S904 in fig. 9, supporting MPEG-H 3DA data and MPEG-I data in fig. 3).
Claim 7: the combination of Xiang, ISOPart3, Ehara, and Terentiv further teaches wherein the multiple-degree-of-freedom audio renderer comprises a metadata interface that is configured to receive the first syntax element (Xiang, metadata containing location coordinates of the object-based audio, para 22; ISOPart3, the flagHrir syntax and the metadata interface for HOA, section 17.10.6 Audio PCM data, p.528, including flag_spread, flag_spread_depth, flag_spread_height, and flag_spread_width, section Definition of dynamic_object_metadata payloads, p.230-231, and the reverb syntax flagHrir in the BinauralRendering configuration, section 17.4.2 Syntax of Binaural Renderer Interface, p.501-506; Ehara, metadata modified by user head tracking data, para 8, and the discussion in claim 1 above; Terentiv, metadata associated with 6DoF audio rendering information, abstract).
Claim 11: the combination of Xiang, ISOPart3, Ehara, and Terentiv further teaches wherein the one or more processors are further configured to generate, based on signals generated by one or more microphones (Xiang, a microphone array configured to capture the soundfield, para 28; ISOPart3, using virtual microphones to capture the soundfield from directions in fig. 74, p.418; Terentiv, the soundfield recorded as 3DoF, with metadata associated with 6DoF audio rendering generated and encoded into the bitstream, abstract).
Claim 16 has been analyzed and rejected according to claims 12, 5 above.
Claim 17 has been analyzed and rejected according to claims 12, 7 above.
Response to Arguments
Applicant's arguments filed on December 9, 2025 have been fully considered but they are not persuasive:
With respect to the prior art rejection of independent claim 1 under 35 U.S.C. § 103, as set forth in the Office Action, the Applicant challenged the prior art Ehara, arguing that “Ehara does not teach measuring where the headset is located within the physical or virtual room, e.g., center of the room vs. a corner, to select a different room reverb set” because Ehara “comput[es] instant head-relative positions of the audio sources with respect to the position of user head and facing direction ([0008], [0009])”, i.e., “calculates the position of the source relative to the head to maintain spatial stability of the object”, etc., as asserted in paragraphs 1-6 of page 8 of the Remarks filed on December 9, 2025.
In response to the argument above, the Office respectfully disagrees. Ehara clearly teaches not only the “calculation” of the “source position”, but also that such “calculation” is based on Ehara’s enabled “head tracking” (para 31), including measurement of user head rotation while the positions of the audio sources are expected to remain invariant relative to the rotation of the user’s head (para 33); e.g., the direction of the y-axis indicates the user’s facing direction, and the sources are plotted according to their two-dimensional head-relative positions computed from 301 with respect to the user (para 45). That is, the selection of the reverb is based on the measurement of the listener’s head position through the calculation of the source positions. Therefore, absent any citation as to how the selected “particular room reverb coefficient set” must rely on “a position of the XR headset”, Ehara’s disclosure above teaches the broadly claimed and argued feature, a point on which the applicant is silent, and the argument above is not persuasive.
Applicant further argued, regarding the claimed feature of “a particular room reverb coefficient set” from a “plurality of room reverb coefficient sets” for specific positions, from which “a specific set is selected based on the headset position”, that Ehara states “diffuse blocks contain less directional information, they will be used in the late reverberation processing module 703 in FIG. 7 which processes a downmix version of the source signals …” and “for each diffuse block w, due to that a downmix processing … is applied on the source signals, the late reverberation processing only needs to be performed once”, etc., as asserted in paragraphs 1-3 of page 9 of the Remarks filed on December 9, 2025.
In response to the argument above, the Office further disagrees. The claimed “particular” refers to the “selection” of the “room reverb coefficient set”, and that selection specifically relies on “a position of the XR headset”, as opposed to the argued “specific” processing of the audio signal; the claim broadly recites “render” “selectively using reverb” based on “using” the selected “particular room reverb coefficient set”, with no recitation of how “particular” or “specific” bears on whether “processing” occurs “once” or not, and thus, on this point, Ehara does not teach away from the broadly claimed subject matter. Moreover, as to the “once” the applicant argued, the applicant appears to misinterpret Ehara’s “once” as referring to time or frames, whereas Ehara clearly discloses that “once” refers to the number of sources, i.e., computing the reverberation once regardless of how many sound sources there are, by taking advantage of the “downmix” (compared to the conventional separate computation for K source signals, para 63), rather than the argued “once” regardless of the listener’s or head position. The argument above is thus also not persuasive.
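For illustration of the linearity point above (not drawn from Ehara; the data and sizes below are arbitrary): applying a shared late-reverberation response to a downmix of K sources is mathematically identical to applying it to each source separately and summing, which is why the late-reverberation processing need only run once regardless of K.

    import numpy as np

    rng = np.random.default_rng(0)
    K = 4                                     # number of sound sources
    sources = rng.standard_normal((K, 256))   # K source signals
    late_tail = rng.standard_normal(64)       # shared diffuse-block response

    # Conventional: K separate late-reverberation convolutions, one per source.
    per_source = sum(np.convolve(s, late_tail) for s in sources)

    # Downmix first, then reverberate once: one convolution regardless of K.
    once = np.convolve(sources.sum(axis=0), late_tail)

    assert np.allclose(per_source, once)      # identical by linearity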
Applicant further challenged the claimed feature “different candidate positions in the room”, arguing that the claim covers “a user mov[ing] from the center of a room to a corner, … regardless of where the audio source is located relative to the head”, etc., as asserted in paragraphs 4-5 of page 9 and paragraphs 1-2 of page 10 of the Remarks filed on December 9, 2025.
In response to the argument above, the Office further disagrees because
(1) although the claims are interpreted in light of the specification, limitations from the specification are not read into the claims (see In re Van Geuns, 988 F.2d 1181, 26 USPQ2d 1057 (Fed. Cir. 1993) and MPEP 2145); therefore, the argued “regardless of where the audio source is located relative to the head”, etc., would not properly be read into the claim, and in fact the Office did not find any support for it in the application specification;
(2) Ehara’s disclosure concerns the relative location or rotation of the user with respect to the sound sources (via the enabled head tracking, para 31), because such head tracking is essential for Ehara to select a set of BRIRs from the BRIR database (discussed above). The applicant appears to have intended to argue how the prior art differs from the application’s disclosure regarding audio signal processing, but as to the audio signal processing, the claim broadly recites “render” “selectively using reverb” based on the “syntax” and on the selected “particular room reverb coefficient set”, and then “speaker feeds”, with no recitation of the argued “regardless of where the audio source is located relative to the head” or otherwise; and
(3) the claim broadly recites “a different candidate position in the room” with respect to “each of the room reverb coefficient sets”, but fails to recite whether the “different candidate position in the room” refers to a source position or the listener’s position (nor does the application specification disclose whether it refers to the listener’s position or a sound source position; see para [68], [73], [114], [119], etc., of the application’s USPGPub 20240274141 A1). As discussed in the Office Action above, ISOPart3 teaches this relationship between a reverb coefficient set (represented by the filter coefficients) and a position (the loudspeaker position represented by “pos”), which reads on the broadly recited “different candidate position in the room”; thus, the argument above is also not persuasive.
Applicant further challenged the other prior art by stating that “neither Xiang nor ISO part 3 Cures the Deficiency of Ehara”; however, because Ehara teaches the claimed and argued matters, as discussed above, the other references need not teach the features Ehara has taught, and the argument about the other prior art is likewise not persuasive.
In response to this Office Action, the Office respectfully requests that support be shown for language added to any original claims by amendment and for any new claims; that is, indicate support for newly added claim language by specifically pointing to the page(s) and line numbers in the specification and/or the drawing figure(s). This will assist the Office in prosecuting this application.
Conclusion
THIS ACTION IS MADE FINAL. Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to LESHUI ZHANG, whose telephone number is (571) 270-5589. The examiner can normally be reached Monday-Friday, 6:30am-4:00pm EST.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Vivian Chin, can be reached at 571-272-7848. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of an application may be obtained from the Patent Application Information Retrieval (PAIR) system. Status information for published applications may be obtained from either Private PAIR or Public PAIR. Status information for unpublished applications is available through Private PAIR only. For more information about the PAIR system, see http://pair-direct.uspto.gov. Should you have questions on access to the Private PAIR system, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative or access to the automated information system, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/LESHUI ZHANG/
Primary Examiner, Art Unit 2695