DETAILED ACTION
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
This Office Action is in response to the claim amendment filed on February 5, 2026, in which claims 1, 5-7, 11, 13, and 17-20 were amended and claims 3-4, 8, and 15-16 were cancelled.
Accordingly, claims 1-2, 5-7, 9-14, and 17-20 are currently pending and addressed in this Office Action.
The Office appreciates the explanation of the amendment and the analysis of the prior art; however, although the claims are interpreted in light of the specification, limitations from the specification are not read into the claims. See In re Van Geuns, 988 F.2d 1181, 26 USPQ2d 1057 (Fed. Cir. 1993) and MPEP 2145.
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102 of this title, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 1-2, 5-7, 9-11, 13-14, and 17-20 are rejected under 35 U.S.C. 103 as being unpatentable over Lee et al. (US 20200162833 A1, hereinafter Lee) in view of Family et al. (US 20200367009 A1, hereinafter Family) and Jot et al. (US 6188769 B1, hereinafter Jot).
Claim 1: Lee teaches a method for spatial audio rendering (title and abstract, ln 1-19, method steps in fig. 2 and implemented in an audio play apparatus in fig. 3), comprising:
determining a parameter (outputted and modified metadata and configuration info from the metadata and interface data processor 204, similar to 304 in fig. 4, used by renderer 202, and paramL and paramR outputted from parameterization 2055 in fig. 3) for spatial audio rendering based on metadata (based on object metadata 201b, other metadata 206a-207c, user head info 211, user position info 212, and environmental info 213 in fig. 3), wherein the metadata comprises at least a part of acoustic environment information (environmental info 213), listener spatial information (user position information 212, moved or static position, para 63), and sound source spatial information (object metadata 201b, or modified object metadata of para 56-58, containing azimuth angle, elevation angle, gain, etc., for each object in the reconstructed audio scene, para 70, a location of a speaker 206f in the local speaker layout, para 56, etc.), and the parameter for spatial audio rendering indicates a characteristic of sound propagation in a scene (playing back the processed audio objects or channels through speakers or headphones to achieve 3D audio in a 6DoF environment, e.g., by specifying a flag is6DoFMode, para 17-18, and modeling an early reflection sound path and a direct sound path from the sound source to the listener based on the environmental info, figs. 7a-7b, para 83-84) in which a listener is located (the listener is defined and located by user position info 212);
processing an audio signal of a sound source (outputted from the MPEG-H 3D audio core decoder 201, including channel-based, object-based, and scene-based HOA signals, from a bitstream, as the sound source in the decoded signal 201a, etc., in fig. 3, para 54, and processed by renderer 202 and binaural renderer 203 in fig. 3) based on the parameter for spatial audio rendering (based on the metadata or modified metadata and configuration info outputted from the metadata and interface data processor, and based on paramL and paramR in fig. 3, para 59-60), so as to obtain an output signal (OutL and OutR in fig. 3, para 67),
wherein the parameter for spatial audio rendering comprises a set of spatial impulse responses (601, 602 in fig. 5 and the HRIR or distance-compensated HRIR, para 66) and a reverb duration (603 in fig. 5), and the set of spatial impulse responses comprises a spatial impulse response for a direct sound path (direct path 601 in fig. 5) and a spatial impulse response for an early reflection sound path (early reflection path 602 in fig. 5).
However, Lee does not explicitly teach so as to obtain an encoded audio signal; and
performing spatial decoding on the encoded audio signal, so as to obtain a decoded audio signal; nor does Lee explicitly teach wherein the processing the audio signal of the sound source based on the parameter for spatial audio rendering, so as to obtain the encoded audio signal comprises: performing spatial audio encoding directly on the audio signal of the sound source using the spatial impulse response for the direct sound path and the spatial impulse response for the early reflection sound path, respectively, so as to obtain a spatial audio encoded signal of the direct sound and a spatial audio encoded signal of early reflection sound, respectively.
Family teaches an analogous field of endeavor by disclosing a method for spatial audio rendering (title and abstract, ln 1-13, a method in figs. 8 and 11, implemented in a spatial audio system in figs. 1A-1C) and wherein processing an audio signal of a sound source (sound information having channel-based, object, soundfield, etc., formats in fig. 11, para 257) based on a parameter (including telemetry data associated with a user, e.g., positions of sound objects in the 6DoF domain, para 255, and a map used for mapping an input sound source to virtual sound sources, e.g., figs. 12A-12B, etc., para 254) for spatial audio rendering (spatial rendering of the audio to a sweet spot of a listener, para 231), so as to obtain an encoded audio signal (e.g., output from the system encoder 1112 or output from a cell encoder 1152-1, …, 1152-n in fig. 11); and performing spatial decoding on the encoded audio signal (via a system decoder 1132 or a cell decoder 1172-1, …, 1172-n in fig. 11), so as to obtain a decoded audio signal (the per-speaker n-channel output 1142 from the system decoder 1132 or the transducer information 11812 from the cell decoder 1172-1, …, 1172-n in fig. 11), for the benefits of improving the perceived quality of rendered sound fields (e.g., through feedback regarding placement and/or orientation of cells within a particular space in a mobile application, para 222) in a flexible manner (many different layouts for the cells, para 223, adapted to the number and configuration of the cells and/or loudspeakers, para 247), and improving the performance of the spatial rendering system (being capable of creating any number of audio objects based on the number of channels used to encode the audio source, para 230, 247).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have applied obtaining the encoded audio signal and performing the spatial decoding on the encoded audio signal, so as to obtain the decoded audio signal, as taught by Family, to the processing of the audio signal of the sound source based on the parameter for the spatial audio rendering, so as to obtain the output signal in the method, as taught by Lee, for the benefits discussed above.
However, the combination of Lee and Family does not explicitly teach wherein the processing the audio signal of the sound source based on the parameter for spatial audio rendering, so as to obtain the encoded audio signal comprises: performing spatial audio encoding directly on the audio signal of the sound source using the spatial impulse response for the direct sound path and the spatial impulse response for the early reflection sound path, respectively, so as to obtain a spatial audio encoded signal of the direct sound and a spatial audio encoded signal of early reflection sound, respectively.
Jot teaches an analogous field of endeavor by disclosing a method for spatial audio rendering (title and abstract, ln 1-13, and a method implemented in a system in figs. 2-3) and wherein an encoded audio signal is obtained (through a main bus via pan direct 46d and the encode matrix in figs. 3-4, col 3, ln 31-35); and spatial decoding is performed on the encoded audio signal (5.0 and LCRS formats outputted from the output decoder by taking the signal outputted from the main bus in figs. 3-4), so as to obtain a decoded audio signal (outputted from the output decoder in figs. 3-4), and wherein the processing the audio signal of the sound source based on the parameter for spatial audio rendering is disclosed (one or two source channel signals are processed with parameters represented by delays, doppler, filters 42d, 42e, 42r and attenuations 44d, 44e, 44r, etc., in fig. 3), so as to obtain the encoded audio signal (discussed above, outputted from the main bus in figs. 3-4) comprises: performing spatial audio encoding directly on the audio signal of the sound source (input to the delay/doppler 40d in fig. 3 and pitch-shifter/doppler 40d in fig. 4) using the spatial impulse response for the direct sound path (processed by using path 40d, 42d, 44d in figs. 3-4, by using parameters representing the DIRECT path as part of the room impulse response in figs. 1-2, and the encoding is adapted with the pan direct 46d to the movement of the listener 10 with sound sources S1, S2, and S3 in fig. 2) and the spatial impulse response (input to the delay 40e, same as the input to the delay/doppler 40d in fig. 3 and 62 in fig. 4) for the early reflection sound path (processed by using path 40e, 42e, 44e in figs. 3-4, by using parameters representing the room REFLECTION path as another part of the room impulse response in figs. 1-2, and the encoding is adapted with the pan early 46e to the movement of the listener 10 with sound sources S1, S2, and S3 in fig. 2), respectively, so as to obtain a spatial audio encoded signal of the direct sound and a spatial audio encoded signal of early reflection sound, respectively (inputs to the main bus from the PAN DIRECT 46d and PAN EARLY 46e in fig. 3, col 3, ln 16-30), for the benefits of improving the performance of audio sound reproduction (by differently distributing contributions of the direct and reflection paths adapted to movement of the listener, col 4, ln 19-26 and 54-56) in a cost saving manner (col 1, ln 62-65).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have applied wherein the processing the audio signal of the sound source based on the parameter for spatial audio rendering, so as to obtain the encoded audio signal comprises: performing the spatial audio encoding directly on the audio signal of the sound source using the spatial impulse response for the direct sound path and the spatial impulse response for the early reflection sound path, respectively, so as to obtain the spatial audio encoded signal of the direct sound and the spatial audio encoded signal of early reflection sound, respectively, as taught by Jot, to the processing of the audio signal of the sound source based on the parameter for spatial audio rendering, so as to obtain the encoded audio signal, as taught by the combination of Lee and Family, for the benefits discussed above.
Claim 13 has been analyzed and rejected according to claim 1 above, and the combination of Lee, Family, and Jot further teaches an electronic device (Lee, an audio play apparatus in fig. 3 and Family, a spatial audio system in figs. 1A-1C or the hardware of a cell in fig. 33 and Jot, the system in figs. 3-4), comprising: a memory (Lee, the computer readable media, para 128 and Family, a memory 3240 in cell 3200, para 336, a processor system, para 28, and memory 3318 in fig. 33); and a processor (Lee, the MPEG-H 3D Audio Core Decoder 201, metadata processor 204, etc., in a computer, para 128 and Family, a processor 3314 in fig. 33) coupled to the memory (Lee, inherent for the computer, e.g., via an internal bus and Family, the processor coupled with the memory 3318 through a bus in fig. 33, para 342), the processor being configured to perform, based on instructions stored in the memory, the steps of the method of claim 1 (Lee, it is inherent that the computer runs software or instructions stored in the memory to implement the method, para 128 and Family, instructions and code segments, program code, programs, and software modules executed by the processor 3314, para 343).
Claim 20 has been analyzed and rejected according to claims 1 and 13 above.
Claim 2: the combination of Lee, Family, and Jot further teaches, according to claim 1 above, wherein the determining the parameter for spatial audio rendering (Lee, the discussion in claim 1 above and Family, discussed in claim 1 above) comprises:
estimating, based on the acoustic environment information (Lee, including room characteristics info 110 in fig. 1 and Family, including the room layout in figs. 6A-6B, 7A-7B, 8A-8D), a scene model approximate to the scene where the listener is located (Lee, environmental modeling 1061 using user position info 109 and room characteristics info 110, and head-related impulse response HRIR modeling 1062 using the user’s head info 111 in fig. 1 and Family, scene detection by visual detection, para 352, or a scene created and rendered as a sound field, para 212 and Jot, a sound cone in fig. 6 formed in the room of fig. 2); and
calculating the parameter for spatial audio rendering based on at least a part of the estimated scene model (Lee, via a synthesizing 1063 in fig. 1 and Family, discussed in claim 1 above), the listener spatial information (Lee, user position information 212, moved or static position, para 63 and Family, the listener’s location as a sweet spot, e.g., the listener’s position surrounded by a circle and on the surface of a sphere, para 198), and the sound source spatial information (Lee, object metadata 201b, or modified object metadata of para 56-58, containing azimuth angle, elevation angle, gain, etc., for each object in the reconstructed audio scene, para 70, a location of a speaker 206f in the local speaker layout, para 56, etc. and Family, spatial layout metadata 914 provided to cells for rendering audio signals corresponding to specific audio objects, para 236).
Claim 5: the combination of Lee, Family, and Jot further teaches, according to claim 1 above, wherein the reverb duration is calculated based on the estimated scene model (Lee, the synthesized RIR 2052a obtained through an adder 20523 in fig. 6, where the model includes a late reverberation modeling 20522 that generates the decay time r2 in fig. 5, as the claimed reverb duration, para 77 and Family, a means used to control the ratio of direct to reverberant sound, para 204, by part of the spatial audio system, para 231 and Jot, the reverb time related to the source-listener distance in fig. 7, col 8, ln 23-29).
Claim 6: the combination of Lee, Family, and Jot further teaches, according to claim 1 above, wherein the set of spatial impulse responses is calculated based on the estimated scene model (Lee, through early reflection modeling 20521 and late reverberation modeling 20522 in fig. 6 and Family, the RIR generated by cells, para 202), the listener spatial information, and the sound source spatial information (Lee, by using user position info, and speaker layout info containing azimuth angle, elevation angle, and a distance based on a view and position where a user is looking at a front side, para 70 and Family, based on the room acoustic characteristics, para 202 and Jot, including the source-listener distance and angles in fig. 7, and listener direction and position, etc., col 7, ln 31-39).
Claim 7: the combination of Lee, Family, and Jot further teaches, according to claim 1 above, wherein the encoded audio signal (Lee, the output from applying BRIR 1063a in fig. 2, the BRIR containing three kinds of response characteristics in fig. 5, para 77 and Jot, the output from the main bus in fig. 3) comprises: a spatial audio encoding signal of a direct sound (Lee, the portion generated by applying the direct portion r1 601 of the BRIR on the final channel signal 103a in fig. 5, para 77 and Family, the encoded signal from the system encoder 1112, or from a cell encoder 1152-1, …, 1152-n in fig. 11, including channels and objects as representations of direct sound and Jot, the output of the PAN DIRECT 46d in fig. 3), and/or a mix signal, wherein the mix signal comprises a spatial audio encoding signal of late reverb and/or a spatial audio encoding signal of an early reflection sound (Lee, the portions generated by applying the early reflection r2 602 and the late reverberation r3 603 on the final channel signal 103a in fig. 1, para 77 and Family, upmixing and/or downmixing of channels of an audio source for rendering a number of audio objects, para 203, including mixed ambisonics, para 225, and diffuse channels generated by upmixing, para 233 and Jot, the output of the encode matrix in fig. 3).
Claim 9: the combination of Lee, Family, and Jot further teaches, according to claim 7 above, wherein the mix signal is obtained by: determining, based on the reverb duration and the audio signal of the sound source, the mix signal (Lee, the reverb duration r3 as the portion of the BRIR in fig. 5, applied to the final channel signal 103a through the binaural renderer 104 to generate Left 104a and Right 104b containing the reverb portion and the early reflection portion due to BRIR 1063a in fig. 1, para 41 and Family, upmixing to create diffuse channels, para 233 and Jot, the results related to the source-listener distance in fig. 7 and the discussion in claim 7 above).
Claim 10: the combination of Lee, Family, and Jot further teaches, according to claim 9 above, wherein the determining, based on the reverb duration (Lee, r3 of the BRIR or HRIR in fig. 5 and from HRIR modeling 2051 in fig. 3 and Jot, the reverb time with the source and listener distance in fig. 7) and the audio signal of the sound source (Lee, represented by r in fig. 8 and as the input audio signal to the reverberation generator in fig. 8 and Jot, fig. 7, col 8, ln 23-29 and the discussion in claim 9 above), the mix signal (Lee, applying rlate on the final channel signal 103a to generate the portion corresponding to the r2 reverberation portion in figs. 5 and 8) comprises: determining, according to a distance between the listener (Lee, represented by the compensated HRIR 2053a/2053b in fig. 3, for a distance or location change between the listener and the sound source, para 62) and the sound source (Lee, via distance compensation 2053 for adapting to a change of distance due to a change of the listener’s position, and thus the relative distance between the listener and the audio source, for the updated HRIR data 2053a, para 62-64) and the audio signal of the sound source (Lee, the signal r in fig. 8), a reverb input signal (Lee, the FDN-based algorithm in fig. 8, through the adders of the output signal from matrix A as a portion of the reverb input signal until achieving a length r3 603 in fig. 5, obtaining rlate in fig. 8 and Jot, fig. 7 and col 8, ln 23-29, discussed above); and performing, based on the reverb duration, artificial reverb processing on the reverb input signal to obtain the mix signal (Lee, through the reverb modeling in fig. 8, obtaining the rlate signals and Jot, the mixed signal from the main bus in fig. 3).
Claim 11: the combination of Lee, Family, and Jot further teaches, according to claim 7 above, wherein the mix signal is obtained by: performing, based on the reverb duration, artificial reverb processing on the spatial audio encoded signal of the early reflection sound (Jot, through the REVERB AND REVERB BUS in reverberation block 52 in fig. 4, col 3, ln 31-52), to obtain the mix signal mixed from the spatial audio encoded signal of the early reflection sound and the spatial audio encoding signal of the late reverb (Lee, through the early portion at the first number of loops in fig. 8, where further loops through matrix A inherently become the late reverberation portion from the continuous delay of the early reflection portion in fig. 8 and Family, the ratio of the direct sound component to the reverberant or diffuse component is controlled within cells, para 204, 224 and Jot, the input to the main bus in fig. 4).
Claim 14 has been analyzed and rejected according to claims 13 and 2 above.
Claim 15 has been analyzed and rejected according to claims 14 and 3 above.
Claim 16 has been analyzed and rejected according to claims 15 and 4 above.
Claim 17 has been analyzed and rejected according to claims 15 and 5 above.
Claim 18 has been analyzed and rejected according to claims 15 and 6 above.
Claim 19 has been analyzed and rejected according to claims 15 and 7 above.
Claim 12 is rejected under 35 U.S.C. 103 as being unpatentable over Lee in view of Family and Jot (all cited above), and further in view of Schissler et al. (US 9940922 B1, hereinafter Schissler).
Claim 12: the combination of Lee, Family, and Jot further teaches a smart device (Lee, a computer, para 128, smart devices, para 2, or an MPEG-H 3D audio encoding/decoding device, para 3, and Family, an apparatus 3300 in fig. 33, para 338, applied to mobile devices and other consumer electronic devices, para 211 and Jot, the system in figs. 3-4), comprising: at least one processor (Lee, the processor in a computer is inherent, including the decoder, renderer 202, metadata and interface processor 204, etc., para 128 and Family, the processor 3314 in fig. 33) and an interface (Lee, the interface to the media recording a program, para 128 and Family, the bus 3322 as the interface in fig. 33), the interface being configured to provide computer-executable instructions to the at least one processor, the at least one processor being configured to perform the computer-executable instructions to implement the method for spatial audio rendering according to claim 1 (Lee, an interface between the processor and the media is inherent for the computer, e.g., an internal bus, etc., and Family, the bus coupling the processor 3314 to the memory 3318).
However, the combination of Lee, Family, and Jot does not explicitly teach a chip comprising a processor for implementing the method of claim 1.
Schissler teaches an analogous field of endeavor by disclosing a method for spatial audio rendering (title and abstract, ln 1-15, a method in fig. 8 implemented in a rendering device 100) and wherein the method can be implemented by a chip having processors (the method practiced on a 4-core Intel i7 4770k CPU or on a Google Pixel XL phone with a 2+2 core Snapdragon 821 chipset, col 16, ln 36-40) and an interface (the chip connected to chip memory, col 1, ln 58-63) for the benefits of efficient 3D audio rendering (a 9-15 times improvement compared to other approaches, col 7, ln 38-43, optimized by using Intel CPUs, col 16, ln 49-58) with less power consumed (using fewer threads to render the audio, col 17, ln 12-14).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have applied the chip having the processor and the interface, as taught by Schissler, to the device for implementing the method steps of claim 1, as taught by the combination of Lee, Family, and Jot, for the benefits discussed above.
Allowable Subject Matter
Claim 11 is objected to as being dependent upon rejected base claims 1 and 7, but would be allowable if rewritten in independent form including all of the limitations of the base claims and any intervening claims.
Examiner Comments
Claim 11 is rejected under 35 USC 112(b), as set forth above, for indefiniteness issues, which render the scope of each feature, and thus of the claimed invention, unclear. Therefore, at this point, a prior art rejection of the claim would not be considered proper, as it would have to be based on mere assumptions and considerable speculation about the scope of the claim; see MPEP 2173.06. It is noted that the rejection under 35 USC 112(b) without application of a prior art rejection is not an indication that the instantly amended claims are patentable. It is also noted that, as best understood in view of the claim rejection under 35 USC 112(b), the prior art search has been updated by the examiner and recorded in the attached PTO-892 form.
Response to Arguments
Applicant's arguments in the Remarks filed on February 5, 2026 have been fully considered but are moot in view of the new ground(s) of rejection necessitated by applicant's amendment. Although a new ground of rejection has been used to address the additional limitations that have been added to claims 1, 5-7, 11, 13, and 17-20, a response is considered necessary for several of applicant’s arguments, since the references Lee and Family will continue to be used to meet several claimed limitations that applicant argued, as discussed below.
With respect to the prior art rejection of independent claim 1, and similarly independent claims 12-13 and 20, under 35 USC § 103, as set forth in the Office Action, applicant argued features that are not recited in independent claims 1, 12-13, and 20, such as “The present application is directed to … the core … is to encode and decode original sound source signals for spatial rendering” (paragraph 3 of page 8 in the Remarks), “original audio signals from sound sources, i.e., initial unencoded signals” (paragraph 1 of page 9 in the Remarks), “The processing flow of the present application is … encoding …, and executing spatial decoding …” (paragraph 2 of page 9 in the Remarks), “The objective of the present application is to construct sound characteristics matching a virtual environment from the ground up, with its core logic being: first, determining … rendering parameters based on metadata …, separate encoding …” (paragraph 3 of page 9 in the Remarks), “Lee does not teach the separate extraction and utilization of …” (paragraph 4 of page 10 in the Remarks), etc., and then applicant pointed to differences between Lee’s disclosure and the unclaimed features above.
In response to the arguments above, the Office respectfully disagrees because:
(1) the claims fail to recite the argued features. There is no recitation of a “core” or of “encode and decode original sound source signals for spatial rendering”; there is no recitation of what the “original sound source signals” are or that they are “initial unencoded signals”; there is no recitation of the argued flow of “encoding …, and executing spatial decoding”; there is no recited feature about constructing a “virtual environment from the ground up, with its core logic …”; and there is no recitation of “separate extraction …” in the claims, etc. Again, although the claims are interpreted in light of the specification, limitations from the specification are not read into the claims. See In re Van Geuns, 988 F.2d 1181, 26 USPQ2d 1057 (Fed. Cir. 1993) and MPEP 2145. Therefore, the argued features above, which may be disclosed in the application specification, shall not be read into the claims;
(2) for the argued features that are only broadly recited in the claims, such as “rendering”, “sound source”, “encoding … decoding …”, etc., the broadest reasonable interpretation (BRI) is applied, see MPEP 2111, because the claims broadly recite a “method for spatial audio rendering” with a broadly recited “sound source”, “encoding … decoding …”, etc., as argued above. For example, the broadly recited “rendering” may reasonably be interpreted as rendering at either the encoder side, the decoder side, or in other situations; the broadly claimed “sound source” may reasonably be interpreted as any entity that outputs an “audio signal”, whether at the encoding side, the decoding side, or the speakers/loudspeakers at the rendering or decoder side; and, as another example regarding encoding and decoding, because the claims fail to recite how the “encoding” is performed, its BRI is applied based on knowledge and skill in the art, such as converting information into another form, etc.; and
(3) because of the BRI above, the comparison between the claimed features, under their BRI, and the prior art disclosure should be made on the basis of whether the claimed features under their BRI are taught or suggested, rather than on argued “differences” that are not claimed. For example, the argued and claimed “to obtain an encoded audio signal”, with no recitation of how the “encoding” is performed, followed by “performing spatial decoding on the encoded audio signal”, etc., is taught by the Family disclosure (multiple encoding and decoding pairs, and metadata with sound field and sound source, etc., at the encoder side and at the re-encoder side of the pairs in fig. 11, as discussed in the Office Action); i.e., it would have been obvious to one having ordinary skill in the art that features processed at the decoder side could be processed at the encoder side and vice versa, and applicant’s remarks are silent on this point.
In responding to this Office Action, the Office respectfully requests that support be shown for language added to any original claims by amendment and for any new claims. That is, indicate support for newly added claim language by specifically pointing to the page(s) and line numbers in the specification and/or the drawing figure(s). This will assist the Office in prosecuting this application.
Conclusion
Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to LESHUI ZHANG whose telephone number is (571) 270-5589. The examiner can normally be reached Monday-Friday, 6:30am-4:00pm EST.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Vivian Chin, can be reached at (571) 272-7848. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/LESHUI ZHANG/
Primary Examiner,
Art Unit 2695