DETAILED ACTION
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
This Office Action is in response to the claim amendment filed on December 19, 2026, in which claims 1, 3-17, and 18-20 were amended and claim 2 was cancelled.
By virtue of this communication, claims 1 and 3-20 are currently pending in this Office Action.
With respect to the objection to the drawings due to formality issues, as set forth in the previous Office Action, applicant's arguments, see paragraphs 3-7 of page 8 and paragraphs 1-2 of page 9 of the Remarks filed on December 19, 2026, have been fully considered and are persuasive. Therefore, the objection to the drawings due to the formality issues, as set forth in the previous Office Action, has been withdrawn.
With respect to the objection to claims 2-5 due to formality issues, as set forth in the previous Office Action, the claim amendment has been fully considered. Therefore, the objection to claims 2-5 due to the formality issues, as set forth in the previous Office Action, has been withdrawn.
With respect to the rejection of claims 1-20 under 35 USC §101, as set forth in the previous Office Action, the Applicant's amendment, including the cancellation of claim 2, and arguments, see paragraphs 6-8 of page 9 and paragraphs 1-6 of page 10 of the Remarks filed on December 19, 2026, have been fully considered, and the arguments are found persuasive. Therefore, the rejection of claims 1-20 under 35 USC §101, as set forth in the previous Office Action, has been withdrawn.
With respect to the rejection of claims 1-20 under 35 USC §112(b), as set forth in the previous Office Action, the claim amendment, including the cancellation of claim 2, and arguments, see paragraphs 3-5 of page 11 of the Remarks filed on December 19, 2026, have been fully considered, and the arguments are persuasive. Therefore, the rejection of claims 1-20 under 35 USC § 112(b), as set forth in the previous Office Action, has been withdrawn.
The Office appreciates the explanation of the amendment and the analyses of the prior art; however, although the claims are interpreted in light of the specification, limitations from the specification are not read into the claims. See In re Van Geuns, 988 F.2d 1181, 26 USPQ2d 1057 (Fed. Cir. 1993) and MPEP 2145.
In response to this Office Action, the Examiner respectfully requests that support be shown for language added to any original claims upon amendment and for any new claims. That is, indicate support for newly added claim language by specifically pointing to the page(s) and line numbers of the specification and/or drawing figure(s). This will assist the Examiner in prosecuting this application.
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102 of this title, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 1, 3, and 18-20 are rejected under 35 U.S.C. 103 as being unpatentable over Tsuji et al. (US 20200327879 A1, hereinafter Tsuji) in view of Audfray et al. (US 20190387350 A1, hereinafter Audfray).
Claim 1: Tsuji teaches an audio signal rendering method (title and abstract, ln 1-8, method steps in fig. 5), comprising:
retrieving a reverberation time of an audio signal (a reverberation time, a pre-delay delay time, etc., in reverb parameter in fig. 2, and retrieved from demultiplexer 21 in fig. 1) at each of a plurality of time points (including at points of reverberation time, pre-delay delay time, early reflection delay time, etc., in fig. 2, para 87-88) of the audio signal (as metadata of the audio object, para 73-74); and
rendering the audio signal based on the reverberation time of the audio signal (rendering the object audio data via reverb processing unit 22 and VBAP processing unit 23, and based on the reverb parameter discussed above),
wherein the rendering the audio signal comprises:
generating a reverberation for the audio signal based on the reverberation time, wherein the reverberation is added to a bitstream of the audio signal (adding a component of reverberation sound to original object audio data by the VBAP processing, para 80).
However, Tsuji does not explicitly teach wherein the generating the reverberation is based on a type of an acoustic environment model, wherein the type of the acoustic environment model comprises physical reverberation, artificial reverberation, and sample reverberation.
Audfray teaches an analogous field of endeavor by disclosing an audio signal rendering method (title and abstract, ln 1-21, rendering through headphones in a processing module 1020 in fig. 1, and method in fig. 7-8, 10), and wherein Audfray teaches:
retrieving a reverberation time of an audio signal (measured reverberations of the direct sound 630 in fig. 6, reflection 620 of the direct sound 630, and direct 610 in fig. 6, para 40; e.g., a piano as a sound source perceived by a listener with direct sound, reflection, and reverberation in fig. 5) at each of a plurality of time points (time points referred to the origin in the time line in fig. 6, with reflection delay 622 and reverb_delay 632 in fig. 6) of the audio signal (sound source signal or direct sound signal at a distance between the sound source and the listener in figs. 5-6, para 40); and
rendering the audio signal (through DSP, Audio Spatializer 422 in fig. 4, para 32-33 and realized by audio mixing architecture in fig. 8, para 52 and room processing module 950 detailed in 1200 in fig. 12, para 80) based on the reverberation time of the audio signal (at least based on reverberation global delay 1232 to control a parameter Drev in fig. 17, para 80),
wherein the rendering the audio signal comprises:
generating a reverberation for the audio signal based on the reverberation time (through the room processing module 1200 in fig. 12, representing room processing 950 in fig. 9), wherein the reverberation is added to a bitstream of the audio signal (adding the object signals with the reverberation signal through main mix bus 940 via the per-source processing 920 detailed in element 1020 in fig. 10 and room processing 950 detailed in element 1200 in fig. 12),
the generating the reverberation for the audio signal comprises:
generating the reverberation based on a type of an acoustic environment model (a model including any suitable type of environment geometrical room representation 500 in fig. 5, described by direct sound 502, reflection 504, and reverberation 506, para 39), wherein the type of the acoustic environment model comprises physical reverberation (reverberation 506 in fig. 5), artificial reverberation (blended physical environment heard naturally by the listener with binaural artificial reverberation processing to match local environment acoustics, para 46), and sample reverberation (sample reverberation in the concert hall of fig. 5, para 39, sampled by a time line in fig. 17, and through the reflections send bus and a main mix bus, para 79) for benefits of enhancing the listener's perception experiences of sound in different environments (by fitting the artificial simulation to realize physical acoustic properties, para 5, by realizing and enhancing believability and realism of the virtual sound by using the relative position and orientation of the listener, para 32, and by providing accurate and timely information in a movable environment, para 29-30).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have applied wherein the generating the reverberation is based on the type of the acoustic environment model, wherein the type of the acoustic environment model comprises the physical reverberation, the artificial reverberation, and the sample reverberation, as taught by Audfray, to the generating of the reverberation for the audio signal comprised in the rendering of the audio signal in the audio signal rendering method, as taught by Tsuji, for the benefits discussed above.
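As general technical background for the reverberation-time-based generation discussed in the rejection of claim 1 (offered for illustration only, and not drawn from the specific implementation of Tsuji or Audfray), a minimal sketch of a reverberation tail whose exponential decay is governed by a reverberation time might look as follows; the sample rate, tail length, and noise-based tail model are illustrative assumptions:

```python
import numpy as np

def reverb_tail(rt60, sample_rate=48000, length_s=1.0, seed=0):
    """Exponentially decaying noise tail whose envelope falls by
    60 dB over rt60 seconds (the defining property of RT60)."""
    rng = np.random.default_rng(seed)
    n = int(length_s * sample_rate)
    t = np.arange(n) / sample_rate
    # 60 dB over rt60 seconds -> amplitude factor 10 ** (-3 * t / rt60)
    envelope = 10.0 ** (-3.0 * t / rt60)
    return rng.standard_normal(n) * envelope

tail = reverb_tail(rt60=0.5)  # shorter rt60 -> faster decay
```

Such a tail would be mixed with the dry object signal, consistent with the claimed addition of the reverberation to the audio signal.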
Claim 18: the combination of Tsuji and Audfray further teaches, a chip (Tsuji, a signal processing device, in fig. 1 and Audfray, wearable head device in fig. 1, para 26), comprising: at least one processor (Tsuji, a central processing unit CPU, para 359, and Audfray, processor 416 in fig. 4, GPU and/or DSP, para 32), and an interface for providing computer executable instructions to the at least one processor (Tsuji, removable recording medium 511 with program codes, para 363 and memory/storage with instructions 916 in fig. 9 and buses in figs. 8-9 and connections in fig. 4), wherein the at least one processor is used to execute the computer executable instructions to implement an audio signal rendering method of claim 1 (Tsuji, programs executed by a general-purpose personal computer, para 357, and Audfray, software, para 36 and executed by the processor, para 36).
Claim 19 has been analyzed and rejected according to claims 1, 18 above.
Claim 20 has been analyzed and rejected according to claims 1, 18-19 above.
Claim 3: the combination of Tsuji and Audfray further teaches, according to claim 1 above, wherein the generating the reverberation for the audio signal comprises: generating the reverberation based on an estimated late reverberation gain (Tsuji, through Wet gain in the fig. 2, para 89, and Jot, e.g., cubic volume of a room to late reverberation or reverberation decay time in table 1, para 41-42 and the reverberation signal is scaled, as an estimated late reverberation gain, according to the room volume of the listener’s environment as local environment and room volumes of reference, para 39 and Audfray, through reverb global gain 1230, reflections global gain 1220 in fig. 12).
Claims 4-17 are rejected under 35 U.S.C. 103 as being unpatentable over Tsuji in view of Audfray (both discussed above), and further in view of Jot et al. (US 20170223478 A1, hereinafter Jot).
Claim 4: the combination of Tsuji and Audfray further teaches, according to claim 1 above, wherein the estimating the reverberation time of the audio signal comprises:
constructing a model of an objective function (Audfray, a model in fig. 17, para 79 and gain with distance in fig. 15) based on a decay curve of the audio signal (Audfray, at least based on direct signal curve Direct in fig. 15), a parametric function of a fitted curve of the decay curve (curve represented by parameters Dtof, Dobj, Dtotal, with gain offset Loo, Lgo, and Lto, para 79-84), and weights corresponding to a plurality of historical time points (Audfray, gain offsets Loo, Lgo, Lto, etc., with the time offsets Dtof, Dobj, Dtotal, etc., in fig. 17), wherein the weights vary with time (gain offsets Loo, Lgo, Lto, etc., with the time offsets Dtof, Dobj, Dtotal, etc., in fig. 17);
solving the objective function with a parameter of the parametric function of the fitted curve as a variable (represented by the equations 10-13, para 81-84); and estimating a reverberation time of the audio signal based on the fitted curve (the decay_time is determined based on the reflections_Delay 622 and reverb-delay 632 in fig. 6, and with the gain offsets with the time offset in fig. 17 and equations 10-13, para 81-84).
However, the combination of Tsuji and Audfray does not explicitly teach an objective of minimizing the model of the objective function to determine the fitted curve of the decay curve.
Jot teaches an analogous field of endeavor by disclosing an audio signal rendering method (title and abstract, ln 1-15, rendering through headphones 150 in fig. 1, method steps in figs. 7-8), and wherein estimating a reverberation time of an audio signal is disclosed (reverberation decay time and reverberation delay 204 in figs. 2, 6A, 6B, and reverberation power decays exponentially with time, para 40) to comprise:
constructing a model of an objective function (a difference in time line between curves 602 as reference decay envelope and 622 as local decay envelope in fig. 6C and further gain offset applied, para 76) based on a decay curve of the audio signal (the decay curve 622 in fig. 6C), a parametric function of a fitted curve of the decay curve (reverberation curves 402 fitted to the curve 401, as reference impulse response 601 in fig. 6, and fitted to different portions of the measured EDR 401 in fig. 4B, para 60, e.g.,
[equation image: media_image1.png]
para 56 and linear curve fitting at frequencies used for providing an estimate of reference decay time Tr(f), para 56), and weights corresponding to a plurality of historical time points (a gain offset at various times and frequencies, para 76), wherein the weights vary with time (the gain offset determined upon magnitude difference between a decay envelop of the local reverberation decay time and a reference envelope of the reference impulse response, para 76 and the amount of the difference varies upon time in fig. 6C);
solving the objective function with a parameter of the parametric function of the fitted curve as a variable (e.g.,
[equation image: media_image1.png]
etc., above) and an objective of minimizing the model of the objective function (the curve difference above is minimized for fitting, as discussed above) to determine the fitted curve of the decay curve (the scaled reference time-frequency envelope of the reference impulse response is fitted to the local time-frequency envelope of the local impulse response, para 87, i.e., minimizing the difference between the local and the reference above); and estimating a reverberation time of the audio signal based on the fitted curve (Tr(f) is determined to provide consistency between the real or local and the synthetic or gain offset-adjusted reference, para 55, through EDR(t,f) from a start point EDR'(0,f), para 56) for benefits of enhancing the perception experiences of the listener (by approximating a real sound source with a virtual sound source, para 4, 6, and by providing a particular auditory experience via a virtualizer, para 33).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have applied minimizing the model of the objective function to determine the fitted curve of the decay curve in the estimating the reverberation time of the audio signal, as taught by Jot, to the estimation of the reverberation time of the audio signal in the audio signal rendering method, as taught by the combination of Tsuji and Audfray, for the benefits discussed above.
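As a general illustration of minimizing a weighted objective to determine a fitted curve of a decay curve, as discussed for claim 4 (a sketch over synthetic data; the time points, noise level, and weighting scheme are hypothetical and not taken from any of the references):

```python
import numpy as np

# Hypothetical decay curve in dB: a -120 dB/s line plus measurement noise.
rng = np.random.default_rng(1)
t = np.linspace(0.0, 0.5, 50)                 # historical time points (s)
decay_db = -120.0 * t + 0.5 * rng.standard_normal(t.size)

# Weights that vary with time: earlier points weighted more heavily.
weights = np.exp(-t / 0.25)

# np.polyfit with w minimizes sum_i (w_i * (y_i - f(t_i)))**2, i.e., a
# weighted least-squares objective, to determine the fitted line.
slope, intercept = np.polyfit(t, decay_db, deg=1, w=weights)
```

The slope and intercept that minimize the objective define the fitted curve of the decay curve.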
Claim 5: the combination of Tsuji, Audfray, and Jot further teaches, according to claim 4 above, wherein the constructing the model of the objective function comprises: constructing the model of the objective function based on differences between the decay curve and the parametric function of the fitted curve of the decay curve at the plurality of historical time points (Jot, from a start point EDR’(0,f) until about 350ms in fig. 6C), and the weights corresponding to the plurality of historical time points (Audfray, the gain offsets in fig. 17 and Jot, the gain offset varies along the time, para 76); or weighting the decay curve at the plurality of historical time points using the weights corresponding to the plurality of historical time points (Markush, MPEP 2117, Audfray, the gain offsets corresponding to time offsets in fig. 17 and discussed in claim 4 above, and Jot, applying the gain offset at various times and frequencies and discussed above, para 76); and constructing the model of the objective function based on differences between the weighted decay curve and the parametric function of the fitted curve at the plurality of historical time points (Jot, weighted difference through weighted reference decay curve above as the objective function discussed above).
Claim 6: the combination of Tsuji, Audfray, and Jot further teaches, according to claim 4 above, wherein: a weight corresponding to a later time point is smaller than a weight corresponding to an earlier time point (Audfray, Lref corresponding to Drev is smaller than Lrev corresponding to the earlier end of the time of direct sound arrival at Dtof and Drm or Dobj in fig. 17, and Jot, the gain offset varies along the time line, para 76, as discussed above) or the weights corresponding to the plurality of historical time points are independent of the characteristic of the decay curve (Markush, MPEP 2117, Audfray, the parameters Loo and Lgo with the time offsets Drm, Der, independent of each other in fig. 17 and the equations 10-13, para 81-84, and Jot, e.g., the envelope of the curve as the characteristic of the decay curve in figs. 4A/4B, 6A-6D).
Claim 7: the combination of Tsuji, Audfray, and Jot further teaches, according to claim 5 above, the constructing of the model of the objective function (the discussion above, e.g., the difference between the reference decay curve modified through the gain offset and the local decay curve), the weights corresponding to the plurality of historical time points (the gain offset applied to the reference decay curve, discussed above), and the differences between the decay curve and the parametric function of the fitted curve at the plurality of historical time points using the weights corresponding to the plurality of historical time points (Jot, the weights determined upon the difference between the reference decay curve and the local decay curve in the time line, and fitting the gain-offset weighted reference decay curve to the local decay curve, as discussed above and weighted by the gain offset, para 76), except for calculating a weighted sum of the differences using the weights and constructing the model of the objective function based on the weighted sum of the differences.
An Official Notice, applied in the previous Office Action and now admitted prior art because the applicant failed to traverse the Office's assertion of the Official Notice, is retaken: calculating a weighted sum of differences using weights between datasets at a plurality of points, and constructing a model of the objective function based on the weighted sum of the differences, is notoriously well-known in the art, e.g., minimum mean squared error (MMSE) and least squared error (LSE), etc., for benefits of efficiently optimizing the objective function by minimizing the error with a simpler computation approach and a low-cost algorithm.
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have applied calculating the weighted sum of differences using the weights between the datasets at the plurality of points and constructing the model of the objective function based on the weighted sum of the differences, as well-known in the art, to the weights and the difference between the weighted reference decay curve and the local decay curve for fitting the reference to the local curves in the audio signal rendering method, as taught by the combination of Tsuji, Audfray, and Jot, for the benefits discussed above.
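For context on the Official Notice above, the well-known weighted least-squared-error objective (a weighted sum of squared differences between datasets at a plurality of points) can be sketched as follows; the sample values and weights are hypothetical:

```python
import numpy as np

def weighted_lse(measured, fitted, weights):
    """Weighted sum of squared differences between two curves sampled
    at the same plurality of time points (an LSE-style objective)."""
    d = np.asarray(measured, dtype=float) - np.asarray(fitted, dtype=float)
    return float(np.sum(np.asarray(weights, dtype=float) * d ** 2))

# Hypothetical decay-curve samples (dB) at three historical time points.
measured = [0.0, -20.0, -40.0]
fitted = [0.0, -19.0, -42.0]
weights = [1.0, 0.5, 0.25]  # later time points weighted less heavily
obj = weighted_lse(measured, fitted, weights)  # 1*0 + 0.5*1 + 0.25*4 = 1.5
```

Minimizing this quantity over the parameters of the fitted curve is the optimization the Official Notice refers to.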
Claim 8: the combination of Tsuji, Audfray, and Jot further teaches, according to claim 7 above, wherein the calculating the weighted sum of the differences between the decay curve and the parametric function of the fitted curve at the plurality of historical time points using the weights corresponding to the plurality of historical time points comprises:
calculating the weighted sum of variances or standard deviations between the decay curve and the parametric function of the fitted curve at the plurality of historical time points using the weights corresponding to the plurality of historical time points (Audfray, the discussion in claim 7 above, and Jot, the difference between the weighted reference decay curve and the local decay curve for fitting, discussed above, and the error in MMSE and LSE above being the standard deviations between the datasets in the dataset sequence).
Claim 9: the combination of Tsuji, Audfray, and Jot further teaches, according to claim 5 above, wherein the constructing the model of the objective function comprises: calculating a sum of differences between the weighted decay curve and the parametric function of the fitted curve at the plurality of historical time points to construct the model of the objective function; or constructing the model of the objective function based on variances or standard deviations between the weighted decay curve and the parametric function of the fitted curve at the plurality of historical time points (Audfray, the discussion in claim 7 above, and Jot, the sum of differences between the weighted decay curve, or variances or standard deviations, etc., the discussion in claims 7-8).
Claim 10: the combination of Tsuji, Audfray, and Jot further teaches, according to claim 9 above, wherein the constructing the model of the objective function comprises: calculating a sum of the variances or standard deviations between the weighted decay curve and the parametric function of the fitted curve at the plurality of historical time points to construct the model of the objective function (Audfray, the discussion in claim 7 above and Jot, e.g., the error in MMSE and LSE, etc. the discussion in claims 8-9 above).
Claim 11: the combination of Tsuji, Audfray, and Jot further teaches, according to claim 4 above, wherein the constructing the model of the objective function comprises: determining weights corresponding to the plurality of historical time points based on a statistical characteristic of the parametric function of the decay curve; and constructing the model of the objective function based on the weights corresponding to the plurality of historical time points; or determining the weights corresponding to the plurality of historical time points based on a characteristic of the sound signal; and constructing the model of the objective function based on the weights corresponding to the plurality of historical time points (Audfray, the gain offsets corresponding to time offsets, respectively, in fig. 17, and equations 10-13, para 80-84, and Jot, the gain offset varies in the time line according to the statistical characteristic of the reference decay curve fitted to the local decay curve, which is also a characteristic of the audio signal, and the gain offset depends on the difference between the two, as discussed in claims 4-5 above, and the error in the MMSE and LSE algorithms is a mean value as a statistical characteristic).
Claim 12: the combination of Tsuji, Audfray, and Jot further teaches, according to claim 11 above, wherein the determining the weights of the plurality of historical time points comprises: determining the weights of the plurality of historical time points based on a minimum value and average value of the parametric function of the decay curve and values of the parametric function of the decay curve at the plurality of historical time points (the gain offsets are gain differences related to Lref and Lrev corresponding to time offsets Dobj, Dtotal, etc., in fig. 17, and Jot, the difference between the two with the gain offset applied to the reference decay curve, as discussed in claims 4-5 and 8-9 above, and MMSE and LSE inherently adjust the weights to minimize the error).
Claim 13: the combination of Tsuji, Audfray, and Jot further teaches, according to claim 12 above, wherein the determining the weights of the plurality of historical time points comprises: determining the weights of the plurality of historical time points based on differences between the function values of the parametric function of the decay curve at the plurality of historical time points (based on the equations 10-13, para 80-84, and indicated in fig. 17) and the minimum value of the parametric function of the decay curve, and a sum of the minimum value of the parametric function of the decay curve and the average value of the parametric function of the decay curve, the weights of the plurality of historical time points being positively correlated with the differences and negatively correlated with the sum (Jot, the discussion in claims 4-5 and 6-9, e.g., a reference decay curve measured at a weak source lies below the local decay curve, and the lowest dB value in the time line can be obtained from the reference decay curve, i.e., due to the envelope similar to the maximum value of the decay curve, e.g., in Jot's figs. 6A-6C).
Claim 14: the combination of Tsuji, Audfray, and Jot further teaches, according to claim 13 above, wherein the determining the weights of the plurality of historical time points comprises: determining the weights of the plurality of historical time points based on ratios of the differences to the sum at the plurality of historical time points (Audfray, the discussion in claim 13 above and Jot, normalization of value, e.g., the normalization difference, is well-known in the art, e.g., in MMSE and LSE and the discussion in claims 6-8 above).
Claim 15: the combination of Tsuji, Audfray, and Jot further teaches, according to claim 4 above, wherein the parametric function of the fitted curve is a linear function with time as a variable (Audfray, the linear curve in figs. 6 and 17, and Jot, linear curve fitting at multiple different frequencies, para 56, and also linear lines 402 in the time line for different sections, para 59), and the estimating the reverberation time of the audio signal based on the fitted curve comprises: determining the reverberation time based on a slope coefficient of the linear function; or wherein the decay curve is determined based on a Room Impulse Response RIR of the audio signal (Audfray, the slope defined by the time offsets and gain offsets in fig. 17, and Jot, 402 as a linear fit to reference decay curve 401 in fig. 4B and the reverberation curve as a linear fit in fig. 5B, para 62).
Claim 16: the combination of Tsuji, Audfray, and Jot further teaches, according to claim 15 above, wherein the reverberation time is proportional to a reciprocal of the slope coefficient of the linear function (Audfray, the decay length is proportional to the reciprocal of the slope in fig. 17, and Jot, fig. 5B, the slope of the linear line from time 0.0 to 0.2 second, corresponding to a smaller reverberation time of 0.2 second, has a relatively larger slope value than the linear line from time 0 to over 0.3 second, which has a greater reverberation time of 0.3 second, in fig. 5B).
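The reciprocal relationship between the reverberation time and the slope coefficient discussed for claims 15-16 is a general property of linear decay fits (illustrated here as a sketch, not either reference's specific computation); assuming the fitted line expresses level in dB versus time in seconds:

```python
def rt60_from_slope(slope_db_per_s):
    """Reverberation time from the slope of a linear decay fit in dB/s.

    A 60 dB decay at a rate of slope_db_per_s takes -60 / slope seconds,
    so the reverberation time is proportional to the reciprocal of the
    slope coefficient; a steeper (more negative) slope gives a shorter time.
    """
    if slope_db_per_s >= 0:
        raise ValueError("a decaying curve must have a negative slope")
    return -60.0 / slope_db_per_s

rt = rt60_from_slope(-120.0)  # -> 0.5 s
```

The constant 60 here is the conventional 60 dB reverberation energy decay value; a different preset decay value would simply change the numerator.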
Claim 17: the combination of Tsuji, Audfray, and Jot further teaches, according to claim 15 above, wherein: the determining the reverberation time based on the slope coefficient of the linear function comprises: determining the reverberation time based on the slope coefficient of the linear function and a preset reverberation energy decay value (Audfray, the discussion in claim 15 above, and the slope is defined by the gain offsets corresponding to the time offsets in fig. 17 and equations 10-13, para 80-84, and Jot, reference decay curve 401 in figs. 4A/4B, or 501 in figs. 5A/5B); or the determining the fitted curve of the decay curve comprises: determining a first extremum equation based on a partial derivative of the objective function with respect to the slope coefficient of the linear function; determining a second extremum equation based on a partial derivative of the objective function with respect to an intercept coefficient of the linear function; and solving the first extremum equation and the second extremum equation to determine the slope coefficient of the parametric function of the fitted curve (Markush, MPEP 2117, Audfray, the discussion above and in claim 15 above, and Jot, performing linear fitting and extrapolation of a portion of the measured EDR 401, para 59, and thus multiple linear fitting portions in figs. 4A/4B, including the linear fit in the time line in fig. 4B, with MMSE or LSE applied for each portion, as discussed in claims 7-8 above).
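As general mathematical background for the "extremum equations" alternative recited in claim 17 (a standard weighted least-squares derivation, offered for illustration only), with decay-curve values $y_i$, weights $w_i$, and a linear parametric function $f(t) = at + b$ at historical time points $t_i$:

```latex
% Weighted least-squares objective over the historical time points t_i
J(a,b) = \sum_i w_i \bigl( y_i - (a t_i + b) \bigr)^2

% First extremum equation (partial derivative with respect to the slope a)
\frac{\partial J}{\partial a} = -2 \sum_i w_i t_i \bigl( y_i - a t_i - b \bigr) = 0

% Second extremum equation (partial derivative with respect to the intercept b)
\frac{\partial J}{\partial b} = -2 \sum_i w_i \bigl( y_i - a t_i - b \bigr) = 0
```

Solving these two normal equations simultaneously yields the slope coefficient $a$ (and intercept $b$) of the fitted curve.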
Response to Arguments
Applicant's arguments filed on December 19, 2026 have been fully considered but are moot in view of the new ground(s) of rejection necessitated by the applicant's amendment. Although a new ground of rejection has been used to address the additional limitations that have been added to claims 1 and 18-20, a response is considered necessary for several of applicant's arguments since reference Jot will continue to be used to meet several claimed limitations.
With respect to the prior art rejection of independent claim 1, and similarly claims 18-20, under 35 USC §103, as set forth in the Office Action, it appears that applicant challenged prior art Jot with respect to the claimed feature of a "reverberation time" estimated "at each of a plurality of time points of the audio signal", arguing that Jot does not teach this feature because the "reverberation decay time" estimated in Jot is not the claimed "reverberation time at each time point of the audio signal that needs to be rendered" and the two are "not the same", and also that Jot's "signal is processed in the frequency domain …, but not on the time of the audio signal" and "Thus, the reverberation decay time in Jot is not equivalent to the reverberation time estimated in amended Claim 1", as asserted in paragraphs 4-9 of page 12 and paragraphs 1-2 of page 13 of the Remarks filed on December 19, 2026.
In response to the arguments cited above, the Office respectfully disagrees because (1) the claims broadly recite "estimating a reverberation time of an audio signal at each …" with no recitation of what the "reverberation time" is, and thus the broadest reasonable interpretation (BRI) is reasonably applied, e.g., including early reverberation time, late reverberation time, etc., as is well-known in the art; and (2) because the claim broadly recites "a reverberation time" and its BRI is applied, Jot clearly discloses "reverberation delay (204)", "late reverberation time (205)", etc., measured at different clock times (fig. 2, in the time line) in the time domain or time-frequency domain, which is further expanded to a time-frequency domain that includes the time domain (figs. 4A/4B, 5A/5B, 6A-6D); thus, Jot's disclosure anticipates the broadly claimed "reverberation time of an audio signal" estimated "at each time point". Applicant is silent on this point, and an anticipating disclosure does not have to be "the same" as the claim language; the prior art application turns on whether there is anticipation, not on whether the disclosures are different or "not the same". Thus, the argument above is moot. However, in order to expedite prosecution, prior art Audfray is applied to meet the claimed features as set forth in the Office Action above.
On the basis of the above analyses and evidence from the prior art, the prior art rejection of independent claim 1, and similarly claims 18-20, under 35 USC §103, as set forth in the Office Action, is proper. For at least the similar reasons discussed above, the prior art rejection of dependent claims 3-17 is also maintained.
Conclusion
Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to LESHUI ZHANG whose telephone number is (571)270-5589. The examiner can normally be reached on Monday-Friday 6:30am-4:00pm EST.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Vivian Chin can be reached on 571-272-7848. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of an application may be obtained from the Patent Application Information Retrieval (PAIR) system. Status information for published applications may be obtained from either Private PAIR or Public PAIR. Status information for unpublished applications is available through Private PAIR only. For more information about the PAIR system, see http://pair-direct.uspto.gov. Should you have questions on access to the Private PAIR system, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative or access to the automated information system, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/LESHUI ZHANG/
Primary Examiner, Art Unit 2695