Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
DETAILED ACTION
INFORMATION DISCLOSURE STATEMENT
The information disclosure statements (IDSs) submitted on 12/20/2023, 10/29/2024 & 07/30/2025 are in compliance with the provisions of 37 CFR 1.97. Accordingly, the information disclosure statements are being considered by the examiner.
NOTICE OF PRELIMINARY AMENDMENT
The Examiner acknowledges the amended claims filed on 12/20/2023.
- Claims 1-25 have been cancelled.
DOUBLE PATENTING
The nonstatutory double patenting rejection is based on a judicially created doctrine grounded in public policy (a policy reflected in the statute) so as to prevent the unjustified or improper timewise extension of the “right to exclude” granted by a patent and to prevent possible harassment by multiple assignees. A nonstatutory double patenting rejection is appropriate where the conflicting claims are not identical, but at least one examined application claim is not patentably distinct from the reference claim(s) because the examined application claim is either anticipated by, or would have been obvious over, the reference claim(s). See, e.g., In re Berg, 140 F.3d 1428, 46 USPQ2d 1226 (Fed. Cir. 1998); In re Goodman, 11 F.3d 1046, 29 USPQ2d 2010 (Fed. Cir. 1993); In re Longi, 759 F.2d 887, 225 USPQ 645 (Fed. Cir. 1985); In re Van Ornum, 686 F.2d 937, 214 USPQ 761 (CCPA 1982); In re Vogel, 422 F.2d 438, 164 USPQ 619 (CCPA 1970); In re Thorington, 418 F.2d 528, 163 USPQ 644 (CCPA 1969).
A timely filed terminal disclaimer in compliance with 37 CFR 1.321(c) or 1.321(d) may be used to overcome an actual or provisional rejection based on nonstatutory double patenting provided the reference application or patent either is shown to be commonly owned with the examined application, or claims an invention made as a result of activities undertaken within the scope of a joint research agreement. See MPEP § 717.02 for applications subject to examination under the first inventor to file provisions of the AIA as explained in MPEP § 2159. See MPEP §§ 706.02(l)(1) - 706.02(l)(3) for applications not subject to examination under the first inventor to file provisions of the AIA. A terminal disclaimer must be signed in compliance with 37 CFR 1.321(b).
The USPTO Internet website contains terminal disclaimer forms which may be used. Please visit www.uspto.gov/patent/patents-forms. The filing date of the application in which the form is filed determines what form (e.g., PTO/SB/25, PTO/SB/26, PTO/AIA/25, or PTO/AIA/26) should be used. A web-based eTerminal Disclaimer may be filled out completely online using web-screens. An eTerminal Disclaimer that meets all requirements is auto-processed and approved immediately upon submission. For more information about eTerminal Disclaimers, refer to www.uspto.gov/patents/process/file/efs/guidance/eTD-info-I.jsp.
Claims 26-50 are provisionally rejected on the ground of nonstatutory double patenting as being unpatentable over copending Application No. 17/485,406 (U.S. Publication 2022/0207359) in view of Xie et al. (U.S. Publication 2019/0370656). This is a provisional nonstatutory double patenting rejection because the patentably indistinct claims have not in fact been patented.
As to claims 26, 32, 39 & 45, copending Application No. 17/485,406 discloses a computing system for image sequence or video analysis, comprising: a processor; and a memory coupled to the processor, the memory storing a neural network, the neural network comprising: a plurality of normalization layers arranged as a relay structure (Claim 2 – the previous hidden state and previous cell state are received from a previous hyper normalization layer included in the neural network; Claim 6 – the hidden state and the cell state are generated by relay logic in the hyper normalization layer).
17/485,406 (U.S. Publication 2022/0207359) is silent to a plurality of convolution layers; wherein each normalization layer is coupled to and following a respective one of the plurality of convolution layers.
However, Xie ([0011]-[0015]) discloses pruning batch normalization (BN) layers, noting that modern DNNs contain multiple batch normalization layers placed right before or right after convolution layers.
It would have been obvious to one of ordinary skill in the art at the time of filing to modify copending Application No. 17/485,406 (U.S. Publication 2022/0207359) to include the above limitations in order to normalize each convolution output while preserving the relay of hidden/cell states across the normalization layers.
As to claims 27-31, 33-38, 40-44 & 46-50, these claims are rejected for the same reasons due to their dependence on claims 26, 32, 39 & 45.
CLAIM REJECTIONS - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102 of this title, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 26-28, 32-34, 39-41 & 45-47 are rejected under 35 U.S.C. 103 as being unpatentable over Yao et al. (U.S. Publication 2022/0207359) in view of Xie et al. (U.S. Publication 2019/0370656).
As to claims 26, 32, 39 & 45, Yao discloses a computing system for image sequence or video analysis, comprising: a processor; and a memory coupled to the processor (3200, Fig. 32 & [0368] discloses the method 3200 may be performed by a compute engine, a graphics processing unit, or a central processing unit during training or inference of the neural network), the memory storing a neural network, the neural network (3200, Fig. 32 & [0368] discloses during training or inference of the neural network) (140, Fig. 1 & [0015]); and a plurality of normalization layers arranged as a relay structure (3202, Fig. 32 & [0369-0371] discloses generating a hidden state and a cell state, as well as a previous hidden state and a previous cell state received from a previous layer; [0375] discloses relay logic in the hyper normalization layer).
See also 3204, Fig. 32 & [0370, 0373-0374], which discloses normalizing, standardizing, and applying an affine transform using the hidden/cell state.
Yao is silent to the memory storing a neural network, the neural network comprising: a plurality of convolution layers; wherein each normalization layer is coupled to and following a respective one of the plurality of convolution layers.
However, Xie discloses the memory storing a neural network (Fig. 1 & [0007]: a device may comprise a processor and a non-transitory memory electronically coupled to the processor, the memory comprising computer code for a deep neural network model), the neural network comprising a plurality of convolution layers, wherein each normalization layer is coupled to and following a respective one of the plurality of convolution layers (Fig. 1 & [0004] discloses that these batch normalization layers are usually put right before or after convolution layers; [0015] discloses pruning the BN layer when that layer connects to (is right before or right after) any linear operation layer, including convolution layers).
It would have been obvious to one of ordinary skill in the art at the time of effective filing to modify Yao’s disclosure to include the above limitations in order to normalize each convolution’s output and feed stabilized, layer-local statistics into the relay mechanism, improving CNN accuracy and stability.
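For illustration only: Xie’s teaching that batch normalization layers sit immediately before or after convolution layers is what makes BN-layer pruning tractable, because a BN layer adjacent to a linear operation layer can be folded into that layer. The following numpy sketch shows the standard BN-folding identity; the function name, parameter names, and values are hypothetical and are not taken from Xie or the application under examination.

```python
import numpy as np

def fold_bn_into_conv(w, b, gamma, beta, mean, var, eps=1e-5):
    """Fold a batch-norm layer that directly follows a convolution into the
    convolution's weights and bias (the standard BN-folding identity)."""
    scale = gamma / np.sqrt(var + eps)
    return w * scale, (b - mean) * scale + beta

# Per-channel parameters for a toy 1x1 convolution (one scalar per channel).
w, b = np.array([2.0, -1.0]), np.array([1.0, 0.5])
gamma, beta = np.array([1.5, 0.8]), np.array([0.2, -0.1])
mean, var = np.array([0.5, 0.0]), np.array([4.0, 1.0])

x = np.array([3.0, -2.0])                                        # one input per channel
y_ref = gamma * (w * x + b - mean) / np.sqrt(var + 1e-5) + beta  # conv -> BN
w_f, b_f = fold_bn_into_conv(w, b, gamma, beta, mean, var)
y_fold = w_f * x + b_f                                           # folded conv alone
```

Because the two computations are algebraically identical, `y_ref` and `y_fold` agree to floating-point precision, which is why an adjacent BN layer can be removed without changing the network’s output.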
As to claim 27, Yao in view of Xie discloses everything as disclosed in claim 26. In addition, Yao discloses wherein the plurality of normalization layers arranged as a relay structure comprises, for each layer (k), a normalization layer for the layer (k) coupled to and following a normalization layer for a preceding layer (k-1). (Fig. 32 & [0369-0374] discloses generating a hidden state and a cell state as well as a previous hidden state and a previous cell state. The previous hidden/cell state may be received from a previous hyper normalization layer. The input feature map may be normalized, standardized, and affine-transformed using the hidden/cell state.) (3204, Fig. 32 & [0370-0374] discloses the structure of normalizing, then standardizing, then applying an affine transformation.)
As to claims 28, 34 & 47, Yao in view of Xie discloses everything as disclosed in claims 27, 33 & 46. In addition, Yao discloses wherein the normalization layer for the layer (k) is coupled to the normalization layer for the preceding layer (k-1) via a hidden state signal and a cell state signal, each of the hidden state signal and the cell state signal generated by the normalization layer for the preceding layer (k-1). (Fig. 32 & [0369-0374] discloses generating a hidden state and a cell state as well as a previous hidden state and a previous cell state. The previous hidden/cell state may be received from a previous hyper normalization layer. The input feature map may be normalized, standardized, and affine-transformed using the hidden/cell state.)
As to claims 33, 40 & 46, Yao in view of Xie discloses everything as disclosed in claims 32, 39 & 45. In addition, Yao discloses wherein the plurality of normalization layers arranged as a relay structure comprises, for each layer (k), a normalization layer for the layer (k) coupled to and following a normalization layer for a preceding layer (k-1). (3202, Fig. 32 & [0371, 0375] discloses the previous hidden state and the previous cell state may be received from a previous hyper normalization layer included in the neural network.)
As to claim 41, Yao in view of Xie discloses everything as disclosed in claim 40. In addition, Yao discloses wherein the normalization layer for the layer (k) is to be coupled to the normalization layer for the preceding layer (k-1) via a hidden state signal and a cell state signal, each of the hidden state signal and the cell state signal to be generated by the normalization layer for the preceding layer (k-1). (3202-3204, Fig. 32 & [0369-0374] discloses generating a hidden state and a cell state as well as a previous hidden state and a previous cell state. [0371] discloses the previous hidden/cell state may be received from a previous hyper normalization layer. See also the disclosure that the input feature map may be normalized using the hidden state and the cell state.)
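For illustration only: the relay structure mapped above (each convolution layer followed by a normalization layer, with hidden and cell states relayed from one normalization layer to the next) can be sketched in plain Python. This is a hypothetical sketch of the claimed topology; the function names, state-update formulas, kernel values, and dimensions are assumptions of the illustration, not disclosures of Yao, Xie, or the application under examination.

```python
import numpy as np

def conv1d(x, kernel):
    # Valid 1-D convolution: an illustrative stand-in for a convolution layer.
    k = len(kernel)
    return np.array([np.dot(x[i:i + k], kernel) for i in range(len(x) - k + 1)])

def hyper_norm(x, h_prev, c_prev):
    # Illustrative "hyper normalization": standardize the feature map, then
    # apply an affine transform whose scale/shift derive from the relayed
    # hidden and cell states (hypothetical formulation).
    z = (x - x.mean()) / (x.std() + 1e-5)
    y = (1.0 + 0.1 * h_prev) * z + 0.1 * c_prev
    # Relay logic: emit updated hidden/cell states for the next layer.
    h = float(np.tanh(y.mean()))
    c = float(c_prev + y.var())
    return y, h, c

def relay_network(x, kernels):
    # Each normalization layer is coupled to and follows its respective
    # convolution layer; hidden/cell states are relayed layer to layer.
    h, c = 0.0, 0.0
    for kernel in kernels:
        x = conv1d(x, kernel)
        x, h, c = hyper_norm(x, h, c)
    return x, h, c

x = np.arange(10, dtype=float)
out, h, c = relay_network(x, [np.array([1.0, 0.5]), np.array([0.5, -0.5])])
```

In this sketch the first layer starts from zero hidden/cell states, so its normalization is a plain standardization; every later layer’s affine transform depends on the states relayed from the layer before it, which is the coupling the claims recite.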
Claim 38 is rejected under 35 U.S.C. 103 as being unpatentable over Yao et al. (U.S. Publication 2022/0207359) in view of Xie et al. (U.S. Publication 2019/0370656) as applied to claim 32 above, and further in view of DOORNBOS et al. (U.S. Publication 2015/0200302).
As to claim 38, Yao in view of Xie discloses everything as disclosed in claim 32. Yao in view of Xie is silent to wherein the logic coupled to the one or more substrates includes transistor channel regions that are positioned within the one or more substrates.
However, DOORNBOS discloses wherein the logic coupled to the one or more substrates includes transistor channel regions that are positioned within the one or more substrates.
(100, Fig. 1 & [0003, 0024] discloses the gate region is formed on a top surface and sidewalls of the fin such that it wraps around the fin. The portion of the fin extending under the gate between the source region and the drain region is the channel region. The fin may further comprise a source and a drain separated by a channel region, the channel region of the fin being surrounded by a gate region on three sides. [0025] discloses a p-type punch-through stopper below the channel region.)
It would have been obvious to one of ordinary skill in the art at the time of effective filing to modify Yao in view of Xie’s disclosure to include the above limitations in order to realize predictable CMOS device behavior (electrostatic control, leakage management, drive current) when fabricating the claimed logic in an integrated circuit.
CONCLUSION
No prior art has been found for claims 29-31, 35-37, 42-44 & 48-50 in their current form.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to Stephen P Coleman whose telephone number is (571)270-5931. The examiner can normally be reached Monday-Thursday 8AM-5PM.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Andrew Moyer can be reached at (571) 272-9523. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
Stephen P. Coleman
Primary Examiner
Art Unit 2675
/STEPHEN P COLEMAN/Primary Examiner, Art Unit 2675