Prosecution Insights
Last updated: April 19, 2026
Application No. 17/796,329

INFORMATION PROCESSING CIRCUIT

Non-Final OA: §102, §103, §112

Filed: Jul 29, 2022
Examiner: RIVERA, MARIA DE JESUS
Art Unit: 2151
Tech Center: 2100 — Computer Architecture & Software
Assignee: NEC Corporation
OA Round: 1 (Non-Final)

Grant Probability: 67% (Favorable)
Expected OA Rounds: 1-2
Median Time to Grant: 4y 4m
Grant Probability With Interview: 99%

Examiner Intelligence

Career Allow Rate: 67% (above average; 10 granted / 15 resolved; +11.7% vs TC avg)
Interview Lift: +35.1% (strong), comparing resolved cases with an interview against those without
Typical Timeline: 4y 4m average prosecution; 31 applications currently pending
Career History: 46 total applications across all art units

Statute-Specific Performance

§101: 13.0% (-27.0% vs TC avg)
§103: 36.0% (-4.0% vs TC avg)
§102: 17.8% (-22.2% vs TC avg)
§112: 30.5% (-9.5% vs TC avg)

Tech Center average is an estimate • Based on career data from 15 resolved cases
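
As a sanity check, the "vs TC avg" deltas above all back out to the same baseline. A minimal Python sketch using only the numbers shown in this section (the variable names are ours, not the tool's):

```python
# Back out the Tech Center average implied by each "vs TC avg" delta above.
# Rates and deltas are copied from the dashboard; names are illustrative.
examiner_rate = {"§101": 13.0, "§103": 36.0, "§102": 17.8, "§112": 30.5}
delta_vs_tc = {"§101": -27.0, "§103": -4.0, "§102": -22.2, "§112": -9.5}

for statute, rate in examiner_rate.items():
    tc_avg = rate - delta_vs_tc[statute]  # delta = examiner rate - TC average
    print(f"{statute}: examiner {rate:.1f}% vs implied TC avg {tc_avg:.1f}%")
```

Every statute implies the same 40.0% baseline, which suggests the tool compares against a single per-Tech-Center estimate rather than per-statute averages.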

Office Action

§102 • §103 • §112
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA. This Action is non-final and is in response to the claims filed July 29, 2022. Claims 1-12 are pending, of which claims 1-12 are currently rejected.

Information Disclosure Statement

The information disclosure statements (IDS) submitted on 07/29/2022 and 07/31/2023 are in compliance with the provisions of 37 CFR 1.97. They have been placed in the application file, and the information referred to therein has been considered as to the merits.

Specification

The title of the invention is not descriptive. A new title is required that is clearly indicative of the invention to which the claims are directed. The following title is suggested: Sum of Products Operations Calculated on Multiple Information Processing Circuits for Deep Learning.

The disclosure is objected to because of the following informalities:

[0013] line 1: “the reasoner” is not mentioned anywhere else in the specification; it is unclear what reasoner is being referred to.
[0020] line 2: “external memory 20” should be “external memory 202”.
[0023] line 2: “the second information processing circuit 2” should be “the second information processing circuit 20”.
[0029] line 1: “CP” should be “CPU”.
[0048]: “When there is unprocessed data (NO in step S707) … repeats steps S701 to S706” should be “When there is unprocessed data (YES in step S707) … repeats steps S701 to S706”.
[0048]: “When there is no more unprocessed data (YES in step S707) … terminates processes” should be “When there is no more unprocessed data (NO in step S707) … terminates processes”.

Claim Objections

Claims 6-7, 10, and 12 are objected to because of the following informalities:

Claim 6 line 7: “the integration circuit integrates calculation result” should be “the integration circuit integrates the calculation result”. Claim 7 is objected to based on its dependence upon claim 6.
Claim 10 line 5: “by calculating a weighted sum accepted inputs” should be “by calculating a weighted sum of accepted inputs”.
Claim 12 line 7: “by calculating a weighted sum accepted inputs” should be “by calculating a weighted sum of accepted inputs”.

Claim Rejections - 35 USC § 112

The following is a quotation of 35 U.S.C. 112(b):

(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.

The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:

The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.

Claims 3-4 and 8 are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or, for applications subject to pre-AIA 35 U.S.C. 112, the applicant) regards as the invention.

Claim 3 recites the limitation “a calculation result” on line 4. It is unclear whether this calculation result is one of the two calculation results of the first or second processing circuits, a combination of both, or some other calculation result. Appropriate correction is required.
For examination purposes, “a calculation result” will be construed as the combination of the calculation results of the first and second processing circuits, which is consistent with claim 1 at lines 6-8.

Claim 3 recites the limitation “an integration result” on line 5. It is unclear whether this integration result is the same integration result recited in claim 1 line 8. Appropriate correction is required. For examination purposes, “an integration result” of claim 3 will be construed to be the same integration result recited in claim 1 line 8.

Claim 3 recites the limitation “the layers” on line 4. There is a lack of antecedent basis for this limitation. Appropriate correction is required.

Claim 4 recites the limitation “a programmable accelerator” on line 3. It is unclear whether this “programmable accelerator” is the same as the “programmable accelerator” recited in claim 1 line 5. Appropriate correction is required. For examination purposes, the programmable accelerator recited in claim 4 will be construed to be the same programmable accelerator recited in claim 1.

Claim 8 recites the limitation “the layers” on line 4. There is a lack of antecedent basis for this limitation. Appropriate correction is required.

Claim Rejections - 35 USC § 102

The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:

A person shall be entitled to a patent unless – (a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.

Claims 1-5 and 9-12 are rejected under 35 U.S.C. 102(a)(1) as being anticipated by A. Almahali et al. (“FPGA-Accelerated Hadoop Cluster for Deep Learning Computations”, 2015), included in the IDS filed on 07/29/2022 (hereinafter “Almahali”).

Regarding claim 1, Almahali teaches: An information processing circuit comprises: a first information processing circuit that performs layer operations in deep learning (CNN comprised of an FPGA, the various nodes of the FPGA labeled as Mapper (as first and second information processing circuits) being used for layer operations in deep learning, shown in Pg. 571 Fig. 4 and discussed on Pg. 570 Col. 2 Section B, having as outputs trained weight parameters; FPGA as programmable accelerator; Pg. 566 Col. 1 third paragraph, CNN operations run on various computing nodes within the FPGA, i.e., the circuitries as discussed with respect to Fig. 4); a second information processing circuit that performs the layer operations in deep learning on input data by means of a programmable accelerator (CNN comprised of an FPGA, the various nodes of the FPGA labeled as Mapper (as first and second information processing circuits) being used for layer operations in deep learning, shown in Pg. 571 Fig. 4 and discussed on Pg. 570 Col. 2 Section B, having as outputs trained weight parameters; FPGA as programmable accelerator; Pg. 566 Col. 1 third paragraph, CNN operations run on various computing nodes within the FPGA, i.e., the circuitries as discussed with respect to Fig. 4; Pg. 570 Fig. 3 additionally shows the hardware configuration); and an integration circuit integrates a calculation result of the first information processing circuit with a calculation result of the second information processing circuit, and output an integration result (Pg. 571 Fig. 4, Reducer as integration circuit, which takes calculation results from the first and second information processing circuits and produces an integration result, also discussed on Pg. 571 Col. 2 Section B; Pg. 570 Fig. 3 additionally shows the hardware configuration), wherein the first information processing circuit includes: a parameter value output circuit in which parameters of deep learning are circuited (Pg. 571 Fig. 5 shows the kernel of FPGA convolutions, with the weights FIFO output as a parameter value output circuit for directing CNN weights, i.e., parameters); and a sum-of-product circuit that performs a sum-of-product operation using the input data and the parameters (Pg. 570 Col. 2 Section B lines 16-19, the reducer combines weights after training, i.e., a sum; Pg. 571 Col. 2 Fig. 5, sum-of-products operations occurring with respect to weights, i.e., parameters, and input values; Pg. 571 Col. 1 last paragraph and Col. 2 first paragraph describe the sum-of-products circuitry).

Regarding claim 2, Almahali teaches: The information processing circuit according to claim 1, wherein the integration circuit accepts the calculation results of the first information processing circuit and the second information processing circuit as inputs, integrates the calculation results by calculating a weighted sum of accepted inputs, and output the integration result (Pg. 571 Fig. 5 takes input of weights, i.e., parameters, from the mappers to the kernel depicted (this kernel occurring in the reducer), and carries out a weighted sum to produce the integration result, which is output to the results FIFO).

Regarding claim 3, Almahali teaches: The information processing circuit according to claim 1, wherein the integration circuit accepts the calculation results of the first information processing circuit and the second information processing circuit as inputs to the layers in deep learning, and outputs a calculation result based on the accepted inputs as an integration result (Pg. 571 Col. 2 Section D, computations carried out for a layer of the CNN for deep learning operations, outputting the calculation results as discussed in Pg. 571 Col. 2 Section B).

Regarding claim 4, Almahali teaches: The information processing circuit according to any one of claim 1, wherein the integration circuit performs layer operations in deep learning by means of a programmable accelerator (Pg. 570 Col. 2 Section B, FPGA as programmable accelerator for layer operations in deep learning).

Regarding claim 5, Almahali teaches: The information processing circuit according to any one of claim 1, wherein the integration circuit inputs the same input data as the input data accepted by the first information processing circuit and the second information processing circuit, and weights calculation results of the first information processing circuit and the second information processing circuit based on weighting parameters determined according to the input data (Pg. 571 Fig. 5 shows the kernel of FPGA convolutions, with the weights FIFO output as a parameter value output circuit for directing CNN weights, i.e., parameters, as determined by training via the first and second processing circuits (nodes/Mappers); a weighted sum is carried out with respect to the weights and the inputs from the Image FIFO output, the image data being input data that was also used to train the weight data and as such was input to the Mapper nodes, i.e., the first and second processing circuits).

Claims 9-10 recite the method practiced by the information processing circuit of claims 1-2, respectively, and are therefore rejected for the same reasons therein.
Claims 11-12 recite the non-transitory computer readable recording medium storing a program executing deep learning with instructions to execute the method practiced by the information processing circuit of claims 1-2, respectively, and are therefore rejected for the same reasons therein. Almahali additionally discloses a processor and memory, which can be used for storing program instructions to be carried out on the FPGA (Almahali: Pg. 570 Col. 2 Section C and Pg. 571 Col. 1 first paragraph).

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 6-7 are rejected under 35 U.S.C. 103 as being unpatentable over Almahali further in view of Huang et al. (US 2020/0410337 A1) (hereinafter “Huang”).

Regarding claim 6, Almahali teaches the information processing circuit according to claim 1, as well as first and second information processing circuits for outputting respective calculation results in deep learning (Almahali: CNN comprised of an FPGA, the various nodes of the FPGA labeled as Mapper (as first and second information processing circuits) being used for layer operations in deep learning, shown in Pg. 571 Fig. 4 and discussed on Pg. 570 Col. 2 Section B, having as outputs trained weight parameters; FPGA as programmable accelerator; Pg. 566 Col. 1 third paragraph, CNN operations run on various computing nodes within the FPGA, i.e., the circuitries as discussed with respect to Fig. 4) and an integration circuit for outputting an integration result (Almahali: Pg. 571 Fig. 4, Reducer as integration circuit, which takes calculation results from the first and second information processing circuits and produces an integration result, also discussed on Pg. 571 Col. 2 Section B; Pg. 570 Fig. 3 additionally shows the hardware configuration). Almahali does not explicitly teach these operations occurring at an intermediate layer. However, Huang teaches a neural network with multiple processing nodes including input layers, intermediate layers (hidden layers), and an output layer, each of the layers carrying out sum-of-products operations (Huang: ¶ 0036). Combining Almahali with Huang would allow all layers of deep learning in Almahali to have the same structure in order to carry out sum-of-products operations. It would be obvious to combine the intermediate layers for sum-of-products operations as taught by Huang with the information processing circuit structure as taught by Almahali, as both teachings are directed towards sum-of-products operations in a deep learning setting. One of ordinary skill in the art would be motivated to combine the teachings because this would allow the circuit to take in many more inputs and carry out operations with many more weights than if the structure were only on one layer (Huang: ¶ 0036).
Almahali in view of Huang therefore teaches: The information processing circuit according to claim 1, wherein the first information processing circuit outputs a calculation result of an intermediate layer in deep learning, the second information processing circuit performs the layer operation in deep learning using the calculation result of the intermediate layer as input data; and the integration circuit integrates calculation results of the intermediate layer, the calculation result of the first information processing circuit and the calculation result of the second information processing circuit, and outputs the integration result.

Regarding claim 7, Almahali in view of Huang further teaches: The information processing circuit according to claim 6, wherein the first information processing circuit outputs an output from the intermediate layer that performs feature extraction as the calculation result (Huang: ¶ 0056, CNN operations at each of the layers are used for extraction of features of an input image). It would be obvious to combine the feature extraction at an intermediate layer as taught by Huang with the information processing circuit structure as taught by Almahali, as both teachings are directed towards sum-of-products operations for deep learning. One of ordinary skill in the art would be motivated to combine the teachings in order to obtain an output feature map for neural network operations (Huang: ¶ 0056).

Claim 8 is rejected under 35 U.S.C. 103 as being unpatentable over Almahali further in view of Wang et al. (US 2020/0334539) (hereinafter “Wang”).

Regarding claim 8, Almahali teaches the information processing circuit according to claim 1, as well as a learning circuit for learning of parameters (Almahali: Pg. 572 Col. 1 Section B, the system is trained via the MNIST learning system; MNIST classifies data based on labels, and this is used to train the network and learn parameters, i.e., weights). Almahali does not explicitly teach the learning circuit correcting based on a difference between a calculation result and a correct answer label. However, Wang teaches that a calculation result is produced by a sum-of-products operation (Wang: ¶ 0098), and that the difference between the predicted outcome, i.e., calculation result, and the label, i.e., correct answer label, is computed, with tuning carried out so that training becomes more accurate (Wang: ¶ 0078). It would be obvious to combine the difference calculation for tuning the learning of parameters as taught by Wang with the information processing circuit as taught by Almahali, as both teachings are directed towards sum-of-products operations in deep learning. One of ordinary skill in the art would be motivated to combine the teachings because this would determine an accuracy of the neural network and training, and correct parameters appropriately (Wang: ¶ 0071).

Prior Art Made of Record

The prior art made of record and not relied upon is considered pertinent to applicant's disclosure. Z. Qin (“ThunderNet: Towards Real-time Generic Object Detection”, 2019) teaches a two-stage detector for CNN-based detectors, including a Spatial Attention Module (SAM) for reweighting the feature map. Tomita (US 2017/0228634 A1) teaches an arithmetic processing circuit including a first layer for a recognition neural network, having FIFOs and LUTs as well as multiplication and addition circuits in order to carry out multiply-accumulate operations and learning of parameters.
Conclusion

Any inquiry concerning this communication or earlier communications from the examiner should be directed to MARIA DE JESUS RIVERA, whose telephone number is (571) 272-2793. The examiner can normally be reached Monday-Friday, 7:30AM-5PM.

Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, James Trujillo, can be reached at (571) 272-3677. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/M.D.R./
Examiner, Art Unit 2151

/EMILY E LAROCQUE/
Primary Examiner, Art Unit 2182
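
To make the rejected claim language concrete, here is a minimal sketch of the arrangement recited in claims 1-2 as the rejection characterizes it: two information processing circuits each perform a sum-of-products (layer) operation on shared input data, and an integration circuit outputs a weighted sum of their calculation results. This illustrates the claim wording only, not Almahali's FPGA/Hadoop implementation; all names and values are hypothetical.

```python
import numpy as np

# Hypothetical model of claims 1-2: two information processing circuits each
# compute a sum-of-products over the same input data, and an integration
# circuit outputs a weighted sum of their calculation results.
rng = np.random.default_rng(0)

input_data = rng.standard_normal(8)      # shared input vector
weights_1 = rng.standard_normal((4, 8))  # parameters held by the first circuit
weights_2 = rng.standard_normal((4, 8))  # parameters held by the second circuit

def sum_of_products(params, x):
    """Layer operation: sum-of-product of parameters and inputs (claim 1)."""
    return params @ x

result_1 = sum_of_products(weights_1, input_data)  # first circuit's calculation result
result_2 = sum_of_products(weights_2, input_data)  # second circuit's calculation result

# Integration circuit (claim 2): weighted sum of the accepted inputs.
alpha, beta = 0.6, 0.4  # illustrative weighting parameters
integration_result = alpha * result_1 + beta * result_2
print(integration_result)
```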

Prosecution Timeline

Jul 29, 2022
Application Filed
Dec 22, 2025
Non-Final Rejection — §102, §103, §112 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12596553
TECHNIQUE FOR SPECULATIVELY GENERATING AN OUTPUT VALUE IN ANTICIPATION OF ITS USE BY DOWNSTREAM PROCESSING CIRCUITRY
2y 5m to grant • Granted Apr 07, 2026
Patent 12596528
MULTIPURPOSE MULTIPLY-ACCUMULATOR ARRAY
2y 5m to grant • Granted Apr 07, 2026
Patent 12580553
APPARATUS, METHOD, AND PROGRAM FOR POWER STABILIZATION THROUGH ARITHMETIC PROCESSING OF DUMMY DATA
2y 5m to grant • Granted Mar 17, 2026
Patent 12572619
MATRIX PROCESSING ENGINE WITH COUPLED DENSE AND SCALAR COMPUTE
2y 5m to grant • Granted Mar 10, 2026
Patent 12566952
MULTIPLIER BY MULTIPLEXED OFFSETS AND ADDITION, RELATED ELECTRONIC CALCULATOR FOR THE IMPLEMENTATION OF A NEURAL NETWORK AND LEARNING METHOD
2y 5m to grant • Granted Mar 03, 2026
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 1-2
Grant Probability: 67%
With Interview (+35.1%): 99%
Median Time to Grant: 4y 4m
PTA Risk: Low
Based on 15 resolved cases by this examiner. Grant probability derived from career allow rate.
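
One plausible reading of how these figures follow from the career stats above, sketched in Python. The additive interview lift and the 99% cap are our assumptions; the tool's actual methodology is not documented on this page:

```python
# Hypothetical reconstruction of the projection figures from the career stats.
granted, resolved = 10, 15
career_allow_rate = granted / resolved  # 0.667 -> the "67%" grant probability

interview_lift = 0.351                  # the "+35.1%" interview lift
# Assumption: lift is applied additively and the estimate is capped at 99%.
with_interview = min(career_allow_rate + interview_lift, 0.99)

print(f"Grant probability: {career_allow_rate:.0%}")  # 67%
print(f"With interview:   {with_interview:.0%}")      # 99%
```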
