Prosecution Insights
Last updated: April 19, 2026
Application No. 18/328,635

NEURAL NETWORKS PROCESSING UNITS REDUNDANCY REMOVAL

Non-Final OA §102
Filed: Jun 02, 2023
Examiner: EL-HAGE HASSAN, ABDALLAH A
Art Unit: 3623
Tech Center: 3600 — Transportation & Electronic Commerce
Assignee: Neuronix AI Labs Inc.
OA Round: 1 (Non-Final)
Grant Probability: 40% (Moderate)
Expected OA Rounds: 1-2
Time to Grant: 3y 4m
Grant Probability With Interview: 80%

Examiner Intelligence

Career Allow Rate: 40% (107 granted / 267 resolved; -11.9% vs TC avg)
Interview Lift: strong, +39.5% for resolved cases with interview
Typical Timeline: 3y 4m average prosecution; 44 applications currently pending
Career History: 311 total applications across all art units
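The headline figures above follow from simple arithmetic on the examiner's career record. A minimal sketch, assuming the interview lift is additive in percentage points (which is how the reported 40% baseline and 80% with-interview figures line up):

```python
# Career allow rate: grants divided by resolved cases (from the stats above).
granted = 107
resolved = 267
allow_rate = granted / resolved  # ~0.40, reported as 40%

# Interview lift, assumed additive in percentage points:
# 40% baseline + 39.5% lift lands at the reported ~80% with-interview figure.
interview_lift = 0.395
with_interview = allow_rate + interview_lift

print(f"Career allow rate: {allow_rate:.1%}")
print(f"With interview:    {with_interview:.1%}")
```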

Statute-Specific Performance

§101: 48.8% (+8.8% vs TC avg)
§103: 29.4% (-10.6% vs TC avg)
§102: 11.7% (-28.3% vs TC avg)
§112: 7.7% (-32.3% vs TC avg)
Deltas are measured against the Tech Center average estimate. Based on career data from 267 resolved cases.

Office Action

§102
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Status of the Application

This action is a first action on the merits in response to the application filed on 03/30/2015.

Status of Claims

Claims 1-8 filed on 06/02/2023 are currently pending and have been examined in this application.

Claim Rejections - 35 USC § 102

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:

A person shall be entitled to a patent unless – (a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale or otherwise available to the public before the effective filing date of the claimed invention.

Claims 1-8 are rejected under 35 U.S.C. 102(a)(1) as being anticipated by Chen et al. (US 20190122113 A1).

Regarding claim 1: Chen teaches a method of removing redundancy of multiplications of a same weight with different activations before adding all multiplication results together, comprising: adding all activations that need to be multiplied with a common weight to generate a sum of activations; and multiplying the sum of activations with the common weight [Chen teaches or focuses on identifying and removing redundant weights in a neural network before they enter the multiplication phase. By eliminating these weights, the system omits the original calculation entirely, thereby reducing the number of multiplications required. See Chen para. 0013, "Embodiments described herein can directly reduce the redundancy of a CNN to obtain a compact but powerful CNN model," and para. 0011, "Various techniques have been employed to demonstrate the significant redundancy in the parameterization of such deep learning models, such as examining the sparsity of weights and compressed CNNs by combining pruning, quantization, and Huffman coding"].

Regarding claim 2: further comprising generating different combinations of vector multiplication tensors for machine learning models or algorithms [Chen, para. 0026, teaches "feature selection methods are typically applied on 1D feature vectors, and it is sub-optimal to apply feature selections on the flattened feature vectors from 3D tensors extracted by convolutional layer," wherein generating different combinations of vector multiplication tensors].

Regarding claim 3: further comprising supporting at least one of multiple different parallel modes including at least one of: a multiple points (pixels) parallel scheme, a lines parallel scheme, a multiple input channels parallel scheme, or a multiple output channels parallel scheme [Chen, para. 0021, figures 1 and 2, teaches "FIG. 2 illustrates a convolutional neural network, according to one embodiment described herein. As shown, the CNN 200 includes an input layer 210, a convolutional layer 215, a subsampling layer 220, a convolutional layer 225, subsampling layer 230, fully connected layers 235 and 240, and an output layer 245. The input layer 210 in the depicted embodiment is configured to accept a 32×32-pixel image," wherein supporting at least one of multiple different parallel modes including a multiple points (pixels) parallel scheme].

Regarding claim 4: further comprising implementing a sequential execution NPU, a concurrent execution NPU, or a combination of a sequential execution NPU and a concurrent execution NPU to implement the vector multiplication [Chen, para. 0056, figure 1, teaches "For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved," wherein concurrent or sequential execution].

Regarding claim 5: wherein implementing a sequential execution NPU comprises storing back (feedback) an output of each neural network layer to a current AMM layer [Chen, para. 0029, teaches "In the method 300, the DCNN optimization component 140 measures the importance of the feature extractors (block 320). For example, the DCNN optimization component 140 can consider the output of the Inf-FS analysis as the importance score of each feature. In one embodiment, the responses of each neuron or each position's value is computed by convolutions. The DCNN optimization component 140 can map the importance of feature extractors by leveraging the weights of a CNN. For a neuron A, we back propagate the neuron's importance score to the neurons in the previous layer that are either fully connected (FC) layers or locally connected (convolutional layers) to A," which is equivalent to storing back (feedback) an output of each neural network layer to a current AMM layer]; and implementing a concurrent execution NPU comprises allocating different hardware resources to different DNN layers to process the DNN layers in parallel (concurrently) [Chen, para. 0056, figure 1, teaches "It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems that perform the specified functions or acts or carry out combinations of special purpose hardware and computer instructions," wherein allocating different hardware resources to different DNN layers].

Regarding claim 6: wherein implementing a sequential execution NPU comprises reusing hardware resources to calculate different layers of a same neural network; and implementing a concurrent execution NPU comprises providing results of each DNN layer to another hardware logic that executes a next DNN layer [Chen, para. 0056, figure 1, teaches "It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems that perform the specified functions or acts or carry out combinations of special purpose hardware and computer instructions," which is equivalent to providing results of each DNN layer to another hardware logic that executes a next DNN layer. This is also shown in figure 1, which depicts multiple NPU1…NPUk].

Regarding claim 7: further comprising supporting different size convolution operations [Chen, para. 0022, teaches "The convolutional layer may include k filters (or kernels) of size a by b by c, where a by b is smaller than x by y, and c is less than or equal to z (and may vary for various kernels)," wherein different size convolution operations].

Regarding claim 8: wherein supporting different size convolution operations comprises supporting two different n*n convolution operations, and wherein n in a first of the convolution operations has a first value that is different than a second value of n in a second of the convolution operations [Chen, para. 0022, teaches "The convolutional layer may include k filters (or kernels) of size a by b by c, where a by b is smaller than x by y, and c is less than or equal to z (and may vary for various kernels). Generally, the size of filters k leads to a locally connected structure, which is convolved with the image to produce k feature maps. Additionally, each map can be subsampled over contiguous regions of various sizes (e.g., 2×2 may be appropriate for small images)," wherein n*n convolution].

Conclusion

Any inquiry concerning this communication from the examiner should be directed to Abdallah El-Hagehassan, whose contact information is (571) 272-0819 and Abdallah.el-hagehassan@uspto.gov. The examiner can normally be reached Monday-Friday, 8 am to 5 pm. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Rutao Wu, can be reached at (571) 272-6045. The fax phone number for the organization where this application or proceeding is assigned is (571) 273-3734.

Information regarding the status of an application may be obtained from the Patent Application Information Retrieval (PAIR) system. Status information of published applications may be obtained from either Private PAIR or Public PAIR. Status information of unpublished applications is available through Private PAIR only. For more information about the PAIR system, see http://pair-direct.uspto.gov. Should you have any questions on access to the Private PAIR system, contact the Electronic Business Center (EBC) at (866) 271-9197 (toll-free). If you would like assistance from a USPTO customer service representative or access to the automated information system, call (800) 786-9199 (in US or Canada) or (571) 272-1000.

/ABDALLAH A EL-HAGE HASSAN/
Primary Examiner, Art Unit 3623
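The technique recited in claim 1 is the distributive law applied to a shared weight: instead of one multiplication per activation followed by a sum, the activations that share a common weight are summed once and multiplied once. A minimal sketch for illustration only (not code from the application or from Chen):

```python
def redundant_dot(weight, activations):
    """Baseline: one multiplication per activation, then a sum."""
    return sum(weight * a for a in activations)

def deduplicated_dot(weight, activations):
    """Claim-1 style: sum the activations that share the common
    weight, then multiply once (distributive law)."""
    return weight * sum(activations)

acts = [0.5, 1.25, -2.0, 3.0]
w = 0.8
# Both forms compute the same result; the second uses one
# multiplication instead of len(acts) multiplications.
assert abs(redundant_dot(w, acts) - deduplicated_dot(w, acts)) < 1e-12
```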

Prosecution Timeline

Jun 02, 2023: Application Filed
Jan 24, 2026: Non-Final Rejection — §102 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12596980: INSPECTION SYSTEM AND INSPECTION METHOD FOR BUILDING COMPONENTS OF BUILDING STRUCTURES BASED ON COMMON DATA ENVIRONMENT AND BUILDING INFORMATION MODELING
Granted Apr 07, 2026 (2y 5m to grant)
Patent 12578995: Distributed Actor-Based Information System and Method
Granted Mar 17, 2026 (2y 5m to grant)
Patent 12572995: SYSTEMS AND METHODS FOR PLAN DETERMINATION
Granted Mar 10, 2026 (2y 5m to grant)
Patent 12566862: METHOD AND SYSTEM FOR DYNAMICALLY ASSESSING CURRENT RISK ASSOCIATED WITH A MARITIME ACTIVITY
Granted Mar 03, 2026 (2y 5m to grant)
Patent 12511599: COMPUTER NETWORK WITH A PERFORMANCE ENGINEERING MATURITY MODEL SYSTEM
Granted Dec 30, 2025 (2y 5m to grant)
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 1-2
Grant Probability: 40%
With Interview: 80% (+39.5%)
Median Time to Grant: 3y 4m
PTA Risk: Low
Based on 267 resolved cases by this examiner. Grant probability derived from career allow rate.
