Prosecution Insights
Last updated: April 20, 2026
Application No. 17/357,687

METHODS, SYSTEMS, ARTICLES OF MANUFACTURE AND APPARATUS TO IMPROVE ALGORITHMIC SOLVER PERFORMANCE

Final Rejection — §101, §102, §103
Filed
Jun 24, 2021
Examiner
MENGISTU, TEWODROS E
Art Unit
2127
Tech Center
2100 — Computer Architecture & Software
Assignee
Intel Corporation
OA Round
2 (Final)
49%
Grant Probability
Moderate
3-4
OA Rounds
4y 5m
To Grant
77%
With Interview

Examiner Intelligence

Grants 49% of resolved cases
49%
Career Allow Rate
62 granted / 127 resolved
-6.2% vs TC avg
Strong +28% interview lift
+28.2%
Interview Lift (resolved cases with interview)
Typical timeline
4y 5m
Avg Prosecution
34 currently pending
Career history
161
Total Applications
across all art units

Statute-Specific Performance

§101
27.9%
-12.1% vs TC avg
§103
44.5%
+4.5% vs TC avg
§102
9.6%
-30.4% vs TC avg
§112
14.7%
-25.3% vs TC avg
Black line = Tech Center average estimate • Based on career data from 127 resolved cases

Office Action

§101 §102 §103
Detailed Action

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first-inventor-to-file provisions of the AIA. Claims 1-20 are pending for examination. Claims 1, 7, 13, and 19 are independent.

Response to Amendment

This Office action is responsive to the amendments filed on 07/29/2025. As directed by the amendments, claims 1, 7, 13, and 19 are amended.

Response to Arguments

Applicant's arguments filed 07/29/2025 have been fully considered but are not fully persuasive.

Applicant's arguments regarding 35 U.S.C. § 101: No mental process is recited in new claim 21 as that term is defined in the MPEP. Claim 21 sets forth a specialized sequence of operations that cannot be performed by the human mind. For instance, unlike the erroneous allegations that the subject matter is merely a "judgement ... to generate vectors" or "to reduce the error of a function" (Id.), new claim 21 is focused on updating weights of a neural network. Updating weights of a neural network is outside the realm of activities which can practically be performed in the human mind alone. See, for example, para. [0013] of the instant application, explaining that for voluminous quantities of graph input data, known solution techniques "consume relatively long periods of time and suffer from error due to human discretion that ... accompanies heuristic approaches" and are "deemed impractical to be performed by a human with pen and paper." (Id.) Stated differently, a mental process (e.g., the alleged judgments referred to by the Office action) is a source of a problem that is eliminated by the claimed subject matter, precisely by moving away from processes performed in the human mind. […]

Examiner response: Examiner respectfully disagrees; no exact method for transforming is given. Under the broadest reasonable interpretation, generating a vector representation could be performed mentally or with pen and paper.
Further operations include mathematical calculations and vector operations. These continue to recite abstract ideas, as detailed in the updated § 101 rejection below.

Applicant further argues: For example, claim 21 of the instant application sets forth "execute a second algorithm to update weights of a neural network based on the ranked nodes." The claimed apparatus is structured to implement a practical solution to the problems of "traditional approaches, like heuristics and standard output and loss mechanisms," which result in conditions where "effective network learning cannot occur because the network behaves in a confused manner when there are multiple equivalent optimal solutions ... which does not yield a useful or accurate solution." (Id.) The claimed solution overcomes the issues of such traditional approaches by programming at least one processor circuit to associate second features to nodes associated with the second quantity of node class probabilities, the second features different than the first features. This is a real-world advancement that improves the performance of a machine, such as the novel hybrid pipeline of FIG. 1. It has nothing to do with mental processes and cannot be accomplished with pen and paper to achieve the intended results. Known solutions have already exhibited the problematic results when human-mind efforts (e.g., heuristics) are applied, and the subject matter of claim 21 removes any such human element.

Examiner response: Examiner respectfully disagrees. MPEP § 2106.05(a) reads: "During examination, the examiner should analyze the 'improvements' consideration by evaluating the specification and the claims to ensure that a technical explanation of the asserted improvement is present in the specification, and that the claim reflects the asserted improvement." However, the MPEP (§ 2106.05(a)(II)) also warns, "it is important to keep in mind that an improvement in the abstract idea itself (e.g. a recited fundamental economic concept) is not an improvement in technology." Here, the alleged improvement takes the form of optimizing a mathematical calculation, which is an improvement to the abstract idea itself and is not an improvement in technology.

Applicant's arguments regarding 35 U.S.C. § 103: Applicant's arguments with respect to claim(s) have been considered but are moot because of the new ground of rejection.

Claim Objections

Claim 1 is objected to because of the following informalities: Claim 1, on page 3, line 2, has a period at the end rather than the previous semicolon. Appropriate correction is required.

Claim Rejections - 35 USC § 101

35 U.S.C. 101 reads as follows: Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.

Claims 1-20 are rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more.

Step 1: According to the first part of the analysis, in the instant case, claims 1-6 are directed to an apparatus, claims 7-12 are directed to a machine-readable storage medium, claims 13-18 are directed to an apparatus, and claims 19-20 are directed to a method. Thus, each of the claims falls within one of the four statutory categories (i.e., process, machine, manufacture, or composition of matter).

Regarding Claim 1

2A Prong 1:

generate a vector representation corresponding to a graph input, the vector representation having first features. (This step for generating a vector is practically performable in the human mind and is understood to be a recitation of a mental process (i.e., judgment/evaluation).)
generate node embedding classification instructions, the node embedding classification instructions to cause an output layer of probabilities corresponding to nodes of the graph input, the node embedding classification including a first quantity of node class probabilities based on first features of the vector representation; (This step for generating node embedding classification instructions is practically performable in the human mind and is understood to be a recitation of a mental process (i.e., judgment/evaluation).)

identify a second quantity of the node class probabilities; (This step for identifying a second quantity is practically performable in the human mind and is understood to be a recitation of a mental process (i.e., judgment/evaluation).)

associate second features to nodes associated with the second quantity of node class probabilities, the second features different than the first features; (This step for associating features to nodes is practically performable in the human mind and is understood to be a recitation of a mental process (i.e., judgment).)

rank the nodes associated with the second quantity of node class probabilities; (This step for ranking nodes is practically performable in the human mind and is understood to be a recitation of a mental process (i.e., judgment/evaluation).)

calculate one or more solutions based on ranked ones of the output layer of probabilities; and (This step for calculating solutions is understood to be a recitation of mathematical concepts (i.e., mathematical calculation).)

2A Prong 2: This judicial exception is not integrated into a practical application. Additional elements:

An apparatus, comprising: interface circuitry to access a graph input; machine-readable instructions; and at least one processor circuit to be programmed by the machine-readable instructions to: (The "apparatus," "interface circuitry," "machine-readable instructions," and "processor" are understood to be generic computer equipment. See MPEP 2106.05(f).)
execute a first algorithm to (This step is adding the words "apply it" (or an equivalent) with the judicial exception, or merely applying a generic algorithm as a tool to perform the abstract idea (i.e., identifying) - see MPEP 2106.05(f).)

execute a second algorithm to update weights of a neural network based on the ranked nodes. (This step is adding the words "apply it" (or an equivalent) with the judicial exception, or merely applying a generic algorithm as a tool to perform the abstract idea - see MPEP 2106.05(f).)

The additional elements as disclosed above, alone or in combination, do not integrate the judicial exception into a practical application, as they are generic computer functions that are implemented to perform the disclosed abstract idea above.

2B: The claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception. Additional elements:

An apparatus, comprising: interface circuitry to access a graph input; machine-readable instructions; and at least one processor circuit to be programmed by the machine-readable instructions to: (The "apparatus," "interface circuitry," "machine-readable instructions," and "processor" are understood to be generic computer equipment. See MPEP 2106.05(f).)

execute a first algorithm to (This step is adding the words "apply it" (or an equivalent) with the judicial exception, or merely applying a generic algorithm as a tool to perform the abstract idea (i.e., identifying) - see MPEP 2106.05(f).)

execute a second algorithm to update weights of a neural network based on the ranked nodes. (This step is adding the words "apply it" (or an equivalent) with the judicial exception, or merely applying a generic algorithm as a tool to perform the abstract idea - see MPEP 2106.05(f).)
The additional elements as disclosed above, in combination with the abstract idea, are not sufficient to amount to significantly more than the judicial exception, as they are generic computer functions that are implemented to perform the disclosed abstract idea above.

Regarding Claim 7: see the rejection of claim 1 above; the same rationale applies. 2A Prong 2 & 2B: The claim recites another additional element, "At least one machine-readable storage medium comprising instructions that, when executed, cause at least one processor to at least:" (mere instructions to apply the exception using a generic computer component - see MPEP 2106.05(f)).

Regarding Claim 13: see the rejection of claim 1 above; the same rationale applies. 2A Prong 2 & 2B: The claim recites another additional element, "An apparatus, comprising: graph transforming circuitry, vector classification circuitry, loss calculating circuitry" (mere instructions to apply the exception using a generic computer component - see MPEP 2106.05(f)).

Regarding Claim 19: see the rejection of claim 1 above; the same rationale applies.

Regarding Claims 2, 8, and 20: 2A Prong 1: (This step is practically performable in the human mind and is understood to be a recitation of a mental process (i.e., judgment/evaluation).) 2A Prong 2 & 2B: wherein the processor circuitry is to (This limitation is understood to be generic computer equipment. See MPEP 2106.05(f).)

Regarding Claims 3, 9, and 15: 2A Prong 1: (This step is understood to be a recitation of mathematical concepts (i.e., mathematical calculation).) 2A Prong 2 & 2B: wherein the processor circuitry is to (This limitation is understood to be generic computer equipment. See MPEP 2106.05(f).)

Regarding Claim 4: 2A Prong 1: (This step is practically performable in the human mind and is understood to be a recitation of a mental process (i.e., judgment/evaluation).)
2A Prong 2 & 2B: wherein the processor circuitry is to (This limitation is understood to be generic computer equipment. See MPEP 2106.05(f).)

Regarding Claims 5, 11, and 17: 2A Prong 1: (This step is understood to be a recitation of mathematical concepts (i.e., mathematical calculation).) 2A Prong 2 & 2B: wherein the processor circuitry is to (This limitation is understood to be generic computer equipment. See MPEP 2106.05(f).)

Regarding Claims 6, 12, and 18: 2A Prong 1: (This step is practically performable in the human mind and is understood to be a recitation of a mental process (i.e., judgment/evaluation).) 2A Prong 2 & 2B: wherein the processor circuitry is to (This limitation is understood to be generic computer equipment. See MPEP 2106.05(f).)

Regarding Claim 10: 2A Prong 1: wherein the instructions, when executed, cause the at least one processor to form a hybrid pipeline with a graph embedding stage, a node embedding stage, and an algorithmic solver stage. (This step is practically performable in the human mind and is understood to be a recitation of a mental process (i.e., judgment/evaluation).) 2A Prong 2 & 2B: The claim does not recite any additional elements.

Regarding Claim 14: 2A Prong 1: wherein the ranked ones of the output layer of probabilities correspond to a minimum loss error. (This step is practically performable in the human mind and is understood to be a recitation of a mental process (i.e., judgment/evaluation).) 2A Prong 2 & 2B: The claim does not recite any additional elements.

Regarding Claim 16: 2A Prong 1: wherein the graph transforming circuitry, the vector classification circuitry and the algorithmic solving circuitry form a hybrid pipeline. (This step is practically performable in the human mind and is understood to be a recitation of a mental process (i.e., judgment/evaluation).) 2A Prong 2 & 2B: The claim does not recite any additional elements.
Claim Rejections - 35 USC § 102

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action: A person shall be entitled to a patent unless – (a)(2) the claimed invention was described in a patent issued under section 151, or in an application for patent published or deemed published under section 122(b), in which the patent or application, as the case may be, names another inventor and was effectively filed before the effective filing date of the claimed invention.

Claims 1-3, 6-9, 12-15, and 18-20 are rejected under 35 U.S.C. 102(a)(2) as being anticipated by Dai et al. (Learning Combinatorial Optimization Algorithms over Graphs, hereinafter "Dai").

Regarding Claim 1, Dai discloses:

An apparatus, comprising: interface circuitry to access a graph input; machine-readable instructions; and at least one processor circuit to be programmed by the machine-readable instructions to: ([Section 5.3 and page 2 section 2] discloses using GPUs and sampling graph input (i.e., access graph input).)

generate a vector representation corresponding to a graph input, the vector representation having first features. ([Page 2 section 2.
Algorithm representation and Page 4 section 3.1 Structure2Vec] "More specifically, structure2vec defines the network architecture recursively according to an input graph structure G, and the computation graph of structure2vec is inspired by graphical model inference algorithms, where node-specific tags or features x_v are aggregated recursively according to G's graph topology. […]")

generate node embedding classification instructions, the node embedding classification instructions to cause an output layer of probabilities corresponding to nodes of the graph input, the node embedding classification including a first quantity of node class probabilities based on first features of the vector representation; ([Page 4 section 3.1 Structure2Vec and equation 2] "This graph embedding network will compute a p-dimensional feature embedding µ_v for each node v ∈ V, […] µ_v^(t+1) ← F(x_v, {µ_u^(t)}_{u∈N(v)}, {w(v,u)}_{u∈N(v)}; Θ), […] F is a generic nonlinear mapping such as a neural network or kernel function." Examiner interprets F as the node embedding classification.)

execute a first algorithm to identify a second quantity of the node class probabilities; ([Page 3 Section 2, page 7 Baseline Algorithms, Figure 1, and Algorithm 1] describes executing a greedy algorithm (i.e., first algorithm) for identifying node scores that maximize evaluations (i.e., second quantity of the node class).)

associate second features to nodes associated with the second quantity of node class probabilities, the second features different than the first features; ([Page 3 Section 2, page 7 Baseline Algorithms, Figure 1, and Algorithm 1] Examiner interprets the best node scores to correspond to features x_v, which are different from the features that are not considered the best.)
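The structure2vec update quoted above, µ_v^(t+1) ← F(x_v, {µ_u^(t)}, {w(v,u)}; Θ), can be sketched as a short routine. This is an illustrative reading only, not Dai's released code: the ReLU form of F, the parameter names theta1-theta3, and the parameter shapes are all assumptions.

```python
import numpy as np

def structure2vec_step(mu, x, adj, w_edge, theta1, theta2, theta3):
    """One synchronous round of mu_v^(t+1) <- F(x_v, {mu_u^(t)}, {w(v,u)}; Theta).

    mu:     (n, p) current node embeddings mu^(t)
    x:      (n,)   scalar node features x_v
    adj:    (n, n) 0/1 adjacency matrix defining the neighborhoods N(v)
    w_edge: (n, n) edge weights w(v, u)
    """
    n, p = mu.shape
    mu_next = np.zeros_like(mu)
    for v in range(n):
        nbrs = np.nonzero(adj[v])[0]
        agg_mu = mu[nbrs].sum(axis=0)        # aggregate neighbor embeddings
        agg_w = w_edge[v, nbrs].sum()        # aggregate incident edge weights
        # F: a generic nonlinear mapping; here a ReLU of a linear combination
        mu_next[v] = np.maximum(0.0, theta1 * x[v] + theta2 @ agg_mu + theta3 * agg_w)
    return mu_next

# tiny 3-node path graph, p = 2 embedding dimensions (hypothetical sizes)
rng = np.random.default_rng(0)
adj = np.array([[0, 1, 0], [1, 0, 1], [0, 1, 0]], dtype=float)
mu = structure2vec_step(
    mu=np.zeros((3, 2)), x=np.ones(3), adj=adj, w_edge=adj,
    theta1=rng.normal(size=2), theta2=rng.normal(size=(2, 2)),
    theta3=rng.normal(size=2),
)
```

In the paper this update is iterated T times so that each embedding absorbs information from its T-hop neighborhood; the sketch shows a single round.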
rank the nodes associated with the second quantity of node class probabilities; ([Page 3 Section 2] "A generic greedy algorithm selects a node v to add next such that v maximizes an evaluation function, Q(h(S), v) ∈ R, which depends on the combinatorial structure h(S) of the current partial solution. Then, the partial solution S will be extended as S := (S, v*), where v* := argmax_{v∈S̄} Q(h(S), v), and (S, v*) denotes appending v* to the end of a list S. This step is repeated until a termination criterion t(h(S)) is satisfied." Examiner interprets the ordered list S as a rank of nodes v associated with a max of an evaluation function (i.e., second quantity of node class probabilities).)

calculate one or more solutions based on ranked ones of the output layer of probabilities; and ([Page 3 section 2, Section 4.2, and Figure 1] describes calculating an evaluation function Q(h(S)) which is based on S (i.e., ranked probabilities).)

execute a second algorithm to update weights of a neural network based on the ranked nodes. ([Section 4.2, Section 5, Equation 6, Algorithm 1] describes updating parameter Θ (i.e., weights) with stochastic gradient descent (i.e., second algorithm) of a neural network.)
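The generic greedy step quoted from Dai's Section 2 (append v* = argmax Q(h(S), v) until t(h(S)) fires) can be sketched as a short loop. Both `q_fn` and `terminate` are hypothetical placeholders standing in for Q(h(S), v) and t(h(S)); this is not the applicant's or Dai's implementation.

```python
def greedy_rank(nodes, q_fn, terminate):
    """Grow a partial solution S by repeatedly appending v* = argmax Q(h(S), v).

    The ordered list S doubles as the node ranking referenced in the mapping above.
    """
    s = []
    while not terminate(s):
        candidates = [v for v in nodes if v not in s]   # v ranges over S-bar
        if not candidates:
            break
        v_star = max(candidates, key=lambda v: q_fn(s, v))  # argmax of Q(h(S), v)
        s.append(v_star)                                    # S := (S, v*)
    return s

# toy usage: a made-up evaluation function that ignores the partial solution
scores = {"a": 0.2, "b": 0.9, "c": 0.5}
ranking = greedy_rank(list(scores), lambda s, v: scores[v], lambda s: len(s) >= 3)
# ranking orders the nodes by descending score: ["b", "c", "a"]
```

In Dai's framework Q is not fixed by hand but learned (parameterized by the structure2vec embeddings and trained with Q-learning), which is why the same loop also drives the weight update cited for the "second algorithm" limitation.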
Regarding Claim 7, Dai discloses: At least one machine-readable storage medium comprising instructions that, when executed, cause at least one processor to at least: ([Section 5.3] discloses using GPUs.) (Claim 7 is a machine-readable storage medium claim that corresponds to claim 1, and the rest of the limitations are rejected on the same ground.)

Regarding Claim 13, Dai discloses: An apparatus, comprising: ([Section 5.3] discloses using GPUs.) (Claim 13 is a system claim that corresponds to claim 1, and the rest of the limitations are rejected on the same ground.)

Regarding Claim 2, Dai discloses: The apparatus as defined in claim 1, wherein the processor circuitry is to link the ranked ones of the output layer of probabilities to a minimum loss error. ([Section 4.2, Equation 6, and Algorithm 1] describe minimizing the squared loss.)

Regarding Claim 3, Dai discloses: The apparatus as defined in claim 1, wherein the processor circuitry is to softmax the output layer of the node embedding classification instructions to generate the output layer of probabilities. ([Section 3.2] "There, the output of the embedding is linked with a softmax-layer, so that the parameters can be trained end-to-end by minimizing the cross-entropy loss.")

Regarding Claim 6, Dai discloses: The apparatus as defined in claim 1, wherein the processor circuitry is to improve model accuracy by injecting node features into nodes of the graph input. ([Pages 2-3, Section 3.2, and Figure 1] describes adding best nodes.)

Regarding Claim 8: (Claim 8 recites analogous limitations to claim 2 and is therefore rejected on the same ground as claim 2.)

Regarding Claim 9: (Claim 9 recites analogous limitations to claim 3 and is therefore rejected on the same ground as claim 3.)

Regarding Claim 12: (Claim 12 recites analogous limitations to claim 6 and is therefore rejected on the same ground as claim 6.)
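The softmax layer cited against claim 3 is what turns raw embedding outputs into the claimed "output layer of probabilities." A minimal, numerically stable version (illustrative only; the class count and scores below are made up):

```python
import numpy as np

def softmax(logits):
    """Map a vector of class scores to node class probabilities summing to 1."""
    z = logits - logits.max(axis=-1, keepdims=True)  # shift for numerical stability
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

# one node's scores over three hypothetical classes
probs = softmax(np.array([2.0, 1.0, 0.1]))
```

Subtracting the row maximum before exponentiating changes nothing mathematically but prevents overflow for large logits, which is the standard reason softmax is implemented this way.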
Regarding Claim 14, Dai discloses: The apparatus as defined in claim 13, wherein the ranked ones of the output layer of probabilities correspond to a minimum loss error. ([Section 4.2, Equation 6, and Algorithm 1] describe minimizing the squared loss.)

Regarding Claim 15: (Claim 15 recites analogous limitations to claim 3 and is therefore rejected on the same ground as claim 3.)

Regarding Claim 18: (Claim 18 recites analogous limitations to claim 6 and is therefore rejected on the same ground as claim 6.)

Regarding Claim 19, Dai discloses: A method comprising: (Claim 19 is a method claim that corresponds to claim 1, and the rest of the limitations are rejected on the same ground.)

Regarding Claim 20: (Claim 20 recites analogous limitations to claim 2 and is therefore rejected on the same ground as claim 2.)

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103, which forms the basis for all obviousness rejections set forth in this Office action: A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.

Claims 4-5, 10-11, 16, and 17 are rejected under 35 U.S.C. 103 as being unpatentable over Dai in view of Zeng et al. (GraphACT: Accelerating GCN Training on CPU-FPGA Heterogeneous Platforms, hereinafter "Zeng").

Regarding Claim 4, Dai discloses: The apparatus as defined in claim 1. Dai does not explicitly disclose: wherein the processor circuitry is to form a pipeline with graph transforming circuitry, vector classification circuitry and algorithmic solving circuitry. However, Zeng discloses, in the same field of endeavor: wherein the processor circuitry is to form a pipeline with graph transforming circuitry, vector classification circuitry and algorithmic solving circuitry. (Fig. 3, see above: the graph transformation circuitry mapping as ∂L/∂X_s^(l-1) → Mem. Controller → Feature Aggregation → Weight Transformation → Mem. Controller → X_s^(0), X_s^(1), X_s^(2); the vector classification circuitry mapping as X_s^(0), X_s^(1), X_s^(2) → Mem. Controller → ∂L/∂X_s^(l); and the algorithmic solving circuitry mapping as ∂L/∂X_s^(l) + optimizer data → Mem. Controller → X_MLP^out; and Fig. 4.) Dai and Zeng are both analogous art to the present invention because both are from the same field of endeavor, directed to graph networks.
It would have been obvious for one of ordinary skill in the art before the effective filing date of the claimed invention to have combined the method for Combinatorial Optimization Algorithms over Graphs disclosed by Dai with the method for Accelerating Training on CPU-FPGA Heterogeneous Platforms disclosed by Zeng. One of ordinary skill in the art would have been motivated to make this modification in order to train neural networks on CPU-FPGA heterogeneous systems by incorporating multiple algorithm-architecture co-optimizations.

Regarding Claim 5, Dai in view of Zeng discloses: The apparatus as defined in claim 1, wherein the processor circuitry is to apply backpropagation to the hybrid pipeline to improve an accuracy metric of the model. (Zeng, page 2, right column, paragraph 4: "For the node classifier, forward pass of a MLP layer is simply X_MLP^out = ReLU(W_MLP · X_MLP^in), where X_MLP^in and X_MLP^out are the input and output features for all V, and W_MLP is the layer weight. Backward pass of a MLP layer performs computation similar to Equations 2a and 2c.")

Regarding Claim 10, Dai in view of Zeng discloses: The machine-readable storage medium as defined in claim 7, wherein the instructions, when executed, cause the at least one processor to form a hybrid pipeline with a graph embedding stage, a node embedding stage, and an algorithmic solver stage. (Zeng Fig. 3, see above: the graph transformation circuitry mapping as ∂L/∂X_s^(l-1) → Mem. Controller → Feature Aggregation → Weight Transformation → Mem. Controller → X_s^(0), X_s^(1), X_s^(2); the vector classification circuitry mapping as X_s^(0), X_s^(1), X_s^(2) → Mem. Controller → ∂L/∂X_s […]
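The Zeng node-classifier pass quoted for claim 5 (X_MLP^out = ReLU(W_MLP · X_MLP^in), with a backward pass of similar shape) can be sketched as below. The matrix shapes and gradient bookkeeping are illustrative assumptions, not GraphACT's implementation.

```python
import numpy as np

def mlp_forward(w, x_in):
    """Forward pass of one MLP layer: X_out = ReLU(W @ X_in)."""
    pre = w @ x_in
    return np.maximum(0.0, pre), pre          # keep pre-activation for backward

def mlp_backward(w, x_in, pre, grad_out):
    """Backward pass: route grad_out through the ReLU gate, then the weights."""
    grad_pre = grad_out * (pre > 0)           # ReLU derivative is a 0/1 mask
    grad_w = grad_pre @ x_in.T                # gradient w.r.t. layer weight W
    grad_in = w.T @ grad_pre                  # gradient handed to the prior stage
    return grad_w, grad_in

# hypothetical sizes: 4 input features, 3 output classes, 5 nodes
rng = np.random.default_rng(1)
w = rng.normal(size=(3, 4))
x_in = rng.normal(size=(4, 5))
x_out, pre = mlp_forward(w, x_in)
grad_w, grad_in = mlp_backward(w, x_in, pre, grad_out=np.ones_like(x_out))
```

The `grad_in` term is what lets the classifier stage feed gradients back into the upstream aggregation stage, which is the pipelined structure the examiner maps to the claimed hybrid pipeline.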

Prosecution Timeline

Jun 24, 2021
Application Filed
Oct 01, 2024
Non-Final Rejection — §101, §102, §103
Dec 16, 2024
Applicant Interview (Telephonic)
Dec 16, 2024
Examiner Interview Summary
Jan 07, 2025
Response Filed
Jul 29, 2025
Response Filed
Oct 28, 2025
Final Rejection — §101, §102, §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12566817
AUTOMATIC MACHINE LEARNING MODEL EVALUATION
2y 5m to grant Granted Mar 03, 2026
Patent 12482032
Selective Data Rejection for Computationally Efficient Distributed Analytics Platform
2y 5m to grant Granted Nov 25, 2025
Patent 12450465
NEURAL NETWORK SYSTEM, NEURAL NETWORK METHOD, AND PROGRAM
2y 5m to grant Granted Oct 21, 2025
Patent 12400252
ARTIFICIAL INTELLIGENCE BASED TRANSACTIONS CONTEXTUALIZATION PLATFORM
2y 5m to grant Granted Aug 26, 2025
Patent 12380369
HYPERPARAMETER TUNING IN AUTOREGRESSIVE INTEGRATED MOVING AVERAGE (ARIMA) MODELS
2y 5m to grant Granted Aug 05, 2025
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

3-4
Expected OA Rounds
49%
Grant Probability
77%
With Interview (+28.2%)
4y 5m
Median Time to Grant
Moderate
PTA Risk
Based on 127 resolved cases by this examiner. Grant probability derived from career allow rate.
