Prosecution Insights
Last updated: April 19, 2026
Application No. 18/166,840

MACHINE LEARNING-BASED BRANCHING AND DIVING FOR SOLVING COMBINATORIAL OPTIMIZATION COMPUTATIONAL PROBLEM

Non-Final OA: §101, §103, §112
Filed: Feb 09, 2023
Examiner: HOANG, MICHAEL H
Art Unit: 2122
Tech Center: 2100 — Computer Architecture & Software
Assignee: Davidson Technologies Inc.
OA Round: 1 (Non-Final)
Grant Probability: 52% (Moderate)
OA Rounds: 1-2
To Grant: 4y 1m
With Interview: 77%

Examiner Intelligence

Career Allow Rate: 52% (grants 52% of resolved cases; 70 granted / 136 resolved; -3.5% vs TC avg)
Interview Lift: strong, +25.9% among resolved cases with interview
Avg Prosecution (typical timeline): 4y 1m; 26 applications currently pending
Total Applications (career history): 162, across all art units

Statute-Specific Performance

§101: 30.3% (-9.7% vs TC avg)
§103: 45.3% (+5.3% vs TC avg)
§102: 9.1% (-30.9% vs TC avg)
§112: 12.3% (-27.7% vs TC avg)
Tech Center averages are estimates. Based on career data from 136 resolved cases.

Office Action

Rejections: §101, §103, §112
DETAILED ACTION

This action is in response to the claims filed 02/09/2023 for Application No. 18/166,840, which claims priority to provisional application No. 63/267,759 filed 02/09/2022. Claims 1-20 are currently pending.

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Claim Objections

Claims 1, 18 and 20 are objected to because of the following informalities: "unconstrainted" should read "unconstrained". Appropriate correction is required.

Duplicate Claims, Warning

Applicant is advised that should claims 8 and 16 be found allowable, claims 10 and 17 will be objected to under 37 CFR 1.75 as being a substantial duplicate thereof. When two claims in an application are duplicates or else are so close in content that they both cover the same thing, despite a slight difference in wording, it is proper after allowing one claim to object to the other as being a substantial duplicate of the allowed claim. See MPEP § 608.01(m).

Claim Rejections - 35 USC § 112

The following is a quotation of 35 U.S.C. 112(b):

(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.

The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:

The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.

Claims 1-20 are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or for applications subject to pre-AIA 35 U.S.C. 112, the applicant), regards as the invention.

Claims 1, 18, and 20 recite the limitation "the branching algorithm" in line 14.
There is insufficient antecedent basis for this limitation in the claim. The claims previously refer to the algorithm as a “branch-and-bound” algorithm; it is thus unclear whether the “branching” algorithm refers back to the same algorithm. Claims 2-17 and 19 are rejected as being dependent on a rejected base claim without curing any of the deficiencies.

Claim Rejections - 35 USC § 101

35 U.S.C. 101 reads as follows:

Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.

Claims 1-20 are rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more.

Regarding claim 1:

Step 1 Analysis: Claim 1 is directed to a process, which falls within one of the four statutory categories.

Step 2A Prong 1 Analysis: Claim 1 recites, in part, the following limitations:

performing a branch-and-bound algorithm to generate a branch-and-bound search tree of the first computational problem and to identify a single node of the branch-and-bound search tree for further evaluation for determining a complete combinatorial solution for the first computational problem, which can be considered to be a mathematical calculation; and

determining the complete combinatorial solution for the first computational problem based on the branch-and-bound search tree, the determining of the complete combinatorial solution comprising performing an iteration that comprises: performing a diving algorithm on the branch-and-bound search tree starting from the identified single node, the diving algorithm being configured to determine a partial combinatorial solution for the first computational problem based on the identified single node, which can be considered to be a mathematical calculation;
determining, based on the partial combinatorial solution, a second computational problem that comprises a unconstrainted binary optimization problem (UBO), which can be considered to be a mathematical calculation;

determining an optimized solution for the second computational problem by using a quantum computer-based solver, which can be considered to be a mathematical calculation; and

determining the complete combinatorial solution based on the optimized solution and the partial combinatorial solution, which can be considered to be a mathematical calculation.

These limitations, as drafted, are processes that, under broadest reasonable interpretation, cover the recitation of mathematical calculations, which falls within the “Mathematical concepts” grouping of abstract ideas. Accordingly, the claim recites an abstract idea.

Step 2A Prong 2 Analysis: This judicial exception is not integrated into a practical application. In particular, the claim recites the additional elements – “a memory storing instructions”, “one or more digital hardware processors communicatively coupled to the memory and configured by the instructions to perform operations”, “the performing of the branch-and-bound algorithm comprising using a first trained machine learning model as a branching policy for the branching algorithm”, and “the performing of the diving algorithm comprising using a second trained machine learning model as a diving policy for the diving algorithm”. These elements are recited at a high level of generality such that they amount to no more than mere instructions to apply the exception using generic computer components. Please see MPEP 2106.05(f). Accordingly, these additional elements do not integrate the abstract idea into a practical application because they do not impose any meaningful limits on practicing the abstract idea.

The claim further recites: accessing a first computational problem comprising a combinatorial optimization problem.
This limitation is a mere data gathering step and thus is insignificant extra-solution activity. Accordingly, this additional element does not integrate the abstract idea into a practical application because it does not impose any meaningful limits on practicing the abstract idea. The claim as a whole is directed to an abstract idea.

Step 2B Analysis: The claims do not include additional elements that are sufficient to amount to significantly more than the judicial exception. As discussed above with respect to integration of the abstract idea into a practical application, the additional elements of utilizing a memory, one or more digital hardware processors, a first trained machine learning model, and a second trained machine learning model to perform the steps of the claimed process amount to no more than mere instructions to apply the exception using a generic computer component. Mere instructions to apply an exception using a generic computer component cannot provide an inventive concept. Furthermore, the limitation of accessing a first computational problem comprising a combinatorial optimization problem is well-understood, routine, and conventional, as evidenced by MPEP § 2106.05(d)(II)(i), “receiving or transmitting data over a network”. This limitation therefore remains insignificant extra-solution activity even upon reconsideration, and does not amount to significantly more. Even when considered in combination, these additional elements amount to mere instructions to apply the exception using generic computer components and insignificant extra-solution activity, which cannot provide an inventive concept. The claim is not patent eligible.
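For orientation, the claimed flow that the rejection characterizes as mathematical calculation (branch-and-bound selection, partial diving, reduction of the unfixed variables to a binary subproblem, and solving that subproblem) can be sketched as a Python toy. Everything here is a hypothetical stand-in: `dive_policy` is a hand-coded greedy rule in place of the claimed second trained machine learning model, and `solve_subproblem` brute-forces the free variables in place of the claimed quantum computer-based solver.

```python
from itertools import product

# Toy combinatorial problem: maximize 3*x0 + 4*x1 + 2*x2 + 5*x3
# subject to x0 + x1 + x2 + x3 <= 2, with each xi in {0, 1}.
VALUES = [3, 4, 2, 5]

def feasible(x):
    return sum(x) <= 2

def objective(x):
    return sum(v * xi for v, xi in zip(VALUES, x))

def dive_policy(partial):
    # Hand-coded greedy rule standing in for the claimed second trained
    # machine learning model: fix the highest-value unfixed variable to 1.
    fixed = list(partial)
    best = max((i for i, xi in enumerate(fixed) if xi is None),
               key=lambda i: VALUES[i])
    fixed[best] = 1
    return fixed

def solve_subproblem(partial):
    # Brute force over the remaining free binary variables, standing in for
    # the claimed quantum computer-based solver of the binary subproblem.
    free = [i for i, xi in enumerate(partial) if xi is None]
    best_x, best_val = None, float("-inf")
    for bits in product([0, 1], repeat=len(free)):
        x = list(partial)
        for i, b in zip(free, bits):
            x[i] = b
        if feasible(x) and objective(x) > best_val:
            best_x, best_val = x, objective(x)
    return best_x

partial = dive_policy([None] * 4)     # diving fixes part of the assignment
complete = solve_subproblem(partial)  # sub-solver completes it
```

In this toy, diving fixes x3 = 1 and the sub-solver completes the assignment to [0, 1, 0, 1], which happens to be the global optimum; in general the loop would repeat while a diving criterion holds.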
Regarding claim 2, the rejection of claim 1 is further incorporated, and further, the claim recites: wherein the determining of the complete combinatorial solution based on the optimized solution and the partial combinatorial solution comprises: determining an intermediate binary solution by combining the optimized solution and the partial combinatorial solution; and determining the complete combinatorial solution based on the binary intermediate solution by converting the binary intermediate solution to the complete combinatorial solution. This claim recites additional mathematical steps in addition to the judicial exception identified in the rejection of claim 1, thus recites a judicial exception. The claim does not include any additional elements that amount to an integration of the judicial exceptions into a practical application, nor to significantly more than the judicial exceptions. The claim is not patent eligible.

Regarding claim 3, the rejection of claim 1 is further incorporated, and further, the claim recites: wherein the iteration comprises: determining whether at least one diving criterion is satisfied for performing the iteration again; and in response to determining that the at least one diving criterion is satisfied, reperforming the iteration. This claim recites additional mental steps in addition to the judicial exception identified in the rejection of claim 1, thus recites a judicial exception. The claim does not include any additional elements that amount to an integration of the judicial exceptions into a practical application, nor to significantly more than the judicial exceptions. The claim is not patent eligible.
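Claim 2's combination step, as recited, joins the optimized sub-solution with the partial combinatorial solution into an intermediate binary solution. A minimal sketch of one plausible reading (fixed positions kept, `None` placeholders filled from the sub-solution; the function name and the `None` encoding are assumptions, not from the record):

```python
def combine(partial, sub_solution):
    # Fill each unfixed (None) position of the partial combinatorial solution
    # with the next value from the optimized sub-problem solution.
    values = iter(sub_solution)
    return [next(values) if v is None else v for v in partial]

# Variables 1 and 3 were left free by diving; the sub-solver fixed them to 1 and 0.
intermediate = combine([1, None, 0, None], [1, 0])  # [1, 1, 0, 0]
```

The further "converting" step of claim 2 would then map this binary intermediate vector back to the original problem's variable domain.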
Regarding claim 4, the rejection of claim 1 is further incorporated, and further, the claim recites: wherein a global dual bound of an optimal solution for the first computational problem is determined during performance of the branch-and-bound algorithm, wherein a global primal bound for the optimal solution is determined during performance of the diving algorithm, and wherein the at least one diving criterion comprises a relative gap value surpasses a predetermined threshold value, the relative gap value representing a gap between the global primal bound and the global dual bound. This claim recites additional mathematical steps in addition to the judicial exception identified in the rejection of claim 1, thus recites a judicial exception. The claim does not include any additional elements that amount to an integration of the judicial exceptions into a practical application, nor to significantly more than the judicial exceptions. The claim is not patent eligible.

Regarding claim 5, the rejection of claim 3 is further incorporated, and further, the claim recites: wherein the at least one diving criterion comprises at least one of an elapsed execution time, or a number of times the iteration was performed. This limitation amounts to more specifics of the judicial exception identified in the rejection of claim 1 above. The claim does not include any additional elements that amount to an integration of the judicial exception into a practical application, nor to significantly more than the judicial exception. The claim is not patent eligible.
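The primal/dual gap criterion of claim 4 is compact enough to state directly. A sketch; normalizing by the primal bound magnitude is one common convention and an assumption here, since the claim only requires some relative gap measure:

```python
def relative_gap(primal_bound, dual_bound, eps=1e-9):
    # Sub-optimality gap between the global primal bound and the global dual
    # bound, normalized by the primal bound magnitude (one common convention).
    return abs(primal_bound - dual_bound) / max(abs(primal_bound), eps)

def diving_criterion(primal_bound, dual_bound, threshold):
    # Claim 4's criterion: the iteration is performed again while the
    # relative gap value surpasses a predetermined threshold value.
    return relative_gap(primal_bound, dual_bound) > threshold
```

For example, with a primal bound of 100 and a dual bound of 90 the relative gap is 0.1, so a 5% threshold would trigger another diving iteration.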
Regarding claim 6, the rejection of claim 3 is further incorporated, and further, the claim recites: wherein the operations comprise: in response to determining that the at least one diving criterion is not satisfied: determining whether at least one branching criterion is satisfied; and in response determining that at least one branching criterion is not satisfied, providing the complete combinatorial solution as a best-known solution to the first computational problem. This claim recites additional mental steps in addition to the judicial exception identified in the rejection of claim 1, thus recites a judicial exception. The claim does not include any additional elements that amount to an integration of the judicial exceptions into a practical application, nor to significantly more than the judicial exceptions. The claim is not patent eligible.

Regarding claim 7, the rejection of claim 3 is further incorporated, and further, the claim recites: wherein the operations comprise: in response to determining that the at least one diving criterion is not satisfied: determining whether at least one branching criterion is satisfied; and in response determining that at least one branching criterion is satisfied, performing another iteration of the branch-and-bound algorithm to identify another single node of the branch-and-bound search tree for further evaluation for the complete combinatorial solution for the first computational problem. This claim recites additional mental steps in addition to the judicial exception identified in the rejection of claim 1, thus recites a judicial exception. The claim does not include any additional elements that amount to an integration of the judicial exceptions into a practical application, nor to significantly more than the judicial exceptions. The claim is not patent eligible.
Regarding claim 8, the rejection of claim 1 is further incorporated, and further, the claim recites: wherein the operations comprise: wherein the quantum computer-based solver is implemented using a quantum annealer. This limitation merely generally links the judicial exception to a field of use or technological environment. Please see MPEP 2106.05(h). The claim does not include any additional elements that amount to an integration of the judicial exceptions into a practical application, nor to significantly more than the judicial exceptions. The claim is not patent eligible.

Regarding claim 9, the rejection of claim 1 is further incorporated, and further, the claim recites: wherein the determining of the optimized solution for the second computational problem by using the quantum computer-based solver comprises: converting the second computational problem to a third computational problem that comprises a quadratic unconstrained binary optimization (QUBO) problem; and submitting the third computational problem to the quantum computer-based solver to generate the optimized solution. This claim recites additional mathematical steps in addition to the judicial exception identified in the rejection of claim 1, thus recites a judicial exception. The claim does not include any additional elements that amount to an integration of the judicial exceptions into a practical application, nor to significantly more than the judicial exceptions. The claim is not patent eligible.

Regarding claim 10, the rejection of claim 1 is further incorporated, and further, the claim recites: wherein the quantum computer-based solver is implemented using a quantum annealer. This limitation merely generally links the judicial exception to a field of use or technological environment. Please see MPEP 2106.05(h).
The claim does not include any additional elements that amount to an integration of the judicial exceptions into a practical application, nor to significantly more than the judicial exceptions. The claim is not patent eligible.

Regarding claim 11, the rejection of claim 1 is further incorporated, and further, the claim recites: wherein the quantum computer-based solver is implemented using a universal gate quantum computer. This limitation merely generally links the judicial exception to a field of use or technological environment. Please see MPEP 2106.05(h). The claim does not include any additional elements that amount to an integration of the judicial exceptions into a practical application, nor to significantly more than the judicial exceptions. The claim is not patent eligible.

Regarding claim 12, the rejection of claim 1 is further incorporated, and further, the claim recites: wherein the first trained machine learning model comprises a trained neural network. This limitation merely generally links the judicial exception to a field of use or technological environment. Please see MPEP 2106.05(h). The claim does not include any additional elements that amount to an integration of the judicial exceptions into a practical application, nor to significantly more than the judicial exceptions. The claim is not patent eligible.

Regarding claim 13, the rejection of claim 12 is further incorporated, and further, the claim recites: wherein the trained neural network is trained by performing another branch-and-bound algorithm that uses full strong branching. This limitation merely generally links the judicial exception to a field of use or technological environment. Please see MPEP 2106.05(h). The claim does not include any additional elements that amount to an integration of the judicial exceptions into a practical application, nor to significantly more than the judicial exceptions. The claim is not patent eligible.
Regarding claim 14, the rejection of claim 1 is further incorporated, and further, the claim recites: wherein the second trained machine learning model comprises a trained neural network. This limitation merely generally links the judicial exception to a field of use or technological environment. Please see MPEP 2106.05(h). The claim does not include any additional elements that amount to an integration of the judicial exceptions into a practical application, nor to significantly more than the judicial exceptions. The claim is not patent eligible.

Regarding claim 15, the rejection of claim 1 is further incorporated, and further, the claim recites: wherein the second trained machine learning model comprises a generative model. This limitation merely generally links the judicial exception to a field of use or technological environment. Please see MPEP 2106.05(h). The claim does not include any additional elements that amount to an integration of the judicial exceptions into a practical application, nor to significantly more than the judicial exceptions. The claim is not patent eligible.

Regarding claim 16, the rejection of claim 1 is further incorporated, and further, the claim recites: wherein the second trained machine learning model is trained using training data that comprises a plurality of feasible solutions for combinatorial optimization problems considered during training. This limitation merely generally links the judicial exception to a field of use or technological environment. Please see MPEP 2106.05(h). The claim does not include any additional elements that amount to an integration of the judicial exceptions into a practical application, nor to significantly more than the judicial exceptions. The claim is not patent eligible.
Regarding claim 17, the rejection of claim 1 is further incorporated, and further, the claim recites: wherein the second trained machine learning model is trained using training data that comprises a plurality of feasible solutions for combinatorial optimization problems considered during training. This limitation merely generally links the judicial exception to a field of use or technological environment. Please see MPEP 2106.05(h). The claim does not include any additional elements that amount to an integration of the judicial exceptions into a practical application, nor to significantly more than the judicial exceptions. The claim is not patent eligible.

Claim 18 recites features similar to claim 1 and is rejected for at least the same reasons therein. Claim 18 additionally requires analysis of the recitation of "A non-transitory computer-readable medium comprising instructions that, when executed by one or more digital hardware processors of a computing device, cause the computing device to perform operations comprising…" This is considered to be an additional element under Step 2A Prong 2; however, this additional element amounts to mere instructions to apply the judicial exception using a generic computer component. Please see MPEP 2106.05(f).

Regarding claim 19, the rejection of claim 18 is further incorporated, and further, the claim recites: wherein the determining of the optimized solution for the second computational problem by using the quantum computer-based solver comprises: converting the second computational problem to a third computational problem that comprises a quadratic unconstrained binary optimization (QUBO) problem; and submitting the third computational problem to the quantum computer-based solver to generate the optimized solution. This claim recites additional mathematical steps in addition to the judicial exception identified in the rejection of claim 18, thus recites a judicial exception.
The claim does not include any additional elements that amount to an integration of the judicial exceptions into a practical application, nor to significantly more than the judicial exceptions. The claim is not patent eligible.

Regarding Claim 20, it recites features similar to claims 1 and 18 and is rejected for at least the same reasons therein.

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.

Claims 1-20 are rejected under 35 U.S.C. 103 as being unpatentable over Nair et al.
("Solving Mixed Integer Programs Using Neural Networks" cited by applicant in the IDS filed 02/09/2023) in view of Macready ("US 20140025606 A1", hereinafter "Macready").

Regarding claim 1, Nair teaches A system comprising: a memory storing instructions; and one or more digital hardware processors communicatively coupled to the memory and configured by the instructions to perform operations (“When using such networks with a high-dimensional edge embedding at every layer, their memory usage can become much higher than that of GCNs, which only need the adjacency matrix, and may not fit the GPU memory unless the number of layers is reduced at the cost of accuracy. GCNs are a better fit for our goal of scaling to large-scale MIPs.” [pg. 10, 3.3., ¶2]) comprising: accessing a first computational problem comprising a combinatorial optimization problem (“Our work is an instance of the broader topic of learning to solve combinatorial optimization problems” [pg. 35, top para]); performing a branch-and-bound algorithm to generate a branch-and-bound search tree of the first computational problem and to identify a single node of the branch-and-bound search tree for further evaluation for determining a complete combinatorial solution for the first computational problem, the performing of the branch-and-bound algorithm comprising using a first trained machine learning model as a branching policy for the branching algorithm (“Neural Branching: This component is mainly used to bound the gap between the objective value of the best assignment and an optimal one. MIP solvers use a form of tree search over integer variables called branch-and-bound (Land and Doig 1960) (see section 2), which progressively tightens the bound and helps find feasible assignments. The choice of the variable to branch on at a given node is a key factor in determining search efficiency (Achterberg et al. 2005, Glankwamdee and Linderoth 2011, Schubert 2017, Yang et al. 2019).
We train a deep neural network policy to imitate choices made by an expert policy (“branching policy”)” [pg. 3, ¶2]); and determining the complete combinatorial solution for the first computational problem based on the branch-and-bound search tree, the determining of the complete combinatorial solution (“A common procedure to solve MIPs is to recursively build a search tree, with partial integer assignments at each node, and use the information gathered at each node to eventually converge on an optimal (or close to optimal) assignment” [pg. 6, 2.2. Branch-and-bound]) comprising performing an iteration that comprises: performing a diving algorithm on the branch-and-bound search tree starting from the identified single node, the diving algorithm being configured to determine a partial combinatorial solution for the first computational problem based on the identified single node, the performing of the diving algorithm comprising using a second trained machine learning model as a diving policy for the diving algorithm (“Diving refers to a set of primal heuristics that explore the branch-and-bound tree in a depth-first manner by sequentially fixing integer variables until a leaf node is reached or the assignment is deemed infeasible… First, diving can be started from any node, but in this work we focus on diving only from the root node, (“identified single node”) though in principle it could be performed from other nodes. Secondly, diving typically descends all the way to a leaf node, but in this case we descend only partially (“partial combinatorial solution”) and then use a MIP solver to solve the remaining sub-MIP” [pg. 14, 6. Neural Diving, Note: Nair further discloses “training a generative model over assignments” thus would teach the “diving algorithm using a second trained machine learning model as a diving policy” pg. 
14, ¶1]); determining the complete combinatorial solution based on the optimized solution and the partial combinatorial solution (“A (complete) assignment is any point x ∈ R n . A partial assignment is when we have fixed some, but not all, of the variable values. A feasible assignment is an assignment that satisfies all the constraints in problem (1), and an optimal assignment, or a solution, is a feasible assignment that also minimizes the objective” [pg. 6, 2. Integer programming background]).

However, Nair fails to explicitly teach: determining, based on the partial combinatorial solution, a second computational problem that comprises a unconstrainted binary optimization problem (UBO); and determining an optimized solution for the second computational problem by using a quantum computer-based solver.

Macready teaches determining, based on the partial combinatorial solution, a second computational problem that comprises a unconstrainted binary optimization problem (UBO) (“…wherein optimizing the objective for a third set of values for the Boolean weights includes mapping the objective to a second quadratic unconstrained binary optimization ("QUBO") problem and using a quantum processor to at least approximately minimize the second QUBO problem.” [¶0017]); determining an optimized solution for the second computational problem by using a quantum computer-based solver (“As such, quantum annealing may be implemented as a heuristic technique, where low-energy states with energy near that of the ground state may provide approximate solutions to the problem.” [¶0012; see also the algorithm at ¶0053, which discloses a “QUBO solver”]).

It would have been obvious to one of ordinary skill in the art before the effective filing date to modify Nair’s teachings by solving a second computational problem such as an unconstrained binary optimization problem with a quantum solver as taught by Macready.
One would have been motivated to make this modification as Quadratic Unconstrained Binary Optimization Problems are known in the art and applications arise in many fields for example machine learning, pattern matching, economics and finance, and statistical mechanics. [¶0013, Macready] Regarding claim 2, Nair/Macready teaches The system of claim 1, wherein the determining of the complete combinatorial solution based on the optimized solution and the partial combinatorial solution comprises: determining an intermediate binary solution by combining the optimized solution and the partial combinatorial solution (“We now combine Neural Branching and Neural Diving into a single solver. As we shall see, this results in significant speedups over Tuned SCIP.” [pg. 30, §8. Joint Evaluation]); and determining the complete combinatorial solution based on the binary intermediate solution by converting the binary intermediate solution to the complete combinatorial solution. (“This approach also offers practical computational advantages: both the sampling of predictions, and the solution search afterwards are fully parallelizable. We can repeatedly and independently extract many different samples from our model, and each partial assignment of the sample can be independently solved. For a single MIP instance, we can generate many such partial assignments, each of which we can independently solve with SCIP. The feasible solution with the best objective value across all sub-MIPs is then reported as the final solution to the original input MIP.” [pg. 18, ¶4, note; Nair’s teaching of intermediate solutions and finding the feasible solution would teach determining the complete combinatorial solution when combined with the Macready’s unconstrained binary optimization problem]) Same motivation to combine the teachings of Nair/Macready as in claim 1. 
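The QUBO form central to the Macready combination (and to claims 9 and 19) minimizes x^T Q x over binary vectors x. A classical sketch, with brute-force enumeration standing in for the quantum annealer; the example Q, which encodes the penalty (x0 + x1 - 1)^2 up to a constant, is an illustration invented here, not taken from either reference:

```python
from itertools import product

def qubo_energy(Q, x):
    # QUBO objective: x^T Q x for a binary vector x and a square matrix Q.
    n = len(x)
    return sum(Q[i][j] * x[i] * x[j] for i in range(n) for j in range(n))

def brute_force_qubo(Q):
    # Classical stand-in for a quantum annealer: enumerate all binary
    # vectors and return one with the lowest energy.
    n = len(Q)
    return min(product([0, 1], repeat=n), key=lambda x: qubo_energy(Q, x))

# Penalty (x0 + x1 - 1)^2 expanded over binaries (x^2 = x), constant dropped:
# -x0 - x1 + 2*x0*x1, i.e. diagonal entries -1 and symmetric off-diagonal +1.
Q = [[-1.0, 1.0],
     [1.0, -1.0]]
best = brute_force_qubo(Q)  # a lowest-energy vector: exactly one of x0, x1 set
```

An "exactly one of" constraint becomes a soft penalty in this encoding, which is how constrained subproblems are typically massaged into the unconstrained QUBO form an annealer accepts.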
Regarding claim 3, Nair/Macready teaches The system of claim 1, where Nair teaches wherein the iteration comprises: determining whether at least one diving criterion is satisfied for performing the iteration again; and in response to determining that the at least one diving criterion is satisfied, reperforming the iteration. (“Our approach applies learning to the two key sub-tasks of a MIP solver: a) output an assignment of values to all variables that satisfy the constraints (if such an assignment exists), and b) prove a bound for the gap in objective value between that assignment and an optimal one. (“diving criterion”) They define the main components of our approach, Neural Diving and Neural Branching (see figure 1).” [top of page 3; See further pg. 23, “To generate each such instance, we applied 10 iterations of each step (1) and (2) on the original target instance.” ]) Regarding claim 4, Nair/Macready teaches The system of claim 3, where Nair teaches wherein a global dual bound of an optimal solution for the first computational problem is determined during performance of the branch-and-bound algorithm, wherein a global primal bound for the optimal solution is determined during performance of the diving algorithm, and wherein the at least one diving criterion comprises a relative gap value surpasses a predetermined threshold value, the relative gap value representing a gap between the global primal bound and the global dual bound. (“When running branch-and-bound we keep track of the global primal bound (the minimum objective value of any feasible assignment) and the global dual bound (the minimum dual bound across all leaves of the branch-and-bound tree). 
We can combine these to define a sub-optimality gap gap = global primal bound − global dual bound… In practice we terminate branch-and-bound when the relative gap (i.e., normalized in some way, see §5) is below some application-dependent quantity, and produce the best found primal solution as the approximately optimal solution.” [pg. 7, §2.4. Primal-dual gap, ¶1])

Regarding claim 5, Nair/Macready teaches The system of claim 3, where Nair teaches wherein the at least one diving criterion comprises at least one of an elapsed execution time (See pg. 56, 12.7, “elapsed wall clock time to solve MIP”), or a number of times the iteration was performed. (“In all cases we ran the ADMM solver for 100 iterations and used the LP solution of the root node as a warm-start with which to initialize the ADMM solver. First, as a concrete example, consider the air05 binary MIP which at the root node (after presolve and any reductions that happens at the root node) has 6.1k variables, 426 constraints.” [pg. 50, 12.5.1. GPU Speedups])

Regarding claim 6, Nair/Macready teaches The system of claim 3, where Nair teaches wherein the operations comprise: in response to determining that the at least one diving criterion is not satisfied: determining whether at least one branching criterion is satisfied; and in response to determining that at least one branching criterion is not satisfied, providing the complete combinatorial solution as a best-known solution to the first computational problem. (“We now combine Neural Branching and Neural Diving into a single solver. As we shall see, this results in significant speedups over Tuned SCIP. We use the learned heuristics by integrating them into SCIP via the interface provided by PySCIPOpt… To make a comparison based on Neural Diving fairer in terms of compute resources, we only consider its sequential version here. For a single MIP solve, Neural Diving (Sequential) and Neural Branching each use a CPU core and a GPU.
So Neural Branching + Neural Diving (Sequential) uses two cores and two GPUs. We also give Tuned SCIP two cores, which it uses by running two independent SCIP solves on the input MIP with different random seeds. The primal, dual, and primal-dual gaps for Tuned SCIP at a given time are then computed by selecting the best primal and dual bounds across the two runs at that time. Since SCIP does not use GPUs, we do not control for that resource when comparing Tuned SCIP and Neural Solvers” [pg. 30-31, 8. Joint Evaluation])

Regarding claim 7, Nair/Macready teaches The system of claim 3, where Nair teaches wherein the operations comprise: in response to determining that the at least one diving criterion is not satisfied: determining whether at least one branching criterion is satisfied; and in response to determining that at least one branching criterion is satisfied, performing another iteration of the branch-and-bound algorithm to identify another single node of the branch-and-bound search tree for further evaluation for the complete combinatorial solution for the first computational problem. (“We would like an expert policy that solves MIPs by building small branch-and-bound trees, since such an expert is making good branching decisions. Among the many branching policies proposed in the optimization literature (see, e.g., Achterberg et al. (2005)) none provably achieve the smallest trees, but empirically full strong branching (FSB) (Achterberg et al. 2005) tends to use fewer steps than competing approaches. It performs one-step lookahead search by simulating one branching step for all candidate variables and picking the one that provides the largest improvement in the dual bound, as determined by the solution to the linear program at the new node” [pg. 24, 7.1. Expert Policy])

Regarding claim 8, Nair/Macready teaches The system of claim 1, where Macready teaches wherein the quantum computer-based solver is implemented using a quantum annealer.
(“As such, quantum annealing may be implemented as a heuristic technique, where low-energy states with energy near that of the ground state may provide approximate solutions to the problem.” [¶0012]) Same motivation to combine the teachings of Nair/Macready as in claim 1.

Regarding claim 9, Nair/Macready teaches The system of claim 1, where Macready teaches wherein the determining of the optimized solution for the second computational problem by using the quantum computer-based solver comprises: converting the second computational problem to a third computational problem that comprises a quadratic unconstrained binary optimization (QUBO) problem; and submitting the third computational problem to the quantum computer-based solver to generate the optimized solution. (“optimizing the objective for a (t+1).sup.th set of values for the Boolean weights based on the t.sup.th set of values for the dictionary, wherein optimizing the objective for a (t+1).sup.th set of values for the Boolean weights may include mapping the objective to a t.sup.th quadratic unconstrained binary optimization ("QUBO") problem and using a quantum processor to at least approximately minimize the t.sup.th QUBO problem” [¶0018; t+1 would include a third computational problem]) Same motivation to combine the teachings of Nair/Macready as in claim 1.

Regarding claim 10, Nair/Macready teaches The system of claim 1, where Macready teaches wherein the quantum computer-based solver is implemented using a quantum annealer. (“As such, quantum annealing may be implemented as a heuristic technique, where low-energy states with energy near that of the ground state may provide approximate solutions to the problem.” [¶0012]) Same motivation to combine the teachings of Nair/Macready as in claim 1.

Regarding claim 11, Nair/Macready teaches The system of claim 1, where Macready teaches wherein the quantum computer-based solver is implemented using a universal gate quantum computer.
(“For example, an objective that is normally minimized in compressed sensing techniques is re-cast as a quadratic unconstrained binary optimization ("QUBO") problem that is well-suited to be solved using a quantum processor, such as an adiabatic quantum processor and/or a processor designed to implement quantum annealing.” [¶0032; adiabatic quantum processor corresponds to a universal gate quantum computer]) Same motivation to combine the teachings of Nair/Macready as in claim 1.

Regarding claim 12, Nair/Macready teaches The system of claim 1, where Nair teaches wherein the first trained machine learning model comprises a trained neural network. (“Our approach constructs two corresponding neural network-based components, Neural Diving and Neural Branching, to use in a base MIP solver such as SCIP” [Abstract])

Regarding claim 13, Nair/Macready teaches The system of claim 12, where Nair teaches wherein the trained neural network is trained by performing another branch-and-bound algorithm that uses full strong branching. (“Neural Branching learns a deep neural network to make variable selection decisions in branch-and-bound to bound the objective value gap with a small tree. This is done by imitating a new variant of Full Strong Branching we propose that scales to large instances using GPUs” [Abstract])

Regarding claim 14, Nair/Macready teaches The system of claim 1, where Nair teaches wherein the second trained machine learning model comprises a trained neural network. (“Our approach constructs two corresponding neural network-based components, Neural Diving and Neural Branching, to use in a base MIP solver such as SCIP” [Abstract])

Regarding claim 15, Nair/Macready teaches The system of claim 1, where Nair teaches wherein the second trained machine learning model comprises a generative model. (“The idea is to train a generative model over assignments to a MIP’s integer variables from which partial assignments can be sampled.” [pg. 14, §6.
Neural Diving])

Regarding claim 16, Nair/Macready teaches The system of claim 1, where Nair teaches wherein the second trained machine learning model is trained using training data that comprises a plurality of feasible solutions for combinatorial optimization problems considered during training. (“The model is trained to give higher probability to feasible assignments that have better objective values, using training examples collected offline with an off-the-shelf solver. It learns on all available feasible assignments instead of only the optimal ones, and does not necessarily require optimal assignments, which can be expensive to collect” [pg. 3, Neural Diving])

Regarding claim 17, Nair/Macready teaches The system of claim 1, where Nair teaches wherein the second trained machine learning model is trained using training data that comprises a plurality of feasible solutions for combinatorial optimization problems considered during training. (“The model is trained to give higher probability to feasible assignments that have better objective values, using training examples collected offline with an off-the-shelf solver. It learns on all available feasible assignments instead of only the optimal ones, and does not necessarily require optimal assignments, which can be expensive to collect” [pg. 3, Neural Diving])

Claim 18 recites features similar to claim 1 and is rejected for at least the same reasons therein. Claim 18 additionally requires A non-transitory computer-readable medium comprising instructions that, when executed by one or more digital hardware processors of a computing device, cause the computing device to perform operations… (Nair, “When using such networks with a high-dimensional edge embedding at every layer, their memory usage can become much higher than that of GCNs, which only need the adjacency matrix, and may not fit the GPU memory unless the number of layers is reduced at the cost of accuracy.
GCNs are a better fit for our goal of scaling to large-scale MIPs.” [pg. 10, 3.3., ¶2])

Regarding claim 19, it is substantially similar to claim 9 and is rejected in the same manner, applying the same art and reasoning.

Regarding claim 20, it is substantially similar to claims 1 and 18 and is rejected in the same manner, applying the same art and reasoning.

Conclusion

Any inquiry concerning this communication or earlier communications from the examiner should be directed to MICHAEL H HOANG whose telephone number is (571) 272-8491. The examiner can normally be reached Mon-Fri 8:30AM-4:30PM.

Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Kakali Chaki, can be reached at (571) 272-3719. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/MICHAEL H HOANG/
Examiner, Art Unit 2122
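The relative-gap diving criterion quoted under claims 4 and 5 (Nair, §2.4) amounts to a simple termination check on the primal and dual bounds. A sketch in Python follows; the normalization used is one common convention and is an assumption here, since Nair defers the exact definition to his §5 and solvers such as SCIP use their own variants:

```python
def relative_gap(primal_bound, dual_bound, eps=1e-9):
    """Relative primal-dual gap for a minimization problem.

    gap = primal bound - dual bound, normalized by the larger bound
    magnitude. This normalization is an illustrative assumption; Nair
    (section 5) and individual solvers each define their own variant.
    """
    gap = primal_bound - dual_bound
    denom = max(abs(primal_bound), abs(dual_bound), eps)
    return gap / denom

def should_terminate(primal_bound, dual_bound, tolerance):
    """Branch-and-bound stops once the relative gap is small enough
    (the 'application-dependent quantity' in Nair's description)."""
    return relative_gap(primal_bound, dual_bound) <= tolerance

# Example: primal bound 102, dual bound 100 gives a relative gap of
# 2/102, roughly 0.0196.
print(should_terminate(102.0, 100.0, tolerance=0.05))  # True
print(should_terminate(102.0, 100.0, tolerance=0.01))  # False
```

The best found primal solution at the time this check fires is reported as the approximately optimal solution.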

Prosecution Timeline

Feb 09, 2023
Application Filed
Dec 16, 2025
Non-Final Rejection — §101, §103, §112 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12518156
Training a Neural Network using Graph-Based Temporal Classification
2y 5m to grant Granted Jan 06, 2026
Patent 12468934
SYSTEMS AND METHODS FOR GENERATING DYNAMIC CONVERSATIONAL RESPONSES USING DEEP CONDITIONAL LEARNING
2y 5m to grant Granted Nov 11, 2025
Patent 12456115
METHODS, ARCHITECTURES AND SYSTEMS FOR PROGRAM DEFINED SYSTEMS
2y 5m to grant Granted Oct 28, 2025
Patent 12437211
System and Method for Predicting Fine-Grained Adversarial Multi-Agent Motion
2y 5m to grant Granted Oct 07, 2025
Patent 12430543
Structured Sparsity Guided Training In An Artificial Neural Network
2y 5m to grant Granted Sep 30, 2025
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 1-2
Grant Probability: 52%
With Interview: 77% (+25.9%)
Median Time to Grant: 4y 1m
PTA Risk: Low
Based on 136 resolved cases by this examiner. Grant probability derived from career allow rate.
