Prosecution Insights
Last updated: April 19, 2026
Application No. 17/886,339

AI-ASSISTED LINEAR PROGRAMMING SOLVER METHODS, SYSTEMS, AND MEDIA

Non-Final OA: §101, §103
Filed: Aug 11, 2022
Examiner: PHAKOUSONH, DARAVANH
Art Unit: 2121
Tech Center: 2100 — Computer Architecture & Software
Assignee: Huawei Technologies Canada Co. Ltd.
OA Round: 1 (Non-Final)
Grant Probability: 50% (Moderate)
OA Rounds: 1-2
To Grant: 4y 0m
With Interview: 99%

Examiner Intelligence

Career Allow Rate: 50% (grants 50% of resolved cases; 1 granted / 2 resolved; -5.0% vs TC avg)
Interview Lift: +100.0% (strong lift for resolved cases with interview)
Avg Prosecution (typical timeline): 4y 0m; 33 currently pending
Total Applications (career history): 35 across all art units

Statute-Specific Performance

§101: 31.2% (-8.8% vs TC avg)
§103: 38.1% (-1.9% vs TC avg)
§102: 14.8% (-25.2% vs TC avg)
§112: 13.2% (-26.8% vs TC avg)

Tech Center averages are estimates • Based on career data from 2 resolved cases

Office Action

§101 §103
Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Claim Objections

Claim 20 is objected to because of the following informalities: Claim 20 recites “obtain an linear programming (LP) problem definition,” which is grammatically incorrect. The article “an” should be replaced with “a.” This is a correctable matter of form. Appropriate correction is required.

Claim Rejections - 35 USC § 101

35 U.S.C. 101 reads as follows: Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.

Claims 1-20 are rejected under 35 U.S.C. 101 because the claimed invention is directed to a judicial exception (i.e., a law of nature, a natural phenomenon, or an abstract idea) without significantly more.

101 Subject Matter Eligibility Analysis

Step 1: Claims 1-20 are within the four statutory categories (a process, machine, manufacture or composition of matter).

Step 2A Prong One, Step 2A Prong Two, and Step 2B Analysis: Step 2A Prong One asks if the claim recites a judicial exception (abstract idea, law of nature, or natural phenomenon). If the claim recites a judicial exception, analysis proceeds to Step 2A Prong Two, which asks if the claim recites additional elements that integrate the abstract idea into a practical application. If the claim does not integrate the judicial exception, analysis proceeds to Step 2B, which asks if the claim amounts to significantly more than the judicial exception. If the claim does not amount to significantly more than the judicial exception, the claim is not eligible subject matter under 35 U.S.C. 101. None of the claims represent an improvement to technology.
Claims 1-19 are directed to a method consisting of a series of steps, meaning that they are directed to the statutory category of process. Claim 20 is directed to storage media, which are machines.

Regarding claim 1, the following claim elements are abstract ideas:

to perform a pricing step of a simplex algorithm with respect to LP problems of the predetermined type (This is an abstract idea of a mental process and a mathematical concept. The pricing step involves scoring and selecting variables based on numerical values in order to improve an objective function, which is a mathematical calculation. A person could compare variable scores and choose which value to use using observation and judgement, which can be performed in the human mind or with basic tools such as pen and paper or a calculator. See MPEP 2106.04(a)(2)(I) and 2106.04(a)(2)(III).); and

solving the LP problem by: generating an initial basis comprising a subset of the plurality of variables, the initial basis being designated as a current basis (This is an abstract idea of a mental process and a mathematical concept. It involves selecting a subset of numerical variables to form an initial starting set for a mathematical optimization. A person could choose which numbers to start with by reviewing the variables and selecting a group using judgement or simple calculations, which can be performed in the human mind or with basic tools.);

performing one or more iterations of the simplex algorithm on the current basis (This is an abstract idea of a mental process and a mathematical concept. The simplex algorithm is a mathematical procedure for repeatedly calculating and updating numerical values to improve a linear objective function. A person could carry out these iterative calculations and comparisons step-by-step using arithmetic and logical rules with pen and paper or a calculator.),

applying the simplex algorithm to the current basis to generate a set of values for the plurality of variables (This is an abstract idea of a “mathematical concept” and a “mental process.” It involves calculating numerical values for variables using a mathematical algorithm. A person could perform these calculations and determine the resulting values in the human mind or with basic tools such as pen and paper or a calculator.);

generating a value of the objective function based on the set of values for the plurality of variables (This is an abstract idea of a “mental process.” It involves computing a numerical result by applying a formula to a set of numbers. A person could perform this calculation in the human mind or with basic tools such as pen and paper or a calculator.); and

performing the pricing step of the simplex algorithm by processing the category data, using the pricing model, to generate an updated basis comprising a subset of the plurality of variables, the updated basis being designated as the current basis (This is an abstract idea of a mental process. It involves evaluating categorized numerical data using a mathematical model and selecting a new subset of values to use in the next calculation. A person could review the category of information, compare scores, and decide which values to select using observation and judgement in the human mind or with basic tools.).
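For readers outside the art, the “pricing step” at the center of this analysis is the classic entering-variable selection of the simplex method. The sketch below is our own illustration of the textbook Dantzig rule, not code from the application; the function and variable names are assumptions chosen for clarity:

```python
import numpy as np

def dantzig_pricing(c, A, basis):
    """Classic (Dantzig-rule) pricing step of the simplex method:
    score every non-basic variable by its reduced cost and pick the
    most negative one to enter the basis (minimization problem)."""
    B = A[:, basis]                         # columns of the current basis
    y = np.linalg.solve(B.T, c[basis])      # simplex multipliers
    reduced = c - A.T @ y                   # reduced costs for all columns
    nonbasic = [j for j in range(A.shape[1]) if j not in basis]
    entering = min(nonbasic, key=lambda j: reduced[j])
    # None signals that no improving variable exists: the basis is optimal
    return entering if reduced[entering] < -1e-9 else None
```

The Office Action's characterization tracks this structure: the scores are ordinary arithmetic, and the selection is a comparison of those scores.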
The following claim elements are additional elements which, taken alone or in combination with the other elements, do not integrate the judicial exception into a practical application nor amount to significantly more than the judicial exception: obtaining a linear programming (LP) problem definition defining a LP problem of a predetermined type, comprising: variable data specifying a plurality of variables; objective function data specifying an objective function of the plurality of variables; constraint data specifying a plurality of constraints, each constraint constraining a value of at least one of the plurality of variables; and category data specifying, for each variable of the plurality of variables, a respective category (The step of “obtaining” the LP problem definition is merely a generic data gathering operation, which has been recognized as a well-understood, routine, and conventional activity. See MPEP 2106.05(d)(II)(i). Further, reciting what the obtained data consists of (variables, objective functions, constraints, and categories) merely describes the content of the data and amounts to insignificant extra-solution activity that does not meaningfully limit the judicial exception.)

Regarding claim 2, the rejection of claim 1 is incorporated herein. Further, claim 2 recites the following abstract ideas: selecting one or more variables to be removed from the current basis to generate the updated basis (This is an abstract idea of a “mental process.” The limitation recites selecting which values to remove from a current set to form an updated set. This type of selection can be practically performed in the human mind using observation and judgement. For example, a person could review a list of variables, decide which ones should be removed based on their relative desirability, and identify those variables to form the updated set. Since it involves analysis and selection that can be carried out in the human mind, it falls within the mental process grouping of abstract ideas.).

Regarding claim 3, the rejection of claim 2 is incorporated herein. Further, claim 3 recites the following abstract ideas: using the pricing model to identify a category to pivot out; and selecting the one or more variables from the identified category (This is an abstract idea of a mental process. The limitation recites identifying a category based on numerical or logical criteria and then selecting values from that category. This type of categorization and selection can be practically performed in the human mind by reviewing the categories and deciding which group and which values should be chosen.).

Regarding claim 4, the rejection of claim 2 is incorporated herein. Further, claim 4 recites the following abstract ideas: for each variable of the current basis, a price score based at least in part on the category of the variable; and processing the price scores to select the one or more variables (This is an abstract idea of a mental process. The limitation recites assigning numerical scores to values based on their categories and comparing those scores to decide which values should be selected. This type of scoring, comparison, and selection can be practically performed in the human mind through observation, reasoning, and judgement by reviewing the categories, assigning scores, and choosing the highest or lowest scoring values, using basic tools such as pen and paper.). The following claim elements are additional elements which, taken alone or in combination with the other elements, do not integrate the judicial exception into a practical application nor amount to significantly more than the judicial exception: the pricing model (This is a high-level recitation of generic computer components for performing the abstract idea. See MPEP 2106.05(f).)

Regarding claim 5, the rejection of claim 1 is incorporated herein.
Further, claim 5 recites the following abstract ideas: select one or more variables of the plurality of variables for removal from the current basis; and select one or more variables of the plurality of variables for addition to the current basis (This is an abstract idea of a mental process. The limitation recites choosing which values should be removed from a current set and which values should be added to form a new set. This type of selection can be performed in the human mind through observation, reasoning, and judgement by reviewing the values and deciding which ones should be removed and which ones should be included.). The following claim elements are additional elements which, taken alone or in combination with the other elements, do not integrate the judicial exception into a practical application nor amount to significantly more than the judicial exception: processing the category data, using the pricing model, to (This limitation merely recites an instruction to apply the abstract idea to previously obtained data and does not provide any meaningful limitation. It simply directs generic processing of category data in conjunction with the judicial exception, which constitutes insignificant extra-solution activity.)

Regarding claim 6, the rejection of claim 5 is incorporated herein. Further, claim 6 recites the following abstract ideas: to generate, for each variable of the current basis, a price score based at least in part on the category of the variable (This is an abstract idea of a mental process. The limitation recites assigning numerical scores to values based on their classification. This type of scoring can be practically performed in the human mind through observation, reasoning, and judgement by reviewing the categories and assigning a score to each variable.); and select the one or more variables for removal from the current basis and select the one or more variables for addition to the current basis (This is an abstract idea of a mental process. The limitation recites choosing which values should be removed from the current set and which values should be added to form an updated set. This type of decision-making can be practically performed in the human mind through observation, reasoning, and judgement by reviewing the values and deciding which ones should be removed and which ones should be included.). The following claim elements are additional elements which, taken alone or in combination with the other elements, do not integrate the judicial exception into a practical application nor amount to significantly more than the judicial exception: using the pricing model to generate (These are mere instructions to apply the abstract idea using a generic model and do not add any meaningful limitation. It simply directs that the abstract idea be carried out with a pricing model, which constitutes an instruction to apply the judicial exception.); processing the price scores to (This limitation merely recites applying the abstract idea to previously generated numerical values and does not add any meaningful limitation.)

Regarding claim 7, the rejection of claim 1 is incorporated herein. Further, claim 7 recites the following abstract ideas: generating the initial basis as a custom basis based on a plurality of known optimal bases of LP problems of the predetermined type (This is an abstract idea of a mental process. The limitation recites reviewing prior solutions and selecting values for an initial set based on patterns observed in those prior solutions. This type of selection and comparison can be practically performed in the human mind through observation, reasoning, and judgement by examining previous results and choosing values that are likely to be effective.).

Regarding claim 8, the rejection of claim 7 is incorporated herein. Further, claim 8 recites the following abstract ideas: selecting the variables of the subset based on a statistical distribution among the plurality of categories of variables of the plurality of known optimal bases (This is an abstract idea of a mental process and a mathematical concept. The limitation recites using statistical calculations to determine how many values to select from each category and selecting those values accordingly. This type of statistical analysis and selection can be practically performed in the human mind through observation, reasoning, and judgement by analyzing prior distributions and choosing values based on those distributions.).

Regarding claim 9, the rejection of claim 1 is incorporated herein. Further, claim 9 recites the following abstract ideas: processing the LP problem definition…to generate the initial basis (This is an abstract idea of a mental process and a mathematical concept. The limitation recites analyzing mathematical relationships defined by variables, objective functions, and constraints of a linear programming problem to determine an initial set of values. This involves mathematical calculations and comparisons that can be performed in the human mind or with basic tools such as pen and paper or a calculator.)
The following claim elements are additional elements which, taken alone or in combination with the other elements, do not integrate the judicial exception into a practical application nor amount to significantly more than the judicial exception: generating the initial basis as a custom basis by: obtaining a custom basis generation model, trained using machine learning to generate a custom basis for a LP problem of the predetermined type (The step of “obtaining” a custom basis generation model is merely a generic data gathering operation, which has been recognized as a well-understood, routine, and conventional activity. See MPEP 2106.05(d)(II)(i). Further, reciting that the obtained model is “trained using machine learning” merely describes the type of tool used to apply the abstract idea and does not add any meaningful limitation. This amounts to an instruction to apply the judicial exception using a generic machine learning model and constitutes insignificant extra-solution activity.); and using the custom basis generation model (These are mere instructions to apply the abstract idea using a generic model and do not add any meaningful limitation.)

Regarding claim 10, the rejection of claim 9 is incorporated herein.
Further, claim 10 recites the following additional elements, which taken alone or in combination with other elements, do not integrate the judicial exception into a practical application nor amount to significantly more than the judicial exception: obtaining custom basis training data comprising a plurality of data samples, each data sample comprising: constraint data for a respective LP problem of the predetermined type; and an optimal basis for the respective LP problem; and training the custom basis generation model using supervised learning by, for each data sample: using the LP problem definition as an input to the custom basis generation model; and using the optimal basis as a training label (The step of “obtaining” training data is merely a generic data gathering operation, which has been recognized as a well-understood, routine, and conventional activity. See MPEP 2106.05(d)(II). Further, reciting what the training data comprises merely describes the content of the data and amounts to insignificant extra-solution activity that does not meaningfully limit the judicial exception. See MPEP 2106.05(g). Additionally, “training…using supervised learning” by providing inputs and labels merely instructs applying the abstract idea using a generic machine learning technique and does not add any meaningful limitation. Using an LP problem definition as input and an optimal basis as a training label constitutes an instruction to apply the judicial exception. See MPEP 2106.05(f).).

Regarding claim 11, the rejection of claim 1 is incorporated herein. Further, claim 11 recites the following abstract ideas: to perform the pricing step of the simplex algorithm in solving a plurality of LP problems of the predetermined type (This is an abstract idea of a mental process and a mathematical concept. The limitation recites performing a mathematical optimization step that involves numerical values and selecting variables according to a mathematical algorithm. Such calculations and selections can be carried out in the human mind or with basic tools such as pen and paper or a calculator.); and using a reward function based at least in part on a number of iterations of the simplex algorithm required to solve a given LP problem (This is an abstract idea of a mental process and a mathematical concept. The limitation recites counting iterations and applying a numerical function to evaluate performance. This involves mathematical calculations and comparisons that can be performed in the human mind or with basic tools such as pen and paper.). The following claim elements are additional elements which, taken alone or in combination with the other elements, do not integrate the judicial exception into a practical application nor amount to significantly more than the judicial exception: using the pricing model (These are mere instructions to apply the abstract idea using a generic model and do not add any meaningful limitation.)

Regarding claim 12, the rejection of claim 11 is incorporated herein. Further, claim 12 recites the following additional elements, which taken alone or in combination with other elements, do not integrate the judicial exception into a practical application nor amount to significantly more than the judicial exception: wherein the reward function is also based at least in part on the objective function (This limitation merely recites using an additional mathematical result to evaluate performance and does not meaningfully limit the judicial exception. It represents an insignificant extra-solution activity that appends further mathematical evaluation to the abstract idea of model training.).

Regarding claim 13, the rejection of claim 1 is incorporated herein.
Further, claim 13 recites the following additional elements, which taken alone or in combination with other elements, do not integrate the judicial exception into a practical application nor amount to significantly more than the judicial exception: obtaining pricing training data comprising a plurality of data samples, each data sample comprising: a LP problem definition for a respective LP problem of the predetermined type; and a current basis of the respective LP problem; and training the pricing model using supervised learning by, for each data sample: using the LP problem definition and current basis as inputs to the pricing model; and using a training label comprising an estimated optimal updated basis (The step of “obtaining” pricing training data is merely a generic data gathering operation, which has been recognized as well-understood, routine, and conventional activity. See MPEP 2106.05(d)(II). Further, reciting what the training data comprises merely describes the content of the data and amounts to insignificant extra-solution activity that does not meaningfully limit the judicial exception. See MPEP 2106.05(g). Additionally, training the pricing model using supervised learning by providing inputs and labels merely instructs applying the abstract idea using generic machine learning techniques and does not add any meaningful limitation. Using the LP problem definition and current basis as inputs and an estimated optimal updated basis as a training label constitutes an instruction to apply the judicial exception. See MPEP 2106.05(f).).

Regarding claim 14, the rejection of claim 13 is incorporated herein. Further, claim 14 recites the following additional elements, which taken alone or in combination with other elements, do not integrate the judicial exception into a practical application nor amount to significantly more than the judicial exception: wherein the estimated optimal updated basis is based on an expert opinion (This limitation merely specifies the source of the training label and does not meaningfully limit the judicial exception. Identifying that a label is based on an expert opinion constitutes insignificant extra-solution activity and does not add a technical feature or improve computer functionality. See MPEP 2106.05(g).).

Regarding claim 15, the rejection of claim 13 is incorporated herein. Further, claim 15 recites the following abstract ideas: the estimated optimal updated basis is generated using a simplex pricing heuristic constrained by a known optimal basis for the respective LP problem (The limitation is directed to an abstract idea. Applying a simplex pricing heuristic involves mathematical calculations and optimization techniques, which constitute a mathematical concept that can be performed in the human mind or with basic tools such as pen and paper or a calculator. Further, constraining the heuristic using a known optimal basis merely describes post-solution evaluation and label derivation, which amounts to insignificant extra-solution activity that does not meaningfully limit the judicial exception.).

Regarding claim 16, the rejection of claim 1 is incorporated herein.
Further, claim 16 recites the following additional elements, which taken alone or in combination with other elements, do not integrate the judicial exception into a practical application nor amount to significantly more than the judicial exception: wherein the category of a given variable is based on a respective source of the variable (This limitation merely specifies an attribute of previously obtained data, namely that variables are associated with categories according to their source. Describing how data is labeled or organized does not meaningfully limit the judicial exception and amounts to insignificant extra-solution activity.).

Regarding claim 17, the rejection of claim 1 is incorporated herein. Further, claim 17 recites the following additional elements, which taken alone or in combination with other elements, do not integrate the judicial exception into a practical application nor amount to significantly more than the judicial exception: wherein the category of a given variable is based on one or more constraints of the constraint data pertaining to the variable (This limitation merely specifies an attribute of previously obtained data, namely that the variables are associated with categories according to related constraint information. Describing how data is labeled or organized based on an existing constraint does not meaningfully limit the judicial exception and amounts to insignificant extra-solution activity.).

Regarding claim 18, the rejection of claim 1 is incorporated herein. Further, claim 18 recites the following abstract ideas: determining that an optimization condition has been satisfied (This is an abstract idea of a mental process. The limitation recites evaluating a condition and deciding whether a stopping criterion has been met. This type of evaluation can be practically performed in the human mind through observation and judgement by reviewing results and determining whether further improvement is possible.). The following claim elements are additional elements which, taken alone or in combination with the other elements, do not integrate the judicial exception into a practical application nor amount to significantly more than the judicial exception: outputting an optimal solution to the LP problem, comprising an optimal set of values for the plurality of variables corresponding to an optimal value of the objective function (This limitation merely recites outputting or presenting results of the abstract idea after the optimization has been performed. Outputting calculated results constitutes insignificant post-solution activity and does not meaningfully limit the judicial exception.).

Regarding claim 19, the rejection of claim 2 is incorporated herein. Further, claim 19 recites the following abstract ideas: generating the initial basis as a custom basis based on a plurality of known optimal bases of LP problems of the predetermined type (This is an abstract idea of a mental process. This limitation recites reviewing prior solutions and selecting values for an initial set based on mathematical relationships observed in those solutions. This involves comparing numerical patterns and making selections based on those comparisons, which can practically be performed in the human mind through observation, reasoning, and judgement, or with basic tools such as pen and paper or a calculator.); and determining that an optimization condition has been satisfied (This is an abstract idea of a “mental process.” The limitation recites evaluating a condition and deciding whether a stopping criterion has been met. This type of evaluation can be practically performed in the human mind through observation, reasoning, and judgement by reviewing results and determining whether further improvement is possible.). The following claim elements are additional elements which, taken alone or in combination with the other elements, do not integrate the judicial exception into a practical application nor amount to significantly more than the judicial exception: outputting an optimal solution to the LP problem, comprising an optimal set of values for the plurality of variables corresponding to an optimal value of the objective function (This limitation merely recites presenting or transmitting the results of the abstract idea after the optimization has been performed. Outputting calculated results constitutes insignificant post-solution activity and does not meaningfully limit the judicial exception.).

Regarding claim 20, claim 20 recites method steps similar to those recited in claim 1, implemented in the form of a non-transitory computer-readable medium having instructions tangibly stored thereon that, when executed by a processing system of a computing system, cause the computing system to perform the recited steps. Accordingly, the same subject matter analysis applied to claim 1, as described above, is equally applicable to claim 20, and claim 20 is rejected for similar reasons. The recited non-transitory computer-readable medium, computing system, and processing system merely constitute generic computer components for carrying out the recited method steps and do not amount to anything significantly more than the judicial exception.

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C.
102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action: A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1-7, 9-13 and 15-20 are rejected under 35 U.S.C. 103 as being unpatentable over Huang et al. (NPL: “Simplex Initialization: A Survey of Techniques and Trends” (Published: 2021)) in view of Anonymous authors (NPL: “DeepSimplex: Reinforcement Learning Of Pivot Rules Improves the Efficiency” (Published: 2020)).

Regarding claim 1, Huang discloses: A method comprising: obtaining a linear programming (LP) problem definition defining a LP problem of a predetermined type, comprising (Huang, [page 3, section 2.2.2] “Given a general LP problem, it can be formulated into the primal/standard form as: min c^T x, s.t. Ax = b, x ≥ 0, where c…are problem-dependent parameters, and x ∈ R^n is the decision variable.” – Under the broadest reasonable interpretation, a linear programming problem definition includes the mathematical specification of the decision variables, objective function, and constraints that define the optimization problem.
Huang discloses formulating a general linear programming problem into a standard form that defines an objective function, constraint equations, and decision variables, which together constitute a LP problem definition of a predetermined type suitable for simplex-based solution methods.);

variable data specifying a plurality of variables (Huang, [section 2.1.1] “x ∈ R^n is the decision variable.” – discloses a decision variable vector x belonging to an n-dimensional space, which inherently represents a plurality of variables. Each component of vector x corresponds to a respective variable whose value is determined during solution of the linear programming problem.);

objective function data specifying an objective function of the plurality of variables (Huang, [section 2.1.1] “min c^T x” – under BRI, objective function data includes data defining a mathematical function to be optimized with respect to the plurality of variables. Huang discloses an objective function in the form of min c^T x, which defines an optimization objective based on the decision variable vector x. Because the objective function is expressed as a function of the plurality of variables contained in x, Huang discloses objective function data specifying an objective function of the plurality of variables.);

constraint data specifying a plurality of constraints, each constraint constraining a value of at least one of the plurality of variables (Huang, [section 2.1.1] “s.t. Ax = b; x ≥ 0” – under BRI, constraint data includes data defining mathematical restrictions that limit permissible values of variables in a linear programming problem. Huang discloses equality constraints Ax = b and bound constraints x ≥ 0, which together constitute a plurality of constraints. Each constraint restricts the value of at least one component of the decision variable vector x, thereby specifying constraint data as claimed.); and

category data specifying, for each variable of the plurality of variables, a respective category (Huang, [section 5.1.2] “Each training data pair consists of the feature of a variable node and the label indicating whether its corresponding variable is a basic variable, i.e., the label is “1” if it is a basic variable, otherwise “0”.” – under BRI, category data includes data that classifies variables into different groups. Huang discloses assigning a label to each variable indicating whether the variable is a basic variable or a non-basic variable. These labels represent respective categories associated with each variable, thereby specifying category data for each variable of the plurality of variables.);

solving the LP problem by: generating an initial basis comprising a subset of the plurality of variables, the initial basis being designated as a current basis (Huang, [section 3.1.1] “The initialization methods in different primal simplex algorithms can be classified into three types. The first type is to generate an initial point or basis. The second is to obtain an improved point or basis based on a given point or basis. Then the improved one is utilized as the starting point of the following steps. The third type is to accelerate the calculation process of the first two types. In the following subsections, methods belonging to these three types will be investigated, respectively… Though the two-phase method can guarantee a feasible basic solution or evidence of infeasibility at Phase I, it introduces extra artificial variables, thus increasing the dimension, as well as the complexity of the problem.” – under BRI, a basis is formed from a subset of the plurality of variables. The reference discloses generating an initial basis and using the generated basis as the starting point of the following steps, which corresponds to designating the initial basis as the current basis.); and

performing one or more iterations of the simplex algorithm on the current basis, each iteration comprising (Huang, [section 3.3.1] “The quick simplex method can also be implemented to accelerate the pivoting of Phase II in the basic simplex method, or other simplex initialization methods with a similar iterative process.” – under BRI, “performing one or more iterations” refers to repeatedly executing steps of the simplex algorithm on a current basis. The reference expressly discloses an iterative process of the simplex method, including pivoting operations performed in Phase II of the basic simplex method, which inherently operate on the current basis during each iteration.):

applying the simplex algorithm to the current basis to generate a set of values for the plurality of variables (Huang, [section 4.2.1] “Form an auxiliary problem with respect to the current basis and compute x̄_B = −A_B^(−1) ∑_{j∈J} A… If x̄_B ≥ 0 … Apply dual simplex to compute the optimum of the original problem.” – under BRI, applying the simplex algorithm includes performing iterations of a primal or dual simplex method to update a basis. The reference discloses forming an auxiliary problem with respect to the current basis and applying one iteration of the modified dual simplex method, followed by applying dual simplex to compute the optimum.
Computing x - B and the resulting optimum inherently generates values for the decision variables corresponding to the basis, thereby generating a set of values for the plurality of variables.); generating a value of the objective function based on the set of values for the plurality of variables (Huang, [section 3.2.2] “Divide these k selected variables into two sets based on whether a change in the variable will result in an increase or decrease in the objective function.” – under BRI, generating a value of the objective function includes evaluating how the objective function responds to particular values of the decision variables. The reference discloses determining whether changes in selected variables result in an increase or decrease in the objective function, which necessarily requires evaluating the objective function based on the values of the plurality of variables.); and However, Huang does not teach but Huang in view of DeepSimplex teaches the following limitations: obtaining a pricing model trained, using machine learning, to perform a pricing step of a simplex algorithm with respect to LP problems of the predetermined type (DeepSimplex [section 4], “In each iteration in phase two of the simplex algorithm, we pass the reduced cost vector c - and the objective value to a fully connected ReLU neural network to estimate the Q-Value which decreases as expected weighted distance rises. Based on the Q-Value estimations, we choose a pivoting rule and iterate the simplex algorithm. The algorithm continues to choose a pivoting rule in each step until the simplex algorithm reaches an optimal basic feasible solution.” – under BRI, the pricing step of the simplex algorithm involves evaluating reduced costs to determine which variable enters the basis. The reference (DeepSimplex) discloses providing the reduced cost vector to train a neural network and selecting a pivoting rule at each simplex iteration, which determines how the basis is updated. 
Accordingly, the neural network (trained pricing model) performs the pricing step of the simplex algorithm.); performing the pricing step of the simplex algorithm by processing the category data, using the pricing model, to generate an updated basis comprising a subset of the plurality of variables, the updated basis being designated as the current basis (DeepSimplex, [section 4] “Every basic feasible solution of the LP has its own basis matrix B , reduced cost c - , and right-hand side b - . In each iteration in phase two of the simplex algorithm, we pass the reduced cost vector c - and the objective value to a fully connected ReLU neural network to estimate the Q-Value which decreases as expected weighted distance rises. Based on the Q-Value estimations, we choose a pivoting rule and iterate the simplex algorithm. The algorithm continues to choose a pivoting rule in each step until the simplex algorithm reaches an optimal basic feasible solution.” – discloses performing the pricing step of the simplex algorithm by processing reduced cost information using a trained neural network. The reduced cost vector and objective value are provided as inputs to the neural network, which is used to choose a pivoting rule at each iteration of the simplex algorithm. Selecting the pivoting rule updates the basis matrix B, which comprises a subset of the plurality of variables, and the updated basis is then used in the subsequent iteration as the current basis.). Accordingly, it would have been obvious to a person of ordinary skill in the art, before the effective filing date of the claimed invention, having Huang and DeepSimplex before them, to use a neural network trained using machine learning, as taught by DeepSimplex, to perform the pricing step of the simplex algorithm during the iterative LP solving process described by Huang. 
DeepSimplex teaches that, during each iteration of phase two of the simplex algorithm, reduced cost information and objective values associated with the current basis are provided to the neural network, and the neural network output is used to select a pivoting rule that determines which variables enter and leave the basis. One would have been motivated to make such a combination in order to use a neural network (AI) to assist the simplex algorithm at the pricing step by guiding pivot selection based on information associated with the current basis, rather than relying solely on predetermined rules. Using a trained neural network for pricing decisions would reduce computational overhead during repeated iterations of the simplex algorithm and improve overall solver efficiency, thereby enabling faster convergence toward an optimal solution. Regarding claim 2, Huang in view of DeepSimplex teaches all the elements of claim 1, therefore is rejected for the same reasons as those presented for claim 1. Huang in view of DeepSimplex further teaches: selecting one or more variables to be removed from the current basis to generate the updated basis (Huang, [page 8, section 3.1.1] “Select a column   j with ( A B - 1 A N ) i j ≠ 0 . Perform the pivoting with x j as the entering variable and the basic artificial variable in the i -th row as the leaving variable.” – under BRI, a variable “removed from the current basis” corresponds to the leaving variable identified during the pivot operation of the simplex algorithm. The reference discloses selecting a specific basic variable as the leaving variable during pivoting, which removes that variable from the current basis and generates an updated basis from the subsequent iteration.). Regarding claim 3, Huang in view of DeepSimplex teaches all the elements of claim 2, therefore is rejected for the same reasons as those presented for claim 2. 
Huang in view of DeepSimplex further teaches: using the pricing model to identify a category to pivot out; and selecting the one or more variables from the identified category (Huang, [section 5.1.2] “Each training data pair consists of the feature of a variable node and the label indicating whether its corresponding variable is a basic variable, i.e., the label is “1” if it is a basic variable, otherwise “0”…The trained neural network can be used to select basic variables for LPs.” DeepSimplex [section 4] “Every basic feasible solution of the LP has its own basis matrix B, reduced cost c - , and right-hand side b - . In each iteration in phase two of the simplex algorithm, we pass the reduced cost vector c - and the objective value to a fully connected ReLU neural network to estimate the Q-Value which decreases as expected weighted distance rises. Based on the Q-Value estimations, we choose a pivoting rule and iterate the simplex algorithm.” – under BRI, Huang provides explicit categories in form of labels assigned to variables and teaches selecting variables based on those labels. DeepSimplex provides the pricing model used during simplex iterations to guide pivoting decisions. Together, the disclose using a trained neural-network-based pricing model to identify a variable category (label) relevant to pivoting and selecting variables from that identified category during the simplex algorithm.). Regarding claim 4, Huang in view of DeepSimplex teaches all the elements of claim 2, therefore is rejected for the same reasons as those presented for claim 2. 
Huang in view of DeepSimplex further teaches: using the pricing model to generate, for each variable of the current basis, a price score based at least in part on the category of the variable; and processing the price scores to select the one or more variables (Huang, [section 5.1.2] “Each training data pair consists of the feature of a variable node and the label indicating whether its corresponding variable is a basic variable…The trained neural network can be used to select basic variables for LPs.” DeepSimplex, [section 4] “In each iteration in phase two of the simplex algorithm, we pass the reduced cost vector c - and the objective value to a fully connected ReLU neural network to estimate the Q-Value which decreases as expected weighted distance rises. Based on the Q-Value estimations, we choose a pivoting rule and iterate the simplex algorithm.” – under BRI, a “price score” corresponds to a numerical value generated by a pricing model to evaluate variables from selection during simplex algorithm. DeepSimplex discloses using a neural network to estimate Q-values from reduced cost information associated with variables in each simplex iteration. Huang further discloses that variables are associated with labels indicating their basis classification and that a trained neural network is used to select variables for linear programs. Together the references teach generating numerical evaluation values for variables based on their associated category information and processing those values to select one or more variables for updating the basis.). Regarding claim 5, Huang in view of DeepSimplex teaches all the elements of claim 1, therefore is rejected for the same reasons as those presented for claim 1. 
Huang in view of DeepSimplex further teaches: processing the category data (Huang, [section 5.1.2] “Each training data pair consists of the feature of a variable node and the label indicating whether its corresponding variable is a basic variable.”), using the pricing model (DeepSimplex, [section 4] “In each iteration in phase two of the simplex algorithm, we pass the reduced cost vector c - and the objective value to a fully connected ReLU neural network to estimate the Q-Value which decreases as expected weighted distance rises. Based on the Q-Value estimations, we choose a pivoting rule and iterate the simplex algorithm.”), to: select one or more variables of the plurality of variables for removal from the current basis (Huang, [section 3.1.1] “Perform the pivoting with x j as the entering variable and the basic artificial variable in the i -th row as the leaving variable.” – under BRI, a variable “removed from the current basis” corresponds to the leaving variable selected during the pivot operation of the simplex algorithm. The reference discloses selecting a basic variable as the leaving variable during pivoting, which removes that variable form the current basis to form an updated basis.); and select one or more variables of the plurality of variables for addition to the current basis (Huang, [section 3.1.1] “Perform the pivoting with x j as the entering variable and the basic artificial variable in the i -th row as the leaving variable.” – under BRI, selecting a variable “for addition to the current basis” corresponds to selecting an entering variable during a pivot operation of the simplex algorithm. The reference discloses performing pivoting with x j as the entering variable, which adds x j into the current basis to generate an updated basis for the next iteration.). Regarding claim 6, Huang in view of DeepSimplex teaches all the elements of claim 5, therefore is rejected for the same reasons as those presented for claim 5. 
Huang in view of DeepSimplex further teaches: using the pricing model to generate, for each variable of the current basis, a price score based at least in part on the category of the variable (DeepSimplex [section 4] ““In each iteration in phase two of the simplex algorithm, we pass the reduced cost vector c - and the objective value to a fully connected ReLU neural network to estimate the Q-Value which decreases as expected weighted distance rises. Based on the Q-Value estimations, we choose a pivoting rule and iterate the simplex algorithm.” – under BRI, a “price score” corresponds to a numerical value generated by a pricing model to evaluate variables for selection during the simplex algorithm. DeepSimplex discloses using a neural network to estimate Q-values from reduced cost information associated with variables in each simplex iteration. These Q-values are processed to guide pivoting decisions, which determine which variables are selected for removal and addition based on numerical scores generated by a pricing model.); and processing the price scores to select the one or more variables for removal from the current basis and select the one or more variables for addition to the current basis (DeepSimplex [section 4] “Based on the Q-Value estimations, we choose a pivoting rule and iterate the simplex algorithm. The algorithm continues to choose a pivoting rule in each step until the simplex algorithm reaches an optimal basic feasible solution.” Huang, [section 3.1.1] “Perform the pivoting with x j as the entering variable and the basic artificial variable in the i -th row as the leaving variable.”- under BRI, processing price scores to make a pivoting decision constitutes selecting variables for basis update. DeepSimplex discloses processing pricing outputs (Q-value estimations) to choose a pivoting behavior. 
Accordingly, processing the pricing model outputs to determine pivoting selects one or more variables for removal from the current basis and selects one or more variables for addition to the current basis.). Regarding claim 7, Huang in view of DeepSimplex teaches all the elements of claim 1, therefore is rejected for the same reasons as those presented for claim 1. Huang in view of DeepSimplex further teaches: generating the initial basis as a custom basis based on a plurality of known optimal bases of LP problems of the predetermined type (Huang, [page 7, section 2.4] “By starting from a point (a vertex for the simplex methods and an interior point for IPMs) yielded from the solving process of the original problem, it is expected that fewer steps/iterations are usually required to solve the new modified problem since the obtained start point could be very close to an optimal point. This strategy is called “warm start”.” – under the broadest reasonable interpretation, an “initial basis” includes a starting solution used to initialize execution of the simplex algorithm. The reference expressly discloses starting a new linear programming problem using an optimal vertex yielded from the solving process of the original problem. An optimal vertex corresponds to an optimal basis of a previously solved LP problem. Using such optimal vertices obtained from prior LP problems to initialize subsequent problems constitutes generating a custom initial basis based on known optimal basis of LP problems having similar structure, i.e., LP problems of a predetermined type.). Regarding claim 9, Huang in view of DeepSimplex teaches all the elements of claim 1, therefore is rejected for the same reasons as those presented for claim 1. 
Huang in view of DeepSimplex further teaches: generating the initial basis as a custom basis by: obtaining a custom basis generation model, trained using machine learning to generate a custom basis for a LP problem of the predetermined type; and processing the LP problem definition, using the custom basis generation model, to generate the initial basis (Huang, [section 5.1.2] “The main purpose of this subsection is to provide a classification mechanism based on a deep neural network, which can divide variables into basic variables and non-basic variables. The input of the neural network is the feature of each variable node, which can be obtained through graph embedding. The output is the probability that the corresponding variable should be selected as a basic variable.” – Huang teaches using a deep neural network, which is a machine learning model, to classify variables as basic or non-basic by processing features derived from the linear programming problem through graph embedding. Selecting which variables are designated as basic variables corresponds to generating an initial basis under the broadest reasonable interpretation. Because the basis is generated by a trained mode based on characteristics of the LP problem rather than the default basis, the resulting basis is a custom basis. Accordingly, Huang teaches generating an initial basis by processing the LP problem definition using a machine learning model, as recited.). Regarding claim 10, Huang in view of DeepSimplex teaches all the elements of claim 9, therefore is rejected for the same reasons as those presented for claim 9. Huang in view of DeepSimplex further teaches: obtaining custom basis training data comprising a plurality of data samples, each data sample comprising: constraint data for a respective LP problem of the predetermined type; and an optimal basis for the respective LP problem (Huang, [section 5.1.2] “To train such a neural network, enough training data pairs are required. 
Each training data pair consists of the feature of a variable node and the label indicating whether its corresponding variable is a basic variable, i.e., the label is “1” if it is a basic variable, otherwise “0”. As we have mentioned before, the features can be obtained by graph embedding. However, how to obtain the labels of variable nodes can be a tricky problem. One approach to obtain the labels is to exactly solve the LP problem, then the type (basic/non-basic) of the variable when reaching the optimality can serve as its label.” – Huang teaches training the neural network using multiple training data pairs, each corresponding to a solved LP problem. The features are obtained from graph embedding of the LP problem, which reflects constraint data of the LP problem, and the labels are obtained by exactly solving the LP problem such that the variables that are basic at optimality serve as labels. Under BRI, the set of variables that are basic at optimality corresponds to an optimal basis. Accordingly, Huang teaches obtaining a custom basis training data comprising constraint data and an optimal basis for respective LP problems, as recited.); training the custom basis generation model using supervised learning by, for each data sample: using the LP problem definition as an input to the custom basis generation model; and using the optimal basis as a training label (Huang, [section 5.1.2] “To train such a neural network, enough training data pairs are required. Each training data pair consists of the feature of a variable node and the label indicating whether its corresponding variable is a basic variable, i.e., the label is “1” if it is a basic variable, otherwise “0”. 
As we have mentioned before, the features can be obtained by graph embedding…One approach to obtain the labels is to exactly solve the LP problem, then the type (basic/non-basic) of the variable when reaching the optimality can serve as its label.” – teaches supervised learning by using training data pairs comprising inputs and corresponding labels. The features obtained by graph embedding of the LP problem correspond to using the LP problem definition as input to the model, and the labels indicating whether a variable is basic at optimality correspond to using the optimal basis as a training label. Accordingly, Huang teaches training the model using supervised learning by providing LP problem information as input and using the optimal basis as training labels, as recited.). Regarding claim 11, Huang in view of DeepSimplex teaches all the elements of claim 1, therefore is rejected for the same reasons as those presented for claim 1. Huang in view of DeepSimplex further teaches: wherein the pricing model is trained using reinforcement learning by: using the pricing model to perform the pricing step of the simplex algorithm in solving a plurality of LP problems of the predetermined type; and using a reward function based at least in part on a number of iterations of the simplex algorithm required to solve a given LP problem (DeepSimplex, [section 4] “The focus of this study is learning a pivoting rule for phase two of the simplex algorithm… In each iteration in phase two of the simplex algorithm, we pass the reduced cost vector c - and the objective value to a fully connected ReLU neural network to estimate the Q-Value… Based on the Q-Value estimations, we choose a pivoting rule and iterate the simplex algorithm.” [section 5] “Choice of metric: We minimize the total number of weighted simplex iterations… Reward function: We denote T   as the maximum number of unweighted iterations, which is taken as 28 for our experiments to limit the size of Q-values without cutting 
off a significant portion of paths to optimal solutions… Then the reward, denoted as R ( s t , a t ) ” – teaches training a fully connected ReLU neural network using reinforcement learning to learn a pivoting rule for phase two of the simplex algorithm. The neural network is used during simplex iterations to estimate Q-values and, based on those estimates, selecting a pivoting rule, which corresponds to performing the pricing step of the simplex algorithm while solving a plurality of LP problems. DeepSimplex further defines a “Reward function” and teaches minimizing the total number of weighted simplex iterations, such that the reward is based at least in part on the number of simplex iterations required to solve a given LP problem. Accordingly, DeepSimplex teaches the claimed reinforcement-learning training and iteration-based reward in the manner recited.). Regarding claim 12, Huang in view of DeepSimplex teaches all the elements of claim 11, therefore is rejected for the same reasons as those presented for claim 11. Huang in view of DeepSimplex further teaches: wherein the reward function is also based at least in part on the objective function (DeepSimplex, [section 5] “We denote T   as the maximum number of unweighted iterations… l ' ( s t ) as the objective value before the action is performed, l ' ( s t , a t ) as the objective after the action is performed, l * as the optimal value. Then the reward, denoted as R ( s t , a t ) , at iteration t is:” – teaches defining the reward using objective values of the linear programming problem, including the objective value before an action is performed and the objective value after the action is performed. Accordingly, the reward function is based at least in part on the objective function, as recited.). Regarding claim 13, Huang in view of DeepSimplex teaches all the elements of claim 1, therefore is rejected for the same reasons as those presented for claim 1. 
Huang in view of DeepSimplex further teaches: wherein the pricing model is trained by: obtaining pricing training data comprising a plurality of data samples, each data sample comprising: a LP problem definition for a respective LP problem of the predetermined type; and a current basis of the respective LP problem; and training the pricing model using supervised learning by, for each data sample: using the LP problem definition and current basis as inputs to the pricing model; and using a training label comprising an estimated optimal updated basis (DeepSimplex, [section 4] “Every basic feasible solution of the LP has its own basis matrix B, reduced cost c ̅, and right-hand side b ̅. In each iteration in phase two of the simplex algorithm, we pass the reduced cost vector c ̅ and the objective value to a fully connected ReLU neural network to estimate the Q-Value…Based on the Q-Value estimations, we choose a pivoting rule and iterate the simplex algorithm.” [section 5.1] “For each LP instance, at each iteration of the simplex algorithm, a random choice of action is taken. The tableaux for that LP are stored and sorted into batches of the chosen batch size. Then the neural network is trained on this data set using supervised learning with Q*-values.” Huang, [section 5.1.2] “To train such a neural network, enough training data pairs are required. Each training data pair consists of the feature of a variable node and the label indicating whether its corresponding variable is a basic variable… One approach to obtain the labels is to exactly solve the LP problem, then the type (basic/non-basic) of the variable when reaching the optimality can serve as its label.” – DeepSimplex teaches obtaining pricing training data from a plurality of LP instances by storing simplex tableaux at each iteration and using this data to train a neural network. 
The simplex tableau represents the current basis state of the LP problem, and then the LP instance corresponds to the LP problem definition. DeepSimplex further teaches training the neural network using supervised learning based on these solver states. Huang teaches generating supervised training labels by exactly solving the LP problem and using the resulting optimal basis information, namely whether variables are basic or non-basic at optimality. Under the broadest reasonable interpretation, such labels correspond to an estimated optimal updated basis. Accordingly, the combined teachings of DeepSimplex and Huang discloses training a pricing model using supervised learning based on LP problems, current basis states, and optimal-basis-derived training labels.). It would have been obvious to a person of ordinary skill in the art to apply Huang’s optimal-basis labelling technique to the simplex-iteration training data of DeepSimplex in order to train the neural network using supervised learning, representing a predictable combination of known-solver-based training data with known-label generation techniques. Regarding claim 15, Huang in view of DeepSimplex teaches all the elements of claim 13, therefore is rejected for the same reasons as those presented for claim 13. Huang in view of DeepSimplex further teaches: the estimated optimal updated basis is generated using a simplex pricing heuristic constrained by a known optimal basis for the respective LP problem (DeepSimplex, [Introduction] “here we focus on learning pivot rules for the simplex algorithm for solving LP instances. 
In particular, we learn new pivoting rule policies that combine existing hand-designed heuristics by training on large data sets of LP relaxations of randomly generated instances of the Traveling Salesman Problem (TSP)… The resultant policy decides when to switch between the two rules based on the LP instance objective value and reduced costs at that time.” Huang, [section 5.1.2] “One approach to obtain the labels is to exactly solve the LP problem, then the type (basic/non-basic) of the variable when reaching the optimality can serve as its label.” – DeepSimplex teaches generating basis update decisions using simplex pivoting rules, which constitute pricing heuristics based on reduced cost and objective value. Huang teaches obtaining a known optimal basis for an LP problem by exactly solving the LP and identifying variables that are basic at optimality. When basis update decisions are learned or guided using such optimal-basis-derived labels, the resulting estimated update bases are constrained by the known optimal basis. Accordingly, the combined teachings disclose generating an estimated optimal updated basis using a simplex pricing heuristic constrained by a known optimal basis for the respective LP problem.). Regarding claim 16, Huang in view of DeepSimplex teaches all the elements of claim 1, therefore is rejected for the same reasons as those presented for claim 1. Huang in view of DeepSimplex further teaches: wherein the category of a given variable is based on a respective source of the variable (Huang, [section 5.1.1] “In the graph, one partition has n (variable) nodes, which represent the n   variables to be optimized, and the other has m (constraint) nodes, which represent the   m constraints in the standard form of LP. If a variable appears in a constraint, there will exist an edge between the corresponding variable node and constraint node, and the edge is weighted by the corresponding entries of the matrix A . 
The objective coefficients { c 1 , … , c n } , the right-hand side of the constraints   { b 1 , … , b m } , and the non-zero entries of the matrix A can be utilized as scalar “features” of the variable nodes, the constraint nodes, and the edges, respectively.”- Huang teaches that each variable is represented as a variable node whose characteristics are derived from specific components of the LP formulation, including objective coefficients and constraint-related data. Under BRI, these components constitute respective sources of the variable. Categorizing variables based on features derived from these distinct sources therefore corresponds to categorizing a variable based on its respective source.). Regarding claim 17, Huang in view of DeepSimplex teaches all the elements of claim 1, therefore is rejected for the same reasons as those presented for claim 1. Huang in view of DeepSimplex further teaches: wherein the category of a given variable is based on one or more constraints of the constraint data pertaining to the variable (Huang, [section 5.1.2] “Each training data pair consists of the feature of a variable node and the label indicating whether its corresponding variable is a basic variable, i.e., the label is “1” if it is a basic variable, otherwise “0”…One approach to obtain the labels is to exactly solve the LP problem, then the type (basic/non-basic) of the variable when reaching the optimality can serve as its label.” – In linear programming, whether a variable is basic or non-basic is determined by constraint equations defining the LP solution. Huang teaches categorizing variables using labels that identify whether a variable is basic or non-basic at optimality. Under BRI, this categorization is based on how the variable participates in the constraint system of the LP problem.). Regarding claim 18, Huang in view of DeepSimplex teaches all the elements of claim 1, therefore is rejected for the same reasons as those presented for claim 1. 
Huang in view of DeepSimplex further teaches: determining that an optimization condition has been satisfied; and outputting an optimal solution to the LP problem, comprising an optimal set of values for the plurality of variables corresponding to an optimal value of the objective function (DeepSimplex, [section 4] “The focus of this study is learning a pivoting rule for phase two of the simplex algorithm, where the algorithm starts from a basic feasible solution and finds a path to an optimal solution by traveling to a neighboring basic feasible solution in each iteration… Every basic feasible solution of the LP has its own basis matrix B, reduced cost c̄, and right-hand side b̄… The algorithm continues to choose a pivoting rule in each step until the simplex algorithm reaches an optimal basic feasible solution.” – DeepSimplex teaches iteratively performing simplex operations until the algorithm reaches an “optimal basic feasible solution,” which corresponds to determining that an optimization condition has been satisfied. An optimal basic feasible solution inherently includes the corresponding basis matrix and associated variable values, which together define the optimal solution of the LP problem. Under BRI, producing this optimal basic feasible solution constitutes outputting an optimal set of values for the plurality of variables corresponding to an optimal value of the objective function.).

Regarding claim 19, Huang in view of DeepSimplex teaches all the elements of claim 2; claim 19 is therefore rejected for the same reasons as those presented for claim 2. Huang in view of DeepSimplex further teaches: generating the initial basis comprises: generating the initial basis as a custom basis based on a plurality of known optimal bases of LP problems of the predetermined type (Huang, [section 5.1.2] “To train such a neural network, enough training data pairs are required.
Each training data pair consists of the feature of a variable node and the label indicating whether its corresponding variable is a basic variable… One approach to obtain the labels is to exactly solve the LP problem, then the type (basic/non-basic) of the variable when reaching the optimality can serve as its label.” – Huang teaches obtaining known optimal bases by exactly solving LP problems and identifying which variables are basic or non-basic at optimality. These optimal bases are collected across multiple LP problems and used as training data to guide the model’s predictions regarding which variables should be basic. Under BRI, generating an initial basis based on learned patterns derived from a plurality of known optimal bases constitutes generating a custom basis based on known optimal bases of LP problems of the predetermined type.); determining that an optimization condition has been satisfied; and outputting an optimal solution to the LP problem, comprising an optimal set of values for the plurality of variables corresponding to an optimal value of the objective function (DeepSimplex, [section 4] “The focus of this study is learning a pivoting rule for phase two of the simplex algorithm, where the algorithm starts from a basic feasible solution and finds a path to an optimal solution by traveling to a neighboring basic feasible solution in each iteration… Every basic feasible solution of the LP has its own basis matrix B, reduced cost c̄, and right-hand side b̄… The algorithm continues to choose a pivoting rule in each step until the simplex algorithm reaches an optimal basic feasible solution.” – DeepSimplex teaches iteratively performing simplex operations until the algorithm reaches an “optimal basic feasible solution,” which corresponds to determining that an optimization condition has been satisfied.
An optimal basic feasible solution inherently includes the corresponding basis matrix and associated variable values, which together define the optimal solution of the LP problem. Under BRI, producing this optimal basic feasible solution constitutes outputting an optimal set of values for the plurality of variables corresponding to an optimal value of the objective function.).

Regarding claim 20, the claim recites similar limitations corresponding to the method of claim 1 and is rejected for similar reasons using the same teachings and rationale discussed above with respect to claim 1. With respect to the additional limitation reciting “a non-transitory computer-readable medium having instructions tangibly stored thereon that, when executed by a processing system of a computing system, cause the computing system to,” this limitation is inherent, as the cited references disclose computer-implemented linear programming solvers and machine-learning-assisted optimization techniques executed by a processing system, which necessarily require executable instructions stored on a non-transitory computer-readable medium in order to perform the disclosed operations.

Claims 8 and 14 are rejected under 35 U.S.C. 103 as being unpatentable over Huang et al. (NPL: “Simplex Initialization: A Survey of Techniques and Trends” (Published: 2021)) in view of Anonymous authors (NPL: “DeepSimplex: Reinforcement Learning Of Pivot Rules Improves the Efficiency” (Published: 2020)) further in view of Khalil et al. (NPL: “MIP-GNN: A Data-Driven Framework for Guiding Combinatorial Solvers” (Published: June 28, 2022)).

Regarding claim 8, Huang in view of DeepSimplex teaches all the elements of claim 7; claim 8 is therefore rejected for the same reasons as those presented for claim 7.
Huang in view of DeepSimplex does not teach, but Huang in view of DeepSimplex further in view of Khalil teaches, the following limitation: selecting the variables of the subset based on a statistical distribution among the plurality of categories of variables of the plurality of known optimal bases (Khalil, [pages 10223 and 10224] “the main data collection step is to estimate the variable biases… we must collect a set of high-quality feasible solutions… we let CPLEX spend 60 minutes in total to construct this solution pool for each instance… The variable biases are calculated according to Eq. (2).” – Under BRI, a statistical distribution includes numerical values derived from analyzing a plurality of solutions. The reference explicitly discloses collecting a solution pool of up to 1000 feasible solutions and calculating variable biases from these solutions. Because these biases are computed across many solutions, they represent a statistical distribution describing how variables behave across a plurality of known high-quality solutions. Selecting variables based on these bias values therefore corresponds to selecting a subset of variables based on a statistical distribution among categories of variables derived from known optimal or near-optimal solutions.).

Accordingly, it would have been obvious to a person of ordinary skill in the art, before the effective filing date of the claimed invention, having a combination of Huang, DeepSimplex, and Khalil before them, to incorporate the use of statistical information derived from prior solution biases, as taught by Khalil, into the AI-assisted linear programming solver of Huang and DeepSimplex. One would have been motivated to make such a combination in order to improve the efficiency of the simplex method by leveraging information learned from a plurality of known optimal or near-optimal solutions when generating a custom initial basis and performing variable selection during the pricing step.
This would allow more efficient convergence of the linear programming solver by reducing unnecessary pivot operations and guiding variable selection using historical solution behavior.

Regarding claim 14, Huang in view of DeepSimplex teaches all the elements of claim 13; claim 14 is therefore rejected for the same reasons as those presented for claim 13. Huang in view of DeepSimplex does not teach, but Huang in view of DeepSimplex further in view of Khalil teaches: optimal updated basis is based on an expert opinion (Khalil, [page 10225] “as CPLEX has been developed and tuned over three decades by MIP experts, i.e., it can be considered a very sophisticated human-learned solver” – Khalil teaches using solver outputs generated by CPLEX, whose optimization behavior is derived from expert-developed heuristics created by MIP experts. Under the broadest reasonable interpretation, optimization decisions and labels derived from such expert-developed solver behavior correspond to estimates based on expert opinion. Accordingly, Khalil teaches that the estimated optimization decisions, including basis-related updates, may be based on expert opinion.).

Conclusion

Any inquiry concerning this communication or earlier communications from the examiner should be directed to Daravanh Phakousonh, whose telephone number is (571) 272-6324. The examiner can normally be reached Mon - Thurs 7 AM - 5 PM, and every other Friday 7 AM - 4 PM. Examiner interviews are available via telephone, in person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Li B. Zhen, can be reached at 571-272-3768. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/Daravanh Phakousonh/
Examiner, Art Unit 2121

/Li B. Zhen/
Supervisory Patent Examiner, Art Unit 2121
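As background for the Huang passage the rejection quotes for claims 16 and 17 (section 5.1.1), which represents a standard-form LP as a bipartite graph of variable nodes and constraint nodes with edges weighted by non-zero entries of A, the construction can be sketched as follows. This is a minimal illustrative sketch only, not the applicant's claimed method or code from the cited references; all names and the example LP are hypothetical.

```python
import numpy as np

def lp_to_bipartite(c, A, b):
    """Bipartite-graph features for a standard-form LP (per Huang sec. 5.1.1):
    variable nodes carry the objective coefficients c, constraint nodes carry
    the right-hand sides b, and each non-zero A[i, j] becomes a weighted edge
    between constraint node i and variable node j."""
    A = np.asarray(A, dtype=float)
    var_feats = list(np.asarray(c, dtype=float))   # one scalar feature per variable node
    con_feats = list(np.asarray(b, dtype=float))   # one scalar feature per constraint node
    edges = {(i, j): A[i, j]                       # (constraint, variable) -> edge weight
             for i in range(A.shape[0])
             for j in range(A.shape[1])
             if A[i, j] != 0.0}
    return var_feats, con_feats, edges

# Tiny illustrative LP: min -3*x1 - 5*x2 with slack variables s1..s3 (columns 2..4).
c = [-3, -5, 0, 0, 0]
A = [[1, 0, 1, 0, 0],
     [0, 2, 0, 1, 0],
     [3, 2, 0, 0, 1]]
b = [4, 12, 18]
var_feats, con_feats, edges = lp_to_bipartite(c, A, b)
```

Each variable that appears in a constraint yields exactly one edge, so a GNN operating on this graph sees the full (c, A, b) data, which is what lets Huang's model predict per-variable basic/non-basic labels.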
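The DeepSimplex and Huang passages quoted in the rejection turn on two textbook computations: the simplex optimality condition (a feasible basic solution with nonnegative reduced costs, for minimization) and labeling each variable basic or non-basic at optimality. A minimal sketch of both, assuming a standard-form minimization LP and a candidate basis; the function name and example LP are illustrative, not taken from the application or the cited references.

```python
import numpy as np

def check_basis(c, A, b, basis):
    """Given basis column indices `basis`: solve for the basic solution x_B,
    compute reduced costs c_bar = c - y^T A (with y solving y^T A_B = c_B),
    test the simplex optimality condition x_B >= 0 and c_bar >= 0
    (minimization), and emit Huang-style 0/1 basic-variable labels."""
    c, A, b = np.asarray(c, float), np.asarray(A, float), np.asarray(b, float)
    A_B = A[:, basis]
    x_B = np.linalg.solve(A_B, b)         # values of the basic variables
    y = np.linalg.solve(A_B.T, c[basis])  # simplex multipliers (dual values)
    c_bar = c - y @ A                     # reduced costs (zero on basic columns)
    is_optimal = bool(np.all(x_B >= -1e-9) and np.all(c_bar >= -1e-9))
    labels = [1 if j in basis else 0 for j in range(A.shape[1])]
    return x_B, c_bar, is_optimal, labels

# Classic textbook LP: min -3*x1 - 5*x2 with slacks s1..s3;
# the optimal basis is {x1, x2, s1} = columns [0, 1, 2].
c = [-3, -5, 0, 0, 0]
A = [[1, 0, 1, 0, 0],
     [0, 2, 0, 1, 0],
     [3, 2, 0, 0, 1]]
b = [4, 12, 18]
x_B, c_bar, is_optimal, labels = check_basis(c, A, b, [0, 1, 2])
```

When `is_optimal` is true, `labels` is exactly the 0/1 training target Huang derives by solving the LP exactly, and phase two of simplex (DeepSimplex, section 4) can be read as repeatedly pivoting until this condition holds.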

Prosecution Timeline

Aug 11, 2022: Application Filed
Jan 21, 2026: Non-Final Rejection — §101, §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12572821
ACCURACY PRIOR AND DIVERSITY PRIOR BASED FUTURE PREDICTION
Granted Mar 10, 2026 (2y 5m to grant)
Study what changed to get past this examiner. Based on the 1 most recent grant.


Prosecution Projections

Expected OA Rounds: 1-2
Grant Probability: 50%
With Interview: 99% (+100.0%)
Median Time to Grant: 4y 0m
PTA Risk: Low
Based on 2 resolved cases by this examiner. Grant probability derived from career allow rate.
