DETAILED ACTION
This Office action is in response to the submission of the application on 06/28/2023.
Claims 1-20 are presented for examination.
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Information Disclosure Statement
The information disclosure statement (IDS) submitted on 09/30/2025 is in compliance with the provisions of 37 CFR 1.97. Accordingly, the information disclosure statement is being considered by the examiner.
Claim Rejections - 35 USC § 112
The following is a quotation of 35 U.S.C. 112(b):
(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.
The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:
The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.
Claims 2, 4, 12, 15, and 17-19 are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or for applications subject to pre-AIA 35 U.S.C. 112, the applicant), regards as the invention.
Claims 2, 4, 12, 17, and 18 refer to nodes as “non-zero”. Outside the context of an inventory optimization problem, which is not required by the claims, it is unclear what it would mean for a node to be “non-zero”. For examination purposes, a non-zero node will be interpreted as a node which is relevant to a solution of a multi-node problem.
Claim 15 recites the limitation "the historical dataset" in line 1. There is insufficient antecedent basis for this limitation in the claim. For examination purposes, claim 15 will be interpreted as mirroring the content of claim 8, which includes a historical dataset and lacks the issue of antecedent basis.
Claim 19 recites the limitation “the one or more updated solutions” in line 2. There is insufficient antecedent basis for this limitation in the claim. For examination purposes, the limitation will be interpreted as referring to the one or more solutions introduced in claim 16.
Claim Rejections - 35 USC § 101
35 U.S.C. 101 reads as follows:
Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.
Claims 1-20 are rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more.
Claim 1:
Step 1: The claim is directed to a method, which falls within the statutory category of a process.
Step 2A Prong 1: The claim is directed to an abstract idea. Specifically, the claim recites:
determining, [by the computing system using the machine learning model], a subset of nodes of the plurality of nodes based at least in part on the respective node features; (Abstract idea – mental process. Determining a subset of nodes based on node features can practically be performed in the human mind or with the aid of pen and paper, for example, by viewing the node features on a display, mentally determining, based on the node features, which nodes are likely to be relevant to a solution, and mentally selecting a subset of relevant nodes. The courts have recognized that claims can recite a mental process even if they are claimed as being performed on a computer. See MPEP 2106.04(a)(2)(III).)
calculating, [by the computing system], one or more solutions to the multi-node problem based at least in part on the subset of nodes; (Abstract idea – mental process. Calculating a solution to the multi-node problem based on the subset of nodes can practically be performed in the human mind or with the aid of pen and paper, for example, by mentally determining values for each of the subset of nodes which satisfy the objective. See MPEP 2106.04(a)(2)(III).)
Step 2A Prong 2: The additional elements recited in the claim do not integrate the abstract idea into a practical application, individually or in combination. Specifically, the claim recites the additional elements:
accessing, by a computing system, a multi-node problem comprising a plurality of nodes, each respective node having one or more node features; (Accessing the multi-node problem on a generic computing system amounts to adding the words “apply it” (or an equivalent) with the judicial exception, or mere instructions to implement an abstract idea on a computer, or merely using a computer as a tool to perform an abstract idea – see MPEP 2106.05(f).)
providing, by the computing system, each respective node with each respective node feature to a machine learning model; (Providing features to a generic machine learning model is standard in the field of machine learning, and thus amounts to adding the words “apply it” (or an equivalent) with the judicial exception, or mere instructions to implement an abstract idea on a computer, or merely using a computer as a tool to perform an abstract idea – see MPEP 2106.05(f).)
determining the subset of nodes by the computing system using the machine learning model (Determining the subset of nodes using the computing system and the machine learning model amounts to adding the words “apply it” (or an equivalent) with the judicial exception, or mere instructions to implement an abstract idea on a computer, or merely using a computer as a tool to perform an abstract idea – see MPEP 2106.05(f).)
calculating solutions by the computing system (Calculating solutions to the multi-node problem using the computing system amounts to adding the words “apply it” (or an equivalent) with the judicial exception, or mere instructions to implement an abstract idea on a computer, or merely using a computer as a tool to perform an abstract idea – see MPEP 2106.05(f).)
storing, by the computing system, the one or more solutions to the multi-node problem in a computer memory. (Storing the solutions in memory amounts to adding insignificant extra-solution activity to the judicial exception – see MPEP 2106.05(g).)
Step 2B: The claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception. Specifically, the claim recites the additional elements:
accessing, by a computing system, a multi-node problem comprising a plurality of nodes, each respective node having one or more node features; (Accessing the multi-node problem on a generic computing system amounts to adding the words “apply it” (or an equivalent) with the judicial exception, or mere instructions to implement an abstract idea on a computer, or merely using a computer as a tool to perform an abstract idea – see MPEP 2106.05(f).)
providing, by the computing system, each respective node with each respective node feature to a machine learning model; (Providing features to a generic machine learning model is standard in the field of machine learning, and thus amounts to adding the words “apply it” (or an equivalent) with the judicial exception, or mere instructions to implement an abstract idea on a computer, or merely using a computer as a tool to perform an abstract idea – see MPEP 2106.05(f).)
determining the subset of nodes by the computing system using the machine learning model (Determining the subset of nodes using the computing system and the machine learning model amounts to adding the words “apply it” (or an equivalent) with the judicial exception, or mere instructions to implement an abstract idea on a computer, or merely using a computer as a tool to perform an abstract idea – see MPEP 2106.05(f).)
calculating solutions by the computing system (Calculating solutions to the multi-node problem using the computing system amounts to adding the words “apply it” (or an equivalent) with the judicial exception, or mere instructions to implement an abstract idea on a computer, or merely using a computer as a tool to perform an abstract idea – see MPEP 2106.05(f).)
storing, by the computing system, the one or more solutions to the multi-node problem in a computer memory. (Storing the solutions in memory amounts to adding insignificant extra-solution activity to the judicial exception – see MPEP 2106.05(g). Further, the limitation is directed to storing and retrieving information in memory, which the courts have found to be well-understood, routine, and conventional in the computer arts – see MPEP 2106.05(d).)
Claims 2-20:
Claim 2 recites The method of claim 1, wherein the subset of nodes of the plurality of nodes comprises non-zero nodes. This limitation merely specifies that the nodes selected as part of the mental process are non-zero nodes (i.e. nodes that are part of the solution). See MPEP 2106.04(a)(2)(III). Therefore, the claim merges with the abstract idea recited in claim 1, and does not recite additional elements that are sufficient to amount to significantly more than the abstract idea.
Claim 3 recites The method of claim 1, wherein providing each respective node and each respective node feature further comprises generating an embedded vector comprising one or more dimensions corresponding to each respective node feature. Generating an embedded vector with dimensions corresponding to node features can practically be performed in the human mind or with the aid of pen and paper (i.e. mental process), for example, by viewing the node features on a display and mentally determining the embedded vector to be a vector with as many dimensions as there are node features, with each element of the vector equal to its corresponding node feature. See MPEP 2106.04(a)(2)(III). Therefore, the claim merges with the abstract idea recited in claim 1, and does not recite additional elements that are sufficient to amount to significantly more than the abstract idea.
Claim 4 recites The method of claim 1, wherein determining the subset of nodes further comprises: determining, by the computing system using the machine learning model, a minimum value associated with each of the plurality of nodes; determining, by the computing system using the machine learning model, a probability that the minimum value associated with each of the plurality of nodes is a value associated with each node in an optimal solution; and identifying, by the computing system, the subset of nodes, each node of the subset of nodes identified as non-zero and characterized by a probability greater than or equal to a predetermined threshold. Determining a minimum value and associated optimality probability for each node and then identifying the subset of nodes that are non-zero with probability greater than or equal to a threshold are evaluations and judgements that can practically be performed in the human mind or with the aid of pen and paper (i.e. mental process). See MPEP 2106.04(a)(2)(III). Performing these steps using a generic computing system and a generic machine learning model amounts to adding the words “apply it” (or an equivalent) with the judicial exception, or mere instructions to implement an abstract idea on a computer, or merely using a computer as a tool to perform an abstract idea – see MPEP 2106.05(f). Therefore, the claim merges with the abstract idea recited in claim 1, and does not recite additional elements that are sufficient to amount to significantly more than the abstract idea.
Claim 5 recites The method of claim 1, wherein the machine learning model comprises a graph neural network. Graph neural networks are standard in the field of machine learning, and thus this limitation amounts to adding the words “apply it” (or an equivalent) with the judicial exception, or mere instructions to implement an abstract idea on a computer, or merely using a computer as a tool to perform an abstract idea – see MPEP 2106.05(f). Therefore, the claim does not recite additional elements that are sufficient to amount to significantly more than the abstract idea.
Claim 6 recites The method of claim 1, wherein the multi-node problem represents a multi-echelon inventory optimization problem. The claim merely limits the disclosed method to the field of multi-echelon inventory optimization, and thus amounts to generally linking the use of a judicial exception to a particular technological environment or field of use – see MPEP 2106.05(h). Therefore, the claim does not recite additional elements that are sufficient to amount to significantly more than the abstract idea.
Claim 7 recites The method of claim 1, wherein calculating the one or more solutions to the multi-node problem utilizes a guaranteed service model. Utilizing a guaranteed service model to calculate solutions amounts to adding insignificant extra-solution activity to the judicial exception – see MPEP 2106.05(g). Further, guaranteed service models are well-understood, routine, and conventional in the field of multi-node problems, per Eruguz: “Multi-echelon approaches are widely studied in the literature for allocating safety stocks in supply chains facing customer demand uncertainty. There exist two main approaches in the literature, namely, the stochastic-service model (SSM) and the guaranteed-service model (GSM) approaches…” (Eruguz et al., “A comprehensive survey of guaranteed-service models for multi-echelon inventory optimization”, pg. 111, section 1). See MPEP 2106.05(d). Therefore, the claim does not recite additional elements that are sufficient to amount to significantly more than the abstract idea.
Claim 8 recites The method of claim 1, wherein the machine learning model is trained at least in part on a historical dataset comprising a plurality of solutions to multi-node problems. Training a generic machine learning model using labeled training data is standard in the field of machine learning, and thus amounts to adding the words “apply it” (or an equivalent) with the judicial exception, or mere instructions to implement an abstract idea on a computer, or merely using a computer as a tool to perform an abstract idea – see MPEP 2106.05(f). Therefore, the claim does not recite additional elements that are sufficient to amount to significantly more than the abstract idea.
Claim 9 recites The method of claim 1, wherein the one or more solutions to the multi-node problem are provided to a second computing system. Providing the solutions to a second computing system amounts to adding insignificant extra-solution activity to the judicial exception – see MPEP 2106.05(g). Further, the limitation is directed to receiving or transmitting data over a network, which the courts have found to be well-understood, routine, and conventional in the computer arts – see MPEP 2106.05(d). Therefore, the claim does not recite additional elements that are sufficient to amount to significantly more than the abstract idea.
Claims 10 and 12-15 are system claims containing substantially the same elements as method claims 1-2 and 6-8, respectively, and are rejected on the same grounds under 35 U.S.C. 101 as claims 1-2 and 6-8, respectively, mutatis mutandis. The additional components of A computing system comprising: one or more processors; and a non-transitory computer readable medium comprising instructions that, when executed by the one or more processors, cause the system to perform operations are interpreted as a general-purpose computer and mere instructions to apply the judicial exception on the computer. Therefore, the claims do not recite additional elements that are sufficient to amount to significantly more than the abstract idea.
Claim 11 recites The system of claim 10, wherein the machine learning model includes an embedding module. Generating embeddings using a generic embedding module amounts to adding the words “apply it” (or an equivalent) with the judicial exception, or mere instructions to implement an abstract idea on a computer, or merely using a computer as a tool to perform an abstract idea – see MPEP 2106.05(f). Therefore, the claim does not recite additional elements that are sufficient to amount to significantly more than the abstract idea.
Claims 16-20 are product claims containing substantially the same elements as method claims 1, 2, 4, and 7, and system claim 11, respectively, and are rejected on the same grounds under 35 U.S.C. 101 as claims 1, 2, 4, 7, and 11, respectively, mutatis mutandis. The additional components of A non-transitory computer-readable storage medium storing a set of instructions that, when executed by one or more processors of a computer system, cause the computer system to perform operations are interpreted as a general-purpose computer and mere instructions to apply the judicial exception on the computer. Therefore, the claims do not recite additional elements that are sufficient to amount to significantly more than the abstract idea.
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 1-4, 6, 9-13, 16-18, and 20 are rejected under 35 U.S.C. 103 as being unpatentable over
Lauri et al. (hereinafter Lauri), “Learning fine-grained search space pruning and heuristics for combinatorial optimization” (published 05/08/2023) in view of
Saxena et al. (hereinafter Saxena), U.S. Patent Application Publication US-20150254589-A1 (published 09/10/2015).
Regarding Claim 1,
Lauri teaches A method comprising:
accessing, by a computing system, a multi-node problem comprising a plurality of nodes, each respective node having one or more node features; (Pg. 321, section 3: “For ease of exposition, we describe the framework in terms of the MCE [maximum clique enumeration] problem… In our case, we assume the instance is represented as an undirected graph G = (V, E).” Pg. 322, section 4: “We use the following graph-theoretic features… Feature (F4), the LCC [local clustering coefficient] of a vertex is the fraction of its neighbors with which the vertex forms a triangle, encapsulating the well-known small world phenomenon.” Pg. 324, section 5: “All experiments are executed on a machine equipped with an Intel Core i7-4770K CPU (3.5 GHz), 8 GB of RAM, running Ubuntu 16.04.” The computing system accesses a maximum clique enumeration problem (i.e. a multi-node problem) represented as a graph comprised of vertices (i.e. nodes), each associated with features such as local clustering coefficient.)
providing, by the computing system, each respective node with each respective node feature to a machine learning model; (Pg. 321, section 3: “To learn the mapping γ from T, we use a probabilistic classifier P which outputs a probability distribution over {0,1} for a given f(u) for u ∈ V.” Each node u of the plurality of nodes V is provided to probabilistic classifier P (i.e. a machine learning model).)
determining, by the computing system using the machine learning model, a subset of nodes of the plurality of nodes based at least in part on the respective node features; and (Pg. 321, section 3: “Define a confidence threshold q ∈ [0,1]. Delete from G each vertex predicted by P to not be in a solution with probability at least q…” Vertices are deleted from the graph (i.e. a subset of nodes is determined) based on the output of the probabilistic classifier (i.e. machine learning model) on the node features.)
calculating, by the computing system, one or more solutions to the multi-node problem based at least in part on the subset of nodes; (Pg. 321, section 3: “A natural parameterized search strategy, which we call probabilistic preprocessing (or single-stage sparsification), for enhancing a search algorithm A by P is as follows… Execute A with G′ as input instead of G.” Search algorithm A is executed (i.e. one or more solutions are calculated) using the updated graph G′ with the nodes removed (i.e. based on the subset of nodes).)
Lauri does not appear to explicitly disclose storing, by the computing system, the one or more solutions to the multi-node problem in a computer memory.
However, Saxena teaches storing, by the computing system, the one or more solutions to the multi-node problem in a computer memory. (0028: “The data 222, amongst other things, serves as a repository for storing data processed, received, and generated by one or more of the modules [210].” 0046-0052: “Following are the steps followed by the calculation module 216 to execute the dynamic programming approach… Store all the solution Si in an array of size T.” As can be seen in figure 2, the system 102 (i.e. computing system) includes memory 208, which includes a calculation module 216 configured to determine solutions to a multi-node problem, and data 222 configured to store data generated by the modules including the calculation module (i.e. the solutions are stored in a computer memory).)
It would have been obvious to one of ordinary skill in the art before the effective filing date of the present application to combine Lauri and Saxena. Lauri teaches simplifying multi-node optimization problems by pruning nodes which are predicted by a machine learning model to be irrelevant to the solution. Saxena teaches a computer system for performing multi-echelon inventory optimization, including storing solutions so that they may be used for management of a supply chain network. One of ordinary skill would have motivation to combine Lauri and Saxena because Lauri’s optimization method “is not restricted to MCE, but can be applied to other problems as well” (Lauri, pg. 321, section 3), and Saxena provides a system for implementing optimization methods to solve the problem of inventory management, which “is directly proportional to profit that any company may have and thus badly affects the profitability of the company” (Saxena, 0003).
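For context only, the single-stage sparsification workflow quoted above from Lauri (score each vertex with a classifier, delete vertices predicted to be irrelevant with confidence at least q, then run the solver on the pruned graph) can be sketched as follows. This is a hypothetical illustration by the editor, not code from Lauri, Saxena, or the application; all function and variable names are placeholders.

```python
# Hypothetical sketch of "probabilistic preprocessing" / single-stage
# sparsification: prune nodes a model predicts are not in any solution,
# then hand the reduced instance to a search algorithm.

def sparsify_and_solve(nodes, features, predict_proba, solver, q=0.9):
    """predict_proba(feature_vec) -> probability the node is NOT in a
    solution; nodes predicted irrelevant with confidence >= q are
    deleted. solver(kept_nodes) -> solution(s) on the pruned instance.
    All names here are illustrative placeholders."""
    kept = [n for n in nodes if predict_proba(features[n]) < q]
    return solver(kept)

# Toy usage: a one-feature "classifier" that reads the feature directly,
# and a "solver" that just returns the surviving nodes sorted.
nodes = ["a", "b", "c"]
features = {"a": [0.1], "b": [0.9], "c": [0.2]}
result = sparsify_and_solve(nodes, features,
                            predict_proba=lambda f: f[0],
                            solver=lambda kept: sorted(kept),
                            q=0.5)
# result == ["a", "c"]  (node "b" was pruned)
```

The sketch mirrors only the control flow the reference describes; the actual classifier, features, and solver in Lauri are substantially more involved.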
Regarding Claim 2, Lauri and Saxena teach The method of claim 1, as shown above.
Lauri also teaches wherein the subset of nodes of the plurality of nodes comprises non-zero nodes. (Pg. 321, section 3: “Define a confidence threshold q ∈ [0,1]. Delete from G each vertex predicted by P to not be in a solution with probability at least q…” The vertices not deleted from the graph (i.e. the subset of nodes) are vertices predicted to be relevant to a solution (i.e. non-zero nodes).)
Regarding Claim 3, Lauri and Saxena teach The method of claim 1, as shown above.
Lauri also teaches wherein providing each respective node and each respective node feature further comprises generating an embedded vector comprising one or more dimensions corresponding to each respective node feature. (Pg. 321, section 3: “f : V → R^d a mapping from a vertex to a d-dimensional feature space… To learn the mapping γ from T, we use a probabilistic classifier P which outputs a probability distribution over {0,1} for a given f(u) for u ∈ V.” Each vertex u (i.e. node) is mapped into feature space by f(u) (i.e. an embedded vector is generated) before being provided to the model.)
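For illustration of the mapping f : V → R^d discussed above, a node's features can simply become the dimensions of a vector, one dimension per feature. The sketch below is the editor's hypothetical; the feature names are invented and do not come from Lauri or the application.

```python
# Hypothetical sketch: map a node's named features to a d-dimensional
# vector (an "embedded vector" in the claim's terms), one dimension per
# feature, ordered by feature name for determinism.

def embed(node_features):
    """node_features: dict of feature name -> value. Returns a list whose
    length equals the number of features."""
    return [node_features[k] for k in sorted(node_features)]

vec = embed({"degree": 3.0, "lcc": 0.5})
# vec == [3.0, 0.5]  (dimensions ordered: "degree", then "lcc")
```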
Regarding Claim 4, Lauri and Saxena teach The method of claim 1, as shown above.
Lauri also teaches wherein determining the subset of nodes further comprises:
determining, by the computing system using the machine learning model, a minimum value associated with each of the plurality of nodes; (Pg. 321, section 3: “f : V → R^d a mapping from a vertex to a d-dimensional feature space. For reasons of scalability, we strive to keep d small…” Each vertex (i.e. node) is mapped into low-dimensional feature space (i.e. associated with a minimum value).)
determining, by the computing system using the machine learning model, a probability that the minimum value associated with each of the plurality of nodes is a value associated with each node in an optimal solution; and (Pg. 321, section 3: “To learn the mapping γ from T, we use a probabilistic classifier P which outputs a probability distribution over {0,1} for a given f(u) for u ∈ V.” Pg. 334, section 8: “[O]ur proposed framework relies on interpretable learning models with local features to prune the elements that are not in any optimal solution(s).” The output of probabilistic classifier P represents a probability that a node is relevant to an optimal solution.)
identifying, by the computing system, the subset of nodes, each node of the subset of nodes identified as non-zero and characterized by a probability greater than or equal to a predetermined threshold. (Pg. 321, section 3: “Define a confidence threshold q ∈ [0,1]. Delete from G each vertex predicted by P to not be in a solution with probability at least q…” Vertices are deleted from the graph or kept in the graph (i.e. a subset of nodes is determined) based on whether they are predicted to be relevant to the solution (i.e. non-zero) with a probability satisfying confidence threshold q.)
Regarding Claim 6, Lauri and Saxena teach The method of claim 1, as shown above.
Saxena also teaches wherein the multi-node problem represents a multi-echelon inventory optimization problem. (0029: “The present disclosure relates to system(s) and method(s) to provide inventory optimization in a multi-echelon supply chain network.”)
Regarding Claim 9, Lauri and Saxena teach The method of claim 1, as shown above.
Saxena also teaches wherein the one or more solutions to the multi-node problem are provided to a second computing system. (0022: “It will be understood that the system 102 may be accessed by multiple users through one or more user devices 104-1, 104-2 . . . 104-N, collectively referred to as user 104 hereinafter, or applications residing on the user devices 104. Examples of the user devices 104 may include, but are not limited to, a portable computer, a personal digital assistant, a handheld device, and a workstation. The user devices 104 are communicatively coupled to the system 102 through a network 106.” The system 102, which includes data 222 storing solutions, can be accessed by a user device such as a portable computer (i.e. a second computing system).)
Claims 10, 12, and 13 are system claims containing substantially the same elements as method claims 1, 2, and 6, respectively. Lauri and Saxena teach the elements of claims 1, 2, and 6, as shown above.
Saxena also teaches A computing system comprising: one or more processors; and a non-transitory computer readable medium comprising instructions that, when executed by the one or more processors, cause the system to perform operations to: (0024: “In one embodiment, the system 102 may include at least one processor 202… Among other capabilities, the at least one processor 202 is configured to fetch and execute computer-readable instructions stored in the memory 208.” 0010: “The present disclosure can also be viewed as providing a non-transitory computer readable medium embodying a program executable in a computing device to provide inventory optimization in a supply chain network.”)
Regarding Claim 11, Lauri and Saxena teach The system of claim 10, as shown above.
Lauri also teaches wherein the machine learning model includes an embedding module. (Pg. 321, section 3: “f : V → R^d a mapping from a vertex to a d-dimensional feature space… To learn the mapping γ from T, we use a probabilistic classifier P which outputs a probability distribution over {0,1} for a given f(u) for u ∈ V.” f is a mapping from a vertex into feature space (i.e. an embedding module).)
Claims 16-18 and 20 are product claims containing substantially the same elements as method claims 1, 2, and 4, and system claim 11, respectively. Lauri and Saxena teach the elements of claims 1, 2, 4, and 11 as shown above.
Saxena also teaches A non-transitory computer-readable storage medium storing a set of instructions that, when executed by one or more processors of a computer system, cause the computer system to perform operations comprising: (0024: “In one embodiment, the system 102 may include at least one processor 202… Among other capabilities, the at least one processor 202 is configured to fetch and execute computer-readable instructions stored in the memory 208.” 0010: “The present disclosure can also be viewed as providing a non-transitory computer readable medium embodying a program executable in a computing device to provide inventory optimization in a supply chain network.”)
Claims 5, 8, and 15 are rejected under 35 U.S.C. 103 as being unpatentable over Lauri in view of Saxena, and further in view of
Khalil et al. (hereinafter Khalil), “MIP-GNN: A Data-Driven Framework for Guiding Combinatorial Solvers” (published 05/27/2022).
Regarding Claim 5, Lauri and Saxena teach The method of claim 1, as shown above.
Lauri and Saxena do not appear to explicitly disclose wherein the machine learning model comprises a graph neural network.
However, Khalil teaches wherein the machine learning model comprises a graph neural network. (Pg. 1, col. 2, para. 2: “To guide a solver in finding a solution or certifying optimality faster for a given instance, we perform supervised GNN [graph neural network] training to predict variable biases…” Pg. 4, col. 2, para. 4: “The primary use case for the bias predictions will be to guide node selection in the branch-and-bound algorithm. The node selection strategy will use MIP-GNN predictions to score ‘open nodes’ of the search tree. If the predictions are good, then selecting nodes that are consistent with them should bring us closer to finding a good feasible solution quickly.”)
It would have been obvious to one of ordinary skill in the art before the effective filing date of the present application to combine Lauri, Saxena, and Khalil. Lauri teaches simplifying multi-node optimization problems by pruning nodes which are predicted by a machine learning model to be irrelevant to the solution. Saxena teaches a computer system for performing multi-echelon inventory optimization, including storing solutions so that they may be used for management of a supply chain network. Khalil teaches guiding solvers for multi-node optimization problems by predicting variable biases and selecting relevant nodes using a graph neural network. One of ordinary skill would have motivation to combine Lauri, Saxena, and Khalil because both Lauri and Khalil teach performing machine learning on graph nodes to simplify a multi-node problem before calculating solutions with a solver, and Lauri explicitly suggests using a GNN for this purpose: “We showcase how a relatively lightweight machine learning approach (e.g., gradient boosted trees) for vertex/edge pruning can provide significant improvements in the performance of existing solvers. In fact, deep learning techniques like graph neural networks (GNN) (Scarselli et al. 2009) might also be an alternative in this regard, and could potentially remove the necessity of computing hand-crafted features…” (Lauri, pg. 334, section 6.2).
Regarding Claim 8, Lauri and Saxena teach The method of claim 1, as shown above.
Lauri and Saxena do not appear to explicitly disclose wherein the machine learning model is trained at least in part on a historical dataset comprising a plurality of solutions to multi-node problems.
However, Khalil teaches wherein the machine learning model is trained at least in part on a historical dataset comprising a plurality of solutions to multi-node problems. (Pg. 5, col. 1, para. 3: “This makes our approach most suitable for very challenging combinatorial problems with available historical instances that can be used for training.” Pg. 5, col. 2, para. 3: “Given a BLP training instance, the main data collection step is to estimate the variable biases. To do so, we must collect a set of high-quality feasible solutions. We leverage a very useful feature of modern solvers: the solution pool…” The GNN (i.e. machine learning model) is trained using historical solution data.)
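The training setup described above (fit a classifier on historical instances, where each node is labeled by whether it appeared in a past solution) can be illustrated with a deliberately simple stand-in model. The nearest-centroid sketch below is the editor's hypothetical, not Khalil's GNN or the claimed model; all names are placeholders.

```python
# Hypothetical sketch: train a trivial classifier on a historical dataset
# of (node feature vector, was-in-a-past-solution) pairs, then predict
# relevance for new nodes. Nearest-centroid stands in for the real model.

def train(history):
    """history: list of (feature_vector, in_solution_bool) pairs.
    Returns a predict(features) -> bool closure."""
    pos = [f for f, y in history if y]        # nodes seen in solutions
    neg = [f for f, y in history if not y]    # nodes never in solutions

    def centroid(vectors):
        d = len(vectors[0])
        return [sum(v[i] for v in vectors) / len(vectors) for i in range(d)]

    cp, cn = centroid(pos), centroid(neg)

    def sq_dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))

    # Predict "relevant" when closer to the positive centroid.
    return lambda f: sq_dist(f, cp) < sq_dist(f, cn)

predict = train([([1.0], True), ([0.9], True), ([0.0], False), ([0.1], False)])
# predict([0.8]) -> True; predict([0.05]) -> False
```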
Claim 15 is a system claim containing substantially the same elements as method claim 8. Lauri, Saxena, and Khalil teach the elements of claim 8, as shown above.
Claims 7, 14, and 19 are rejected under 35 U.S.C. 103 as being unpatentable over Lauri in view of Saxena, and further in view of
Eruguz et al. (hereinafter Eruguz), “A comprehensive survey of guaranteed-service models for multi-echelon inventory optimization” (published 12/01/2015).
Regarding Claim 7, Lauri and Saxena teach The method of claim 1, as shown above.
Lauri and Saxena do not appear to explicitly disclose wherein calculating the one or more solutions to the multi-node problem utilizes a guaranteed service model.
However, Eruguz teaches wherein calculating the one or more solutions to the multi-node problem utilizes a guaranteed service model. (Pg. 111, section 1: “Multi-echelon approaches are widely studied in the literature for allocating safety stocks in supply chains facing customer demand uncertainty. There exist two main approaches in the literature, namely, the stochastic-service model (SSM) and the guaranteed service model (GSM) approaches”)
It would have been obvious to one of ordinary skill in the art before the effective filing date of the present application to combine Lauri, Saxena, and Eruguz. Lauri teaches simplifying multi-node optimization problems by pruning nodes which are predicted by a machine learning model to be irrelevant to the solution. Saxena teaches a computer system for performing multi-echelon inventory optimization, including storing solutions so that they may be used for management of a supply chain network. Eruguz teaches solving multi-echelon inventory optimization problems using a guaranteed service model (GSM). One of ordinary skill in the art would have been motivated to combine Lauri, Saxena, and Eruguz because “[o]ne reason for the wider application of the GSM in industry is its computational efficiency in terms of system optimization. Managers seem also comfortable with the notion of guaranteed-service time despite the eventual safety stock cost increases that stem from this assumption… the GSM approach yields a better performance for moderate costs…” (Eruguz, pg. 113, section 2).
Claim 14 is a system claim containing substantially the same elements as method claim 7. Lauri, Saxena, and Eruguz teach the elements of claim 7, as shown above.
Claim 19 is a product claim containing substantially the same elements as method claim 7. Lauri, Saxena, and Eruguz teach the elements of claim 7, as shown above.
Conclusion
Any inquiry concerning this communication or earlier communications from the examiner should be directed to BENJAMIN M ROHD whose telephone number is (571)272-6445. The examiner can normally be reached Mon-Thurs 8:00-6:00 EST.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Viker Lamardo, can be reached at (571) 270-5871. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/B.M.R./Examiner, Art Unit 2147
/VIKER A LAMARDO/Supervisory Patent Examiner, Art Unit 2147