Prosecution Insights
Last updated: April 19, 2026
Application No. 18/144,505

METHOD FOR SOLVING PROBLEM AND SYSTEM THEREOF

Non-Final OA: §101, §102, §103

Filed: May 08, 2023
Examiner: STANLEY, JEREMY L
Art Unit: 2127
Tech Center: 2100 — Computer Architecture & Software
Assignee: Samsung Electronics
OA Round: 1 (Non-Final)

Grant Probability: 48% (Moderate)
OA Rounds: 1-2
To Grant: 3y 2m
With Interview: 92%

Examiner Intelligence

Career Allow Rate: 48% (131 granted / 276 resolved; -7.5% vs TC avg)
Interview Lift: +44.7% (strong; allow rate with vs. without interview, across resolved cases with interview)
Avg Prosecution: 3y 2m (typical timeline)
Total Applications: 304 across all art units (28 currently pending)

Statute-Specific Performance

§101: 10.2% (-29.8% vs TC avg)
§102: 13.5% (-26.5% vs TC avg)
§103: 49.1% (+9.1% vs TC avg)
§112: 17.1% (-22.9% vs TC avg)

Tech Center averages are estimates. Based on career data from 276 resolved cases.
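As a quick sanity check, the headline examiner figures above reduce to simple arithmetic. The sketch below recomputes them; the without-interview allow rate is reconstructed from the reported 92% and +44.7% figures (92.0 − 44.7 ≈ 47.3) and is illustrative, not taken from the underlying dataset.

```python
# Recomputing the dashboard's headline examiner statistics.
# The with/without-interview split is reconstructed from the reported
# 92% and +44.7% figures, not from raw case data.

granted, resolved = 131, 276
career_allow_rate = granted / resolved        # ~0.475, shown as 48%

with_interview = 0.92                         # reported allow rate with interview
without_interview = 0.473                     # implied: 92.0% minus the 44.7% lift
interview_lift = with_interview - without_interview

print(f"career allow rate: {career_allow_rate:.1%}")   # 47.5%
print(f"interview lift:    {interview_lift:+.1%}")     # +44.7%
```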

Office Action

Grounds of rejection: §101, §102, §103
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA. This action is responsive to the Application filed on May 8, 2023. Claims 1-16 are pending in the case. Claims 1, 15, and 16 are the independent claims. This action is non-final.

Claim Rejections - 35 USC § 101

35 U.S.C. 101 reads as follows:

Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.

Claims 1-16 are rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea (mental steps) without significantly more. This judicial exception is not integrated into a practical application because any additional elements amount to implementing the abstract idea on a generic computer. The claims do not include additional elements that are sufficient to amount to significantly more than the judicial exception.

Regarding independent claims 1, 15, and 16, and relying on the evaluation flowchart in MPEP 2106:

Step 1 (Is the claim to a process, machine, manufacture, or composition of matter?): Yes. Claim 1 is a method (process). Claim 15 is a system (machine). Claim 16 is a recording medium (article of manufacture).

Step 2a Prong One (Does the claim recite an abstract idea?): Yes.
Claims 1, 15, and 16 recite:

- setting at least one current search node on a search tree corresponding to a solution space of a target problem (a mental process of determining that at least one node of a tree is designated as a current node);

- selecting candidate search nodes from among child nodes of the at least one current search node, a number of the current search nodes being equal to a number of items (a mental process involving determining/selecting a set of candidate search nodes from the child nodes of the node determined to be the current search node, where the number of current search nodes is equal to a number);

- determining at least one next search node from among the candidate search nodes based on results of search simulation for the candidate search nodes (a mental process of evaluation, such as a human mentally evaluating/determining at least one node to be a next search node based on information, including information resulting from a human mentally simulating search for the candidate search nodes); and

- determining a solution to the target problem based on a result of a search using the at least one next search node (a mental process of evaluation, such as a human mentally determining the solution to the problem based on the node determined to be the next search node, such as by determining a solution which corresponds to that particular node).

Under the broadest reasonable interpretation, these steps may be performed mentally, using mental observation and mental determination, including by a human using a physical aid such as pen and paper, and including a human mentally performing observations and mathematical calculations; the steps therefore correspond to the Mental Processes grouping.

Step 2a Prong Two (Does the claim recite additional elements that integrate the judicial exception into a practical application?): No.
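Read as an algorithm rather than a mental process, the four recited steps resemble a model-guided beam search over a tree. The sketch below is only one illustration of that reading; `model_infer`, `simulate`, the beam width, and the step count are stand-ins chosen for the example, not anything taken from the claims or the record.

```python
# Illustrative model-guided beam search mirroring the four recited steps.
# The "machine-trained model" is stubbed with random confidence scores.
import random

def model_infer(node):
    """Stub model: returns (child, confidence) pairs for a node's children."""
    return [((node, i), random.random()) for i in range(3)]

def simulate(node, depth=3):
    """Stub "search simulation": greedy rollout accumulating confidences."""
    total = 0.0
    for _ in range(depth):
        child, conf = max(model_infer(node), key=lambda c: c[1])
        node, total = child, total + conf
    return total

def solve(root, beam=2, steps=4):
    current = [root]                                    # 1. set current search node(s)
    for _ in range(steps):
        candidates = []
        for node in current:
            inferred = sorted(model_infer(node), key=lambda c: -c[1])
            candidates += [c[0] for c in inferred[:beam]]   # 2. select candidates
        scored = sorted(((simulate(n), n) for n in candidates),
                        key=lambda t: -t[0])            # 3. simulate, pick next node(s)
        current = [n for _, n in scored[:beam]]
    return current[0]                                   # 4. solution from final search
```

Each pass narrows the frontier to the best-simulating candidates, which is the structural skeleton the rejection characterizes as mentally performable.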
Claims 1, 15, and 16 additionally recite that the number of items is inferred by a machine-trained model (mere instructions to apply the exception using generic computer components as discussed in MPEP 2106.05(f) with respect to use of a machine-trained model to perform the inference).

Claim 15 additionally recites that the system comprises at least one processor; and a memory configured to store program code and a machine-trained model associated with a target problem, the program code comprising setting code, selecting code, first determining code, and second determining code respectively configured to cause the at least one processor to perform operations comprising the limitations discussed above (mere instructions to apply the exception using generic computer components as discussed in MPEP 2106.05(f)).

Claim 16 additionally recites that the recording medium is a non-transitory computer-readable recording medium storing program code executable by at least one processor, the program code comprising setting code, selecting code, first determining code, and second determining code respectively configured to cause the at least one processor to perform operations comprising the limitations discussed above (mere instructions to apply the exception using generic computer components as discussed in MPEP 2106.05(f)).

Therefore, in view of the considerations set forth in MPEP 2106.04(d), 2106.05(a)-(c) and (e)-(h), the additional elements disclosed above, alone or in combination, do not integrate the judicial exception into a practical application, as they are mere insignificant extra-solution activity combined with implementing the abstract idea using generic computer components.

Step 2b (Does the claim recite additional elements that amount to significantly more than the judicial exception?): No.
Relying on the same analysis as Step 2a Prong Two (see MPEP 2106.05.I.A: limitations that the courts have found not to be enough to qualify as “significantly more” when recited in a claim with a judicial exception include adding the words “apply it” (or an equivalent) with the judicial exception, or mere instructions to implement an abstract idea on a computer, e.g., a limitation indicating that a particular function such as creating and maintaining electronic records is performed by a computer, as discussed in Alice Corp., 573 U.S. at 225-26, 110 USPQ2d at 1984 (see MPEP 2106.05(f)); simply appending well-understood, routine, conventional activities previously known to the industry, specified at a high level of generality, to the judicial exception; and adding insignificant extra-solution activity to the judicial exception, as discussed in MPEP 2106.05(g)), claims 1, 15, and 16 do not recite any additional elements that amount to significantly more than the abstract idea.

As discussed above:

Claims 1, 15, and 16 additionally recite that the number of items is inferred by a machine-trained model (mere instructions to apply the exception using generic computer components as discussed in MPEP 2106.05(f) with respect to use of a machine-trained model to perform the inference).

Claim 15 additionally recites that the system comprises at least one processor; and a memory configured to store program code and a machine-trained model associated with a target problem, the program code comprising setting code, selecting code, first determining code, and second determining code respectively configured to cause the at least one processor to perform operations comprising the limitations discussed above (mere instructions to apply the exception using generic computer components as discussed in MPEP 2106.05(f)).
Claim 16 additionally recites that the recording medium is a non-transitory computer-readable recording medium storing program code executable by at least one processor, the program code comprising setting code, selecting code, first determining code, and second determining code respectively configured to cause the at least one processor to perform operations comprising the limitations discussed above (mere instructions to apply the exception using generic computer components as discussed in MPEP 2106.05(f)).

The additional elements discussed above, in combination with the abstract idea, are not sufficient to amount to significantly more than the judicial exception, as they are well-understood, routine, and conventional activity as disclosed, in combination with generic computer functions and components used to implement the abstract idea.

Regarding dependent claim 2:

Step 2a Prong One: incorporates the rejection of claim 1.

Step 2a Prong Two: the claims additionally recite wherein the machine-trained model is configured to perform inferencing in an autoregressive manner (mere instructions to apply the exception using generic computer components as discussed in MPEP 2106.05(f)).

Step 2b: the claims additionally recite wherein the machine-trained model is configured to perform inferencing in an autoregressive manner (mere instructions to apply the exception using generic computer components as discussed in MPEP 2106.05(f)).

Regarding dependent claim 3:

Step 2a Prong One: incorporates the rejection of claim 1; the claims further recite wherein the setting the at least one current search node comprises setting a plurality of current search nodes, the plurality of current search nodes being on a same level on the search tree (a mental process involving a human mentally evaluating/determining that a plurality of current search nodes on a same level of the tree are to be set/designated).
Step 2a Prong Two: the claims do not recite any other limitations in addition to the abstract idea discussed above.

Step 2b: the claims do not recite any other limitations in addition to the abstract idea discussed above.

Regarding dependent claim 4:

Step 2a Prong One: incorporates the rejection of claim 3; the claims further recite wherein a number of next search nodes is equal to a number of the plurality of current search nodes (a mental process of evaluation, such as a human mentally determining that the number of next search nodes is to be equal to the number of current search nodes).

Step 2a Prong Two: the claims do not recite any other limitations in addition to the abstract idea discussed above.

Step 2b: the claims do not recite any other limitations in addition to the abstract idea discussed above.

Regarding dependent claim 5:

Step 2a Prong One: incorporates the rejection of claim 1; the claims further recite a search of a first subtree having the first node as its root node and a search of a second subtree having the second node as its root node (a mental process of evaluation, such as a human mentally searching first and second subtrees corresponding to first and second nodes in a parallel manner).

Step 2a Prong Two: the claims further recite wherein the at least one current search node includes a first node and a second node (field of use and technological environment as discussed in MPEP 2106.05(h)); and that the search of the first subtree and search of the second subtree are performed in parallel (mere instructions to apply the exception using generic computer components as discussed in MPEP 2106.05(f)).
Step 2b: the claims further recite wherein the at least one current search node includes a first node and a second node (field of use and technological environment as discussed in MPEP 2106.05(h)); and that the search of the first subtree and search of the second subtree are performed in parallel (mere instructions to apply the exception using generic computer components as discussed in MPEP 2106.05(f)).

Regarding dependent claim 6:

Step 2a Prong One: incorporates the rejection of claim 1; the claim further recites the at least one current search node includes a first node and a second node that are on a same level on the search tree (a mental process of evaluation, such as a human mentally determining (in the mental step of setting the current search node as discussed with respect to claim 1) that the current search node includes first and second nodes which are on a same level of a tree); and a number of candidate search nodes selected from among child nodes of the first node is equal to a number of candidate search nodes selected from among child nodes of the second node (a mental process of evaluation, such as a human mentally determining (in the mental step of selecting candidate search nodes) to select equal numbers of candidate search nodes corresponding to child nodes of the first node and candidate search nodes corresponding to child nodes of the second node).

Step 2a Prong Two: the claims do not recite any other limitations in addition to the abstract idea discussed above.

Step 2b: the claims do not recite any other limitations in addition to the abstract idea discussed above.

Regarding dependent claim 7:

Step 2a Prong One: incorporates the rejection of claim 1; the claim further recites wherein the selecting the candidate search nodes comprises selecting the candidate search nodes based on confidence scores of items (a mental process of evaluation, such as a human mentally determining to select the candidate search nodes based on confidence scores).
Step 2a Prong Two: the claims additionally recite that the confidence scores are acquired as a result of inferencing performed by the machine-trained model (mere instructions to apply the exception using generic computer components as discussed in MPEP 2106.05(f) and a field of use and technological environment as discussed in MPEP 2106.05(h)).

Step 2b: the claims additionally recite that the confidence scores are acquired as a result of inferencing performed by the machine-trained model (mere instructions to apply the exception using generic computer components as discussed in MPEP 2106.05(f) and a field of use and technological environment as discussed in MPEP 2106.05(h)).

Regarding dependent claim 8:

Step 2a Prong One: incorporates the rejection of claim 1. The claim further recites: performing sampling using confidence scores of items acquired as a result of inferencing performed by the machine-trained model (a mental process of evaluation, such as a human mentally performing sampling/selection based on confidence scores of items); and selecting the candidate search nodes based on a result of the sampling (a mental process of evaluation, such as a human mentally determining which candidate search nodes are selected based on the result of the sampling).

Step 2a Prong Two: the claims do not recite any other limitations in addition to the abstract idea discussed above.

Step 2b: the claims do not recite any other limitations in addition to the abstract idea discussed above.

Regarding dependent claim 9:

Step 2a Prong One: incorporates the rejection of claim 1.
Step 2a Prong Two: the claims additionally recite wherein the selecting the candidate search nodes comprises selecting the candidate search nodes using another machine-trained model, wherein the another machine-trained model is a model trained to receive information of the child nodes and infer the candidate search nodes based on the received information (mere instructions to apply the exception using generic computer components as discussed in MPEP 2106.05(f)).

Step 2b: the claims additionally recite wherein the selecting the candidate search nodes comprises selecting the candidate search nodes using another machine-trained model, wherein the another machine-trained model is a model trained to receive information of the child nodes and infer the candidate search nodes based on the received information (mere instructions to apply the exception using generic computer components as discussed in MPEP 2106.05(f)).

Regarding dependent claim 10:

Step 2a Prong One: incorporates the rejection of claim 1; the claims further recite: search simulation for the first candidate search node and search simulation for the second candidate search node are performed in parallel (a mental process of evaluation, such as a human mentally determining (in the mental step of simulating candidate search nodes as discussed with respect to claim 1) to perform the simulation, where the mental simulation may involve mentally simulating the results of searching based on the nodes and determining their probable results, similar to performing a thought experiment, including using physical aids such as pencil and paper).

Step 2a Prong Two: the claims further recite that the candidate search nodes include a first candidate search node and a second candidate search node (field of use and technological environment as discussed in MPEP 2106.05(h)); and that the search simulation is performed in parallel (mere instructions to apply the exception using generic computer components as discussed in MPEP 2106.05(f)).
Step 2b: the claims further recite that the candidate search nodes include a first candidate search node and a second candidate search node (field of use and technological environment as discussed in MPEP 2106.05(h)); and that the search simulation is performed in parallel (mere instructions to apply the exception using generic computer components as discussed in MPEP 2106.05(f)).

Regarding dependent claim 11:

Step 2a Prong One: incorporates the rejection of claim 1; the claims further recite wherein the determining the at least one next search node comprises (a mental process of evaluation as discussed with respect to claim 1): deriving predicted paths for the candidate search nodes by performing search simulation, which selects the at least one next search node based on confidence scores of items acquired as a result of inferencing performed by the machine-trained model (a mental process of evaluation, such as a human mentally deriving the predicted paths based on a mental simulation selecting a next node based on a confidence score, where the mental simulation may involve mentally simulating the results of selecting the different paths and determining their probable results, similar to performing a thought experiment); evaluating predicted solutions corresponding to the predicted paths using an evaluation function associated with the target problem (a mental process of evaluation of the predicted solutions, including by using an evaluation function (such as a mathematical function, formula, or calculation performed mentally, including with an aid such as pencil and paper) associated with the target problem); and determining the at least one next search node from among the candidate search nodes based on results of the evaluating (a mental process of evaluation including a human mentally determining a next search node based on the evaluation).

Step 2a Prong Two: the claims do not recite any other limitations in addition to the abstract idea discussed above.
Step 2b: the claims do not recite any other limitations in addition to the abstract idea discussed above.

Regarding dependent claim 12:

Step 2a Prong One: incorporates the rejection of claim 1; the claims further recite wherein the determining the at least one next search node (a mental process of evaluation as discussed with respect to claim 1) comprises: evaluating values of the candidate search nodes via sampling-based search simulation (a mental process of evaluating the values of the nodes, including based on mental sampling and mental simulation); and determining the at least one next search node from among the candidate search nodes based on the evaluated values (a mental process of evaluating, such as a human mentally determining a next search node based on the evaluated values); wherein the evaluating the values of the candidate search nodes (a mental process of evaluating as discussed above) comprises: deriving a plurality of predicted paths for a particular candidate search node by repeatedly performing the search simulation using, as sampling probabilities, confidence scores of items (a mental process of evaluation, such as a human mentally deriving the predicted paths based on a repeated mental simulation based on confidence scores which are sampling probabilities, including mentally utilizing mathematical calculations and with the assistance of physical aids such as pencil and paper); evaluating predicted solutions corresponding to the plurality of predicted paths using an evaluation function associated with the target problem (a mental process of evaluation of the predicted solutions, including by using an evaluation function (such as a mathematical function, formula, or calculation performed mentally, including with an aid such as pencil and paper) associated with the target problem); and determining a value of the particular candidate search node based on results of the evaluating the predicted solutions (a mental process of evaluation including a human mentally determining a value of a particular node based on the results of evaluation).

Step 2a Prong Two: the claims additionally recite that the confidence scores are acquired as a result of inferencing performed by the machine-trained model (mere instructions to apply the exception using generic computer components as discussed in MPEP 2106.05(f) and a field of use and technological environment as discussed in MPEP 2106.05(h)).

Step 2b: the claims additionally recite that the confidence scores are acquired as a result of inferencing performed by the machine-trained model (mere instructions to apply the exception using generic computer components as discussed in MPEP 2106.05(f)).

Regarding dependent claim 13:

Step 2a Prong One: incorporates the rejection of claim 1; the claims further recite: wherein the at least one next search node includes a first node and a second node (a mental process of evaluation, such as a human mentally determining (in the mental step of determining at least one next search node as discussed with respect to claim 1) that the next search node includes first and second nodes); wherein the determining the solution to the target problem (a mental process of evaluation as discussed with respect to claim 1) comprises: deriving a first path and a second path passing through the first node and the second node, respectively, on the search tree (a mental process of evaluation, such as a human mentally deriving the first and second paths passing through first and second nodes in the search tree); evaluating solutions corresponding to the first path and the second path using an evaluation function associated with the target problem (a mental process of evaluation of the solutions corresponding to first and second paths, including by using an evaluation function (such as a mathematical function, formula, or calculation performed mentally, including with an aid such as pencil and paper) associated with the target problem); and determining the solution to the target problem based on results of the evaluating (a mental process of evaluation including a human mentally determining a solution to the target problem based on the results of evaluation).

Step 2a Prong Two: the claims do not recite any other limitations in addition to the abstract idea discussed above.

Step 2b: the claims do not recite any other limitations in addition to the abstract idea discussed above.

Regarding dependent claim 14:

Step 2a Prong One: incorporates the rejection of claim 1; the claims further recite deriving the solution to the target problem again (a mental process of evaluation including a human mentally re-determining a solution (such as double-checking or otherwise reevaluating the solution) to the target problem).

Step 2a Prong Two: the claims additionally recite acquiring an additionally-trained machine-trained model using the determined solution to the target problem (insignificant extra-solution activity as discussed in MPEP 2106.05(g)) and that the deriving of the solution to the target problem again is performed by using the acquired machine-trained model (mere instructions to apply the exception using generic computer components as discussed in MPEP 2106.05(f)).

Step 2b: the claims additionally recite acquiring an additionally-trained machine-trained model using the determined solution to the target problem (insignificant extra-solution activity as discussed in MPEP 2106.05(g), which can be reevaluated as well-understood, routine, conventional activity such as data gathering under MPEP 2106.05(d)) and that the deriving of the solution to the target problem again is performed by using the acquired machine-trained model (mere instructions to apply the exception using generic computer components as discussed in MPEP 2106.05(f)).
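The sampling-based value estimation attributed to claim 12 (repeated rollouts that use confidence scores as sampling probabilities, with an evaluation function scoring each predicted path) can be sketched as below. The function names, toy model, and sample count are illustrative assumptions for the example; nothing here reproduces the actual claim language or the specification.

```python
# Sketch of sampling-based search simulation: confidence scores act as
# sampling probabilities for repeated rollouts, and an evaluation
# function scores each predicted path. All names are illustrative.
import random

def sample_path(node, model_infer, depth=4):
    """Draw one predicted path, choosing each step with probability
    proportional to the model's confidence scores."""
    path = [node]
    for _ in range(depth):
        children = model_infer(node)               # [(child, confidence), ...]
        node = random.choices([c for c, _ in children],
                              weights=[w for _, w in children])[0]
        path.append(node)
    return path

def node_value(node, model_infer, evaluate, n_samples=32):
    """Estimated value of a candidate node: the mean evaluation-function
    score over repeatedly sampled predicted paths."""
    scores = [evaluate(sample_path(node, model_infer)) for _ in range(n_samples)]
    return sum(scores) / len(scores)
```

With a toy model that always offers two children, every sampled path has five nodes, so averaging any path-length evaluation function over the samples is deterministic even though the paths themselves are random.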
Therefore, in view of the considerations set forth in MPEP 2106.04(d), 2106.05(a)-(c) and (e)-(h), the additional elements as recited in the dependent claims discussed above, alone or in combination, do not integrate the judicial exception into a practical application, as they are mere insignificant extra-solution activity, combined with implementing the abstract idea using generic computer components, and limitations describing a field of use or technological environment. The additional elements discussed above, in combination with the abstract idea, are not sufficient to amount to significantly more than the judicial exception, as they are well-understood, routine, and conventional activity as disclosed, in combination with generic computer functions and components used to implement the abstract idea, and limitations describing a field of use or technological environment.

Claim Rejections - 35 USC § 102

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:

A person shall be entitled to a patent unless -

(a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale or otherwise available to the public before the effective filing date of the claimed invention.
(a)(2) the claimed invention was described in a patent issued under section 151, or in an application for patent published or deemed published under section 122(b), in which the patent or application, as the case may be, names another inventor and was effectively filed before the effective filing date of the claimed invention.

Claims 1, 7, 8, and 11-16 are rejected under 35 U.S.C. 102(a)(2) as being anticipated by Bizzarri et al. (US 20240187879 A1).

With respect to claims 1, 15, and 16, Bizzarri teaches a system for solving a target problem comprising: at least one processor; and a memory configured to store program code and a machine-trained model associated with a target problem, the program code comprising setting code, selecting code, first determining code, and second determining code configured to cause the at least one processor to perform corresponding steps of a method; a non-transitory computer-readable recording medium storing program code executable by at least one processor, the program code comprising setting code, selecting code, first determining code, and second determining code configured to cause the at least one processor to perform the corresponding steps of the method (e.g. paragraph 0013, apparatus including memory and processor that executes procedure in the memory; paragraph 0080, network optimization problem to be solved having a huge number of potential candidate solutions to be explored; paragraphs 0091-0092, optimizing using hardware/software solutions; simulator executed using hardware/software; paragraph 0098, using MCTS to solve game tree; paragraph 0111, applying MCTS approach to solution of CCO problem); and the method for solving a target problem using a machine-trained model, the method being performed by at least one computing device and comprising:

setting at least one current search node on a search tree corresponding to a solution space of a target problem (e.g. paragraphs 0103-0107, Fig. 4A, MCTS algorithm iteratively building search tree, with four steps of selection phase starting from root node, path following through child nodes of search tree until leaf node 430 is reached; paragraph 0122, Fig. 5, in selection phase 505, creating path of nodes through search tree, selecting child nodes 427 by applying node selection policy until tree leaf node 430 is reached);

selecting candidate search nodes from among child nodes of the at least one current search node, a number of the candidate search nodes being equal to a number of items inferred by a machine-trained model (e.g. paragraphs 0103-0107, Figs. 4A-B, in selection phase each child node 427 is selected by applying formula as shown in Fig. 4B; Fig. 4C, expanding leaf node 430 by selecting new child nodes 435 to add to the search tree according to valid possible actions in that state; paragraph 0119, augmenting tree search with machine learning model (MLM); MLM assigns values to the new child nodes in the expansion phase; paragraphs 0123-0124, Fig. 5, at tree leaf node 430, expansion phase 510 expands the search tree by adding new child nodes 515 to leaf node 430, according to all valid actions in the state of the leaf node 430; if child nodes identified, MLM module 520 generates/predicts the values assigned to different configurations corresponding to the added child nodes 515, approximating the values that would be calculated by running real simulations; predicted KPIs used to estimate value of each added child node 515; paragraph 0126, selecting those added child nodes 515 having high/most promising predicted values for submission to simulator; paragraph 0137, calculated values of MLM for child nodes provide reliable ranking of newly added child nodes, guaranteeing a considerable boost in the selection phase; i.e.
the model/MLM predicts/infers a given number of child nodes having a high potential value and this number of the child nodes is selected as candidate nodes for simulation); determining at least one next search node from among the candidate search nodes based on results of search simulation for the candidate search nodes (e.g. paragraph 0103-0107, Figs. 4A-C, in playout/simulation/rollout phase 415, newly added child nodes simulated until terminal state is reached; paragraph 0126, selecting those added child nodes 515 having high/most promising predicted values for submission to simulator; paragraph 0144, best valued child node selected and corresponding configuration submitted to simulation by simulator 210; paragraph 0145, simulator 210 runs simulation; paragraph 0148, suitable goodness assessed for at least one new child node/optimization model assesses that no further improvements in optimization are achieved or predetermined number of iterations reached); and determining a solution to the target problem based on a result of a search using the at least one next search node (e.g. paragraph 0103-0107, Figs. 4A-C, simulating child nodes until terminal state is reached; value of terminal state can be numerical representation of the outcome i.e. win or loss reached during the playout phase 420; paragraph 0140, optimal configuration chosen by the optimization system; paragraph 0148, suitable goodness assessed for new tree child node; paragraph 0149, deploying found best configuration). With respect to claim 7, Bizzarri teaches all of the limitations of claim 1 as previously discussed, and further teaches wherein the selecting the candidate search nodes comprises selecting the candidate search nodes based on confidence scores of items acquired as a result of inferencing performed by the machine-trained model (e.g. paragraph 0124-0125, MLM module predicting estimated values of KPIs for configurations corresponding to added child nodes, as shown in Fig. 
5; values estimated by MLM module for added child nodes ranging from -1 to +1, indicating a scale of values which are “really bad,” “bad,” “fair,” “fairly good,” “good,” “very good,” and “super good”; i.e. where the predicted estimate provides an indication of a degree of confidence that a given solution will be a “good” solution and/or likely to provide a “good” result). With respect to claim 8, Bizzarri teaches all of the limitations of claim 1 as previously discussed, and further teaches wherein the selecting the candidate search nodes comprises: performing sampling using confidence scores of items acquired as a result of inferencing performed by the machine-trained model; and selecting the candidate search nodes based on a result of the sampling (e.g. paragraph 0117, sampling moves to reach terminal state; paragraph 0122, applying node selection policy, such as Thompson sampling; paragraph 0124-0125, MLM module predicting estimated values of KPIs for configurations corresponding to added child nodes, as shown in Fig. 5; values estimated by MLM module for added child nodes ranging from -1 to +1, indicating a scale of values which are “really bad,” “bad,” “fair,” “fairly good,” “good,” “very good,” and “super good”; paragraph 0126, selecting those added child nodes having high/most promising predicted value; i.e. where the predicted estimate provides an indication of a degree of confidence that a given solution will be a “good” solution and/or likely to provide a “good” result, and a portion of the child nodes is selected/sampled from the overall group based on this score). 
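The examiner reads two selection mechanisms onto claims 7 and 8: deterministic top-k selection by predicted value, and probabilistic sampling weighted by predicted value. The following Python sketch is illustrative only; the function names are hypothetical, and the [-1, +1] score range merely mirrors the MLM scale described above rather than any code from Bizzarri:

```python
import math
import random

def select_top_k(scores, k):
    """Claim 7 style: pick the k children with the highest
    model-predicted confidence scores (values in [-1, +1])."""
    return sorted(scores, key=scores.get, reverse=True)[:k]

def sample_k(scores, k, rng=random):
    """Claim 8 style: sample k distinct children without replacement,
    using a softmax of the predicted scores as sampling probabilities."""
    nodes = list(scores)
    weights = [math.exp(scores[n]) for n in nodes]
    chosen = []
    for _ in range(min(k, len(nodes))):
        total = sum(weights)
        r = rng.random() * total
        acc = 0.0
        for i, w in enumerate(weights):
            acc += w
            if r <= acc:
                chosen.append(nodes.pop(i))
                weights.pop(i)
                break
    return chosen

scores = {"n1": 0.9, "n2": -0.4, "n3": 0.6, "n4": 0.1}
print(select_top_k(scores, 2))  # ['n1', 'n3']
```

Either routine yields a fixed number of candidate nodes per expansion, which is how the examiner maps the "number of candidate search nodes equal to a number of items inferred" limitation onto Bizzarri's MLM-ranked children.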
With respect to claim 11, Bizzarri teaches all of the limitations of claim 1 as previously discussed, and further teaches wherein the determining the at least one next search node comprises: deriving predicted paths for the candidate search nodes by performing search simulation, which selects the at least one next search node based on confidence scores of items acquired as a result of inferencing performed by the machine-trained model (e.g. paragraph 0103-0107, Fig. 4A-B, in selection phase each child node 427 is selected by applying formula as shown in Fig. 4B; Fig. 4C, expanding leaf node 430 by selecting new child nodes 435 to add to the search tree according to valid possible actions in that state; paragraph 0119, augmenting tree search with machine learning model (MLM); MLM assigns values to the new child nodes in the expansion phase; paragraph 0108, starting from root nodes, child nodes in path are determined; paragraphs 0123-0124, Fig. 5, at tree leaf node 430, expansion phase 510 expands the search tree by adding new child nodes 515 to leaf node 430, according to all valid actions in the state of the leaf node 430; if child nodes identified, MLM module 520 generates/predicts the values assigned to different configurations corresponding to the added child nodes 515, approximating the values that would be calculated by running real simulations; predicted KPIs used to estimate value of each added child node 515; paragraph 0126, selecting those added child nodes 515 having high/most promising predicted values for submission to simulator); evaluating predicted solutions corresponding to the predicted paths using an evaluation function associated with the target problem; and determining the at least one next search node from among the candidate search nodes based on results of the evaluating (e.g. paragraph 0103-0107, Figs. 4A-C, simulating child nodes until terminal state is reached; value of terminal state can be numerical representation of the outcome i.e. 
win or loss reached during the playout phase 420; paragraph 0124, predicted KPI values corresponding to added child nodes calculated by applying reward function; paragraph 0126, added child nodes having high/promising predicted values selected; paragraph 0140, optimal configuration chosen by the optimization system; paragraph 0148, suitable goodness assessed for new tree child node; paragraph 0149, deploying found best configuration). With respect to claim 12, Bizzarri teaches all of the limitations of claim 1 as previously discussed, and further teaches wherein the determining the at least one next search node comprises: evaluating values of the candidate search nodes via sampling-based search simulation (e.g. paragraph 0103-0107, Figs. 4A-C, simulating child nodes until terminal state is reached; value of terminal state can be numerical representation of the outcome i.e. win or loss reached during the playout phase 420; paragraph 0117, sampling moves to reach terminal state; paragraph 0122, applying node selection policy, such as Thompson sampling; paragraph 0124, predicted KPI values corresponding to added child nodes calculated by applying reward function; paragraph 0126, selection (i.e. sampling) of only those child nodes having high/promising predicted values for submission to the network simulator; receiving simulated KPIs along with value of reward function); and determining the at least one next search node from among the candidate search nodes based on the evaluated values (e.g. 
paragraph 0126, added child nodes having high/promising predicted values selected; paragraph 0140, optimal configuration chosen by the optimization system; paragraph 0148, suitable goodness assessed for new tree child node; paragraph 0149, deploying found best configuration), and wherein the evaluating the values of the candidate search nodes comprises: deriving a plurality of predicted paths for a particular candidate search node by repeatedly performing the search simulation using, as sampling probabilities, confidence scores of items acquired as a result of inferencing performed by the machine-trained model (e.g. paragraph 0103-0107, Fig. 4A-B, in selection phase each child node 427 is selected by applying formula as shown in Fig. 4B; Fig. 4C, expanding leaf node 430 by selecting new child nodes 435 to add to the search tree according to valid possible actions in that state; paragraph 0119, augmenting tree search with machine learning model (MLM); MLM assigns values to the new child nodes in the expansion phase; paragraph 0108, starting from root nodes, child nodes in path are determined; paragraphs 0123-0124, Fig. 5, at tree leaf node 430, expansion phase 510 expands the search tree by adding new child nodes 515 to leaf node 430, according to all valid actions in the state of the leaf node 430; if child nodes identified, MLM module 520 generates/predicts the values assigned to different configurations corresponding to the added child nodes 515, approximating the values that would be calculated by running real simulations; predicted KPIs used to estimate value of each added child node 515; paragraph 0126, selecting those added child nodes 515 having high/most promising predicted values for submission to simulator); evaluating predicted solutions corresponding to the plurality of predicted paths using an evaluation function associated with the target problem (e.g. paragraph 0103-0107, Figs. 
4A-C, simulating child nodes until terminal state is reached; value of terminal state can be numerical representation of the outcome i.e. win or loss reached during the playout phase 420; paragraph 0124, predicted KPI values corresponding to added child nodes calculated by applying reward function); and determining a value of the particular candidate search node based on results of the evaluating the predicted solutions (e.g. paragraph 0126, added child nodes having high/promising predicted values selected; paragraph 0140, optimal configuration chosen by the optimization system; paragraph 0148, suitable goodness assessed for new tree child node; paragraph 0149, deploying found best configuration). With respect to claim 13, Bizzarri teaches all of the limitations of claim 1 as previously discussed, and further teaches wherein the at least one next search node includes a first node and a second node (e.g. paragraph 0103, iteratively building search tree through four steps/phases per iteration; paragraph 0106, Fig. 4C, expanding leaf node 430 by selecting new child nodes 435 to add to the search tree according to valid possible actions in that state; paragraphs 0123-0124, Fig. 5, at tree leaf node 430, expansion phase 510 expands the search tree by adding new child nodes 515 to leaf node 430, according to all valid actions in the state of the leaf node 430; paragraph 0126, selecting those child nodes 515 having high/most promising predicted values i.e. multiple child/next search nodes may be selected, either within a single iteration or across multiple iterations), and wherein the determining the solution to the target problem comprises: deriving a first path and a second path passing through the first node and the second node, respectively, on the search tree (e.g. paragraph 0103-0107, Fig. 4A-B, iteratively building search tree through four steps/phases per iteration; in selection phase each child node 427 is selected by applying formula as shown in Fig. 4B; Fig. 
4C, expanding leaf node 430 by selecting new child nodes 435 to add to the search tree according to valid possible actions in that state; paragraph 0119, augmenting tree search with machine learning model (MLM); MLM assigns values to the new child nodes in the expansion phase; paragraph 0108, starting from root nodes, child nodes in path are determined; paragraphs 0123-0124, Fig. 5, at tree leaf node 430, expansion phase 510 expands the search tree by adding new child nodes 515 to leaf node 430, according to all valid actions in the state of the leaf node 430; if child nodes identified, MLM module 520 generates/predicts the values assigned to different configurations corresponding to the added child nodes 515, approximating the values that would be calculated by running real simulations; predicted KPIs used to estimate value of each added child node 515; paragraph 0126, selecting those added child nodes 515 having high/most promising predicted values for submission to simulator; i.e. paths corresponding to the multiple child nodes may be derived, either within a single iteration or across multiple iterations); evaluating solutions corresponding to the first path and the second path using an evaluation function associated with the target problem; and determining the solution to the target problem based on results of the evaluating (e.g. paragraph 0103-0107, Figs. 4A-C, simulating child nodes until terminal state is reached; value of terminal state can be numerical representation of the outcome i.e. win or loss reached during the playout phase 420; paragraph 0124, predicted KPI values corresponding to added child nodes calculated by applying reward function; paragraph 0126, added child nodes having high/promising predicted values selected; paragraph 0140, optimal configuration chosen by the optimization system; paragraph 0148, suitable goodness assessed for new tree child node; paragraph 0149, deploying found best configuration; i.e. 
solutions corresponding to the paths of the selected nodes are evaluated either within a single iteration or across multiple iterations, and then an optimal solution is selected from among these for deployment). With respect to claim 14, Bizzarri teaches all of the limitations of claim 1 as previously discussed, and further teaches the method further comprising: acquiring an additionally-trained machine-trained model using the determined solution to the target problem; and deriving the solution to the target problem again using the acquired machine-trained model (e.g. paragraphs 0128-0130, all configurations proposed by tree search algorithm submitted to simulation by simulator along with respective KPI value are gathered and stored in cache used to train the MLM; at the very start of the search tree the data cache is empty; gradually new data added to the cache; MLM is retrained when a given number of configurations has been gathered; MLM is updated to be as accurate as possible; retraining MLM at various frequencies; at the very beginning the dataset in the database is scarce and accuracy of MLM may not be very reliable, but as more data is gathered and stored in the database, the MLM will increase the accuracy of its predictions; i.e. as data is accumulated through using the system (including the MLM), the MLM is retrained on the data and utilized in subsequent uses of the system, analogous to acquiring an additionally-trained (retrained) model using determined solutions (from previous usage of the system) and deriving a solution again using the acquired model (i.e. using the retrained model again after it is retrained)). Claim Rejections – 35 USC § 103 The following is a quotation of 35 U.S.C. 
103 which forms the basis for all obviousness rejections set forth in this Office action: A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made. The factual inquiries set forth in Graham v. John Deere Co., 383 U.S. 1, 148 USPQ 459 (1966), that are applied for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows: 1. Determining the scope and contents of the prior art. 2. Ascertaining the differences between the prior art and the claims at issue. 3. Resolving the level of ordinary skill in the pertinent art. 4. Considering objective evidence present in the application indicating obviousness or nonobviousness. This application currently names joint inventors. In considering patentability of the claims under pre-AIA 35 U.S.C. 103(a), the examiner presumes that the subject matter of the various claims was commonly owned at the time any inventions covered therein were made absent any evidence to the contrary. Applicant is advised of the obligation under 37 CFR 1.56 to point out the inventor and invention dates of each claim that was not commonly owned at the time a later invention was made in order for the examiner to consider the applicability of pre-AIA 35 U.S.C. 103(c) and potential pre-AIA 35 U.S.C. 102(e), (f) or (g) prior art under pre-AIA 35 U.S.C. 103(a). Claim 2 is rejected under 35 U.S.C. 103 as being unpatentable over Bizzarri in view of Huang et al. (US 20210150771 A1). 
With respect to claim 2, Bizzarri teaches all of the limitations of claim 1 as previously discussed, and further teaches wherein the machine-trained model is configured to perform inferencing in a regressive manner (e.g. paragraph 0128, MLM 520 is a regressor). Bizzarri does not explicitly disclose that the machine-trained model is configured to perform inferencing in an autoregressive manner. However, Huang teaches that the machine-trained model is configured to perform inferencing in an autoregressive manner (e.g. paragraph 0045, entropy model is an autoregressive model that is applied along tree traversal path from root to each node; resulting output is estimated distribution for each node). Accordingly, it would have been obvious to one of ordinary skill in the art before the effective filing date of the invention having the teachings of Bizzarri and Huang in front of him to have modified the teachings of Bizzarri (directed to optimizing a mobile communications network using tree-based search), to incorporate the teachings of Huang (directed to encoding octree structured point cloud data) to include the capability to configure the model (i.e. the regressor of Bizzarri) to perform inferencing in an autoregressive manner. One of ordinary skill would have been motivated to perform such a modification in order to increase the speed and accuracy of data analysis as described in Huang (paragraph 0078). Claim 9 is rejected under 35 U.S.C. 103 as being unpatentable over Bizzarri in view of Graepel et al. (US 20180032864 A1). With respect to claim 9, Bizzarri teaches all of the limitations of claim 1 as previously discussed. Bizzarri does not explicitly disclose wherein the selecting the candidate search nodes comprises selecting the candidate search nodes using another machine-trained model, and wherein the another machine-trained model is a model trained to receive information of the child nodes and infer the candidate search nodes based on the received information. 
However, Graepel teaches wherein the selecting the candidate search nodes comprises selecting the candidate search nodes using another machine-trained model, and wherein the another machine-trained model is a model trained to receive information of the child nodes and infer the candidate search nodes based on the received information (e.g. paragraph 0083, indicating that reinforcement learning system 100 of Fig. 1 can perform process 400 of Fig. 4 for performing search of state tree; paragraph 0085, system selecting actions to be performed by agent to interact with environment by traversing state tree until reaching leaf state/node; paragraphs 0087-0089, for each outgoing edge from in-tree node, determining adjusted action score for the edge based on action score for edge, visit count for the edge, and prior probability; computing adjusted action score for given edge by adding to action score a bonus that is proportional to prior probability but decays with repeated visits to encourage exploration; selecting action represented by edge with highest adjusted action score; system continues selecting actions to be performed until reaching leaf state/leaf node in the state tree; paragraph 0090-0091, system expands leaf node using policy neural network by adding respective new edge for each valid action, and determines posterior probability for each new edge using policy neural network; paragraph 0092, evaluating leaf node using neural network to generate leaf evaluation score for the leaf node; paragraph 0094, evaluating leaf node with neural network by performing rollout until reaching terminal state by selecting actions to be performed; neural network trained to receive the rollout data to generate respective rollout action probability for each action in set of possible actions; then selecting the action having a highest rollout action probability as the action to be performed; alternatively, sampling from possible actions in accordance with probabilities to select 
the action to be performed by the agent; i.e. a trained model/neural network receives information regarding the leaf node and infers the set of possible/candidate actions (represented by child nodes of the leaf node), including corresponding probabilities, and selects one of these). Accordingly, it would have been obvious to one of ordinary skill in the art before the effective filing date of the invention having the teachings of Bizzarri and Graepel in front of him to have modified the teachings of Bizzarri (directed to optimizing a mobile communications network using tree-based search), to incorporate the teachings of Graepel (directed to selecting actions to be performed by a reinforcement learning agent using tree search) to include the capability to use another model/neural network, trained to perform selection of candidate nodes based on received information of child nodes, to perform the selecting of the candidate search nodes (as taught by Graepel). One of ordinary skill would have been motivated to perform such a modification in order to select actions to be performed even when a state tree is too large to be exhaustively searched, with reduced computing resource and time requirements as described in Graepel (paragraph 0007). Claims 3, 5, and 10 are rejected under 35 U.S.C. 103 as being unpatentable over Bizzarri in view of Hackett (US 20230128776 A1). With respect to claim 3, Bizzarri teaches all of the limitations of claim 1 as previously discussed. Bizzarri does not explicitly disclose wherein the setting the at least one current search node comprises setting a plurality of current search nodes, the plurality of current search nodes being on a same level on the search tree . However, Hackett teaches wherein the setting the at least one current search node comprises setting a plurality of current search nodes, the plurality of current search nodes being on a same level on the search tree (e.g. 
paragraphs 0035, 0038, processing all decision nodes in parallel at once; processing all leaf nodes in parallel at once; paragraph 0041, decision tree including hierarchy of decision nodes with leaf node residing at the terminus of each path through the decision tree; paragraph 0042, using paths through decision tree for each leaf node of the plurality of leaf nodes to infer whether the leaf node is selected; all decision node comparisons determined, and the decision nodes can be processed in parallel; paragraph 0043, the entire set of leaf nodes of the decision tree can be processed in parallel; paragraph 0044, Fig. 3A, illustrating decision tree including decision nodes and leaf nodes; i.e. where multiple nodes of the decision/search tree are to be processed in parallel, at least a subset of the multiple nodes processed in parallel will be nodes on the same level/depth of the tree). Accordingly, it would have been obvious to one of ordinary skill in the art before the effective filing date of the invention having the teachings of Bizzarri and Hackett in front of him to have modified the teachings of Bizzarri (directed to optimizing a mobile communications network using tree-based search), to incorporate the teachings of Hackett (directed to parallel inference processing by decision tree leaf nodes) to include the capability to search a plurality of the nodes of the tree in parallel, such that the current search node includes a plurality of search nodes on the same level. One of ordinary skill would have been motivated to perform such a modification in order to reduce latency of decision tree inference and provide rapid turnaround of result of the decision tree, leading to higher processing throughput as described in Hackett (paragraph 0043). With respect to claim 5, Bizzarri teaches all of the limitations of claim 1 as previously discussed. 
Bizzarri does not explicitly disclose wherein the at least one current search node includes a first node and a second node, and a search of a first subtree having the first node as its root node and a search of a second subtree having the second node as its root node are performed in parallel. However, Hackett teaches wherein the at least one current search node includes a first node and a second node, and a search of a first subtree having the first node as its root node and a search of a second subtree having the second node as its root node are performed in parallel (e.g. paragraphs 0035, 0038, processing all decision nodes in parallel at once; processing all leaf nodes in parallel at once; paragraph 0041, decision tree including hierarchy of decision nodes with leaf node residing at the terminus of each path through the decision tree; paragraph 0042, using paths through decision tree for each leaf node of the plurality of leaf nodes to infer whether the leaf node is selected; all decision node comparisons determined, and the decision nodes can be processed in parallel; paragraph 0043, the entire set of leaf nodes of the decision tree can be processed in parallel; paragraph 0044, Fig. 3A, illustrating decision tree including decision nodes and leaf nodes). Accordingly, it would have been obvious to one of ordinary skill in the art before the effective filing date of the invention having the teachings of Bizzarri and Hackett in front of him to have modified the teachings of Bizzarri (directed to optimizing a mobile communications network using tree-based search), to incorporate the teachings of Hackett (directed to parallel inference processing by decision tree leaf nodes) to include the capability to search a plurality of the nodes of the tree in parallel. 
One of ordinary skill would have been motivated to perform such a modification in order to reduce latency of decision tree inference and provide rapid turnaround of result of the decision tree, leading to higher processing throughput as described in Hackett (paragraph 0043). With respect to claim 10, Bizzarri teaches all of the limitations of claim 1 as previously discussed. Bizzarri does not explicitly disclose wherein the candidate search nodes include a first candidate search node and a second candidate search node, and search simulation for the first candidate search node and search simulation for the second candidate search node are performed in parallel. However, Hackett teaches wherein the candidate search nodes include a first candidate search node and a second candidate search node, and search simulation for the first candidate search node and search simulation for the second candidate search node are performed in parallel (e.g. paragraphs 0035, 0038, processing all decision nodes in parallel at once; processing all leaf nodes in parallel at once; paragraph 0041, decision tree including hierarchy of decision nodes with leaf node residing at the terminus of each path through the decision tree; paragraph 0042, using paths through decision tree for each leaf node of the plurality of leaf nodes to infer whether the leaf node is selected; all decision node comparisons determined, and the decision nodes can be processed in parallel; paragraph 0043, the entire set of leaf nodes of the decision tree can be processed in parallel; paragraph 0044, Fig. 3A, illustrating decision tree including decision nodes and leaf nodes). 
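As applied to claim 10, the proposed combination amounts to dispatching one independent search simulation per candidate node concurrently and keeping the best-valued result. A minimal Python sketch under that reading, using a hypothetical `rollout` stand-in (the real values in Bizzarri would come from the network simulator, not a random function):

```python
import random
from concurrent.futures import ThreadPoolExecutor

def rollout(node, seed):
    """Hypothetical playout: walk to a terminal state and return a
    win/loss style value in [-1, +1] (random stand-in here)."""
    return random.Random(seed).uniform(-1.0, 1.0)

def simulate_in_parallel(candidates):
    """Run one search simulation per candidate node concurrently and
    return the candidate with the best simulated value."""
    with ThreadPoolExecutor(max_workers=len(candidates)) as pool:
        values = list(pool.map(rollout, candidates, range(len(candidates))))
    return max(zip(candidates, values), key=lambda nv: nv[1])[0]

best = simulate_in_parallel(["c1", "c2", "c3"])
```

Seeding each rollout keeps the parallel run reproducible; the per-candidate simulations share no state, which is what makes the parallel arrangement in Hackett applicable.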
Accordingly, it would have been obvious to one of ordinary skill in the art before the effective filing date of the invention having the teachings of Bizzarri and Hackett in front of him to have modified the teachings of Bizzarri (directed to optimizing a mobile communications network using tree-based search), to incorporate the teachings of Hackett (directed to parallel inference processing by decision tree leaf nodes) to include the capability to perform search simulation for a plurality of the nodes of the tree in parallel. One of ordinary skill would have been motivated to perform such a modification in order to reduce latency of decision tree inference and provide rapid turnaround of result of the decision tree, leading to higher processing throughput as described in Hackett (paragraph 0043). Claims 4 and 6 are rejected under 35 U.S.C. 103 as being unpatentable over Bizzarri in view of Hackett, further in view of Dvoretzki et al. (US 9391738 B2). With respect to claim 4, Bizzarri in view of Hackett teaches all of the limitations of claim 3 as previously discussed. Bizzarri and Hackett do not explicitly disclose wherein a number of next search nodes is equal to a number of the plurality of current search nodes. However, Dvoretzki teaches wherein a number of next search nodes is equal to a number of the plurality of current search nodes (e.g. Fig. 3, showing that, for a given number of current search nodes at a higher level, the number of next nodes selected at the corresponding lower level is equal; col. 8 line 66-col. 9 line 10, searching tree graph 300 (of Fig. 3) in parallel executing ith level branch decision process and an i-1th level confirmation process; nodes in a path having an accumulated distance greater than or equal to search radius may be pruned; nodes marked in Fig. 3 with shading have been selected; unmarked nodes in Fig. 3 have not been selected because they are not a candidate for selection or were pruned; i.e. 
as shown in Fig. 3, for each selected node at a given level, such as the four nodes 326, 340, 350, and 354 in level 3 (where these may be searched in parallel), an equal number of nodes (four) is selected as next nodes at the next level, such as nodes 336, 346, 352, and 356). Accordingly, it would have been obvious to one of ordinary skill in the art before the effective filing date of the invention having the teachings of Bizzarri, Hackett, and Dvoretzki in front of him to have modified the teachings of Bizzarri (directed to optimizing a mobile communications network using tree-based search) and Hackett (directed to parallel inference processing by decision tree leaf nodes), to incorporate the teachings of Dvoretzki (directed to accelerating a decoder for searching and decoding a tree graph) to include the capability to set the number of next search nodes to be equal to a number of current search nodes. One of ordinary skill would have been motivated to perform such a modification in order to mitigate the inherent sequential nature of conventional decoders to parallelize the branch decision and confirmation steps, allowing the decoder to execute the branch decision by predicting or guessing the correct node without waiting for the confirmation step, thereby using only a single clock cycle as described in Dvoretzki (col. 6 line 61-col. 7 line 2). With respect to claim 6, Bizzarri teaches all of the limitations of claim 1 as previously discussed. Bizzarri does not explicitly disclose wherein the at least one current search node includes a first node and a second node that are on a same level on the search tree. However, Hackett teaches wherein the at least one current search node includes a first node and a second node that are on a same level on the search tree (e.g. 
paragraphs 0035, 0038, processing all decision nodes in parallel at once; processing all leaf nodes in parallel at once; paragraph 0041, decision tree including hierarchy of decision nodes with leaf node residing at the terminus of each path through the decision tree; paragraph 0042, using paths through decision tree for each leaf node of the plurality of leaf nodes to infer whether the leaf node is selected; all decision node comparisons determined, and the decision nodes can be processed in parallel; paragraph 0043, the entire set of leaf nodes of the decision tree can be processed in parallel; paragraph 0044, Fig. 3A, illustrating decision tree including decision nodes and leaf nodes; i.e. where multiple nodes of the decision/search tree are to be processed in parallel, at least two of the nodes processed in parallel will be nodes on the same level/depth of the tree). Accordingly, it would have been obvious to one of ordinary skill in the art before the effective filing date of the invention having the teachings of Bizzarri and Hackett in front of him to have modified the teachings of Bizzarri (directed to optimizing a mobile communications network using tree-based search), to incorporate the teachings of Hackett (directed to parallel inference processing by decision tree leaf nodes) to include the capability to search a plurality of the nodes of the tree in parallel, such that the current search node includes at least two search nodes on the same level. One of ordinary skill would have been motivated to perform such a modification in order to reduce latency of decision tree inference and provide rapid turnaround of result of the decision tree, leading to higher processing throughput as described in Hackett (paragraph 0043). Bizzarri and Hackett do not explicitly disclose wherein a number of candidate search nodes selected from among child nodes of the first node is equal to a number of candidate search nodes selected from among child nodes of the second node. 
However, Dvoretzki teaches wherein a number of candidate search nodes selected from among child nodes of the first node is equal to a number of candidate search nodes selected from among child nodes of the second node (e.g. Fig. 3, showing that, for a given set of current search nodes at a higher level, the number of next nodes selected at the corresponding lower level is equal; col. 8 line 66-col. 9 line 10, searching tree graph 300 (of Fig. 3) in parallel executing ith level branch decision process and an i-1th level confirmation process; nodes in a path having an accumulated distance greater than or equal to search radius may be pruned; nodes marked in Fig. 3 with shading have been selected; unmarked nodes in Fig. 3 have not been selected because they are not a candidate for selection or were pruned; i.e. as shown in Fig. 3, for each selected node at a given level, such as the four nodes 326, 340, 350, and 354 in level 3 (where these may be searched in parallel), an equal number of nodes (four) is selected as next nodes at the next level, such as nodes 336, 346, 352, and 356; that is, for each search node at the higher level, a same number of nodes is selected at the next level, such as selecting a single node at the next level for each respective node at the higher level). 
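The equal-count selection attributed to Dvoretzki (a fixed number of next-level nodes kept per current node, so counts at successive levels match) reduces to a per-node best-child pick. The sketch below is a hypothetical illustration, not Dvoretzki's decoder code:

```python
def select_next_level(current_nodes, children_of, score):
    """For each current search node, keep its single best-scoring child,
    so the number of next search nodes equals the number of current nodes."""
    return [max(children_of[n], key=score) for n in current_nodes]

children_of = {"a": ["a1", "a2"], "b": ["b1", "b2"]}
score = {"a1": 0.2, "a2": 0.9, "b1": 0.5, "b2": 0.1}.get
print(select_next_level(["a", "b"], children_of, score))  # ['a2', 'b1']
```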
Accordingly, it would have been obvious to one of ordinary skill in the art before the effective filing date of the invention, having the teachings of Bizzarri, Hackett, and Dvoretzki in front of him, to have modified the teachings of Bizzarri (directed to optimizing a mobile communications network using tree-based search) and Hackett (directed to parallel inference processing by decision tree leaf nodes) to incorporate the teachings of Dvoretzki (directed to accelerating a decoder for searching and decoding a tree graph) to include the capability to, for each node at a current/higher level, select a same number of nodes, each corresponding to a respective higher-level node, to search at the next/lower level. One of ordinary skill would have been motivated to perform such a modification in order to mitigate the inherent sequential nature of conventional decoders by parallelizing the branch decision and confirmation steps, allowing the decoder to execute the branch decision by predicting or guessing the correct node without waiting for the confirmation step, thereby using only a single clock cycle as described in Dvoretzki (col. 6 line 61-col. 7 line 2).

It is noted that any citation to specific pages, columns, lines, or figures in the prior art references and any interpretation of the references should not be considered to be limiting in any way. "The use of patents as references is not limited to what the patentees describe as their own inventions or to the problems with which they are concerned. They are part of the literature of the art, relevant for all they contain." In re Heck, 699 F.2d 1331, 1332-33, 216 USPQ 1038, 1039 (Fed. Cir. 1983) (quoting In re Lemelson, 397 F.2d 1006, 1009, 158 USPQ 275, 277 (CCPA 1968)). Further, a reference may be relied upon for all that it would have reasonably suggested to one having ordinary skill in the art, including nonpreferred embodiments. Merck & Co. v. Biocraft Laboratories, 874 F.2d 804, 10 USPQ2d 1843 (Fed. Cir.), cert. denied, 493 U.S. 975 (1989). See also Upsher-Smith Labs. v. Pamlab, LLC, 412 F.3d 1319, 1323, 75 USPQ2d 1213, 1215 (Fed. Cir. 2005); Celeritas Technologies Ltd. v. Rockwell International Corp., 150 F.3d 1354, 1361, 47 USPQ2d 1516, 1522-23 (Fed. Cir. 1998).

Conclusion

The prior art made of record and not relied upon is considered pertinent to applicant's disclosure.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to JEREMY L STANLEY whose telephone number is (469) 295-9105. The examiner can normally be reached Monday-Friday from 9:00 AM to 5:00 PM CST. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Abdullah Al Kawsar, can be reached at (571) 270-3169. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of an application may be obtained from Patent Center and the Private Patent Application Information Retrieval (PAIR) system. Status information for published applications may be obtained from Patent Center or Private PAIR. Status information for unpublished applications is available through Patent Center and Private PAIR for authorized users only. Should you have questions about access to the Private PAIR system, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free).

Examiner interviews are available via telephone, in person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) Form at https://www.uspto.gov/patents/uspto-automated-interview-request-air-form.

/JEREMY L STANLEY/
Primary Examiner, Art Unit 2127

Prosecution Timeline

May 08, 2023
Application Filed
Dec 29, 2025
Non-Final Rejection — §101, §102, §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12591827
ETHICAL CONFIDENCE FABRICS: MEASURING ETHICAL ALGORITHM DEVELOPMENT
2y 5m to grant Granted Mar 31, 2026
Patent 12580783
CONFIGURING 360-DEGREE VIDEO WITHIN A VIRTUAL CONFERENCING SYSTEM
2y 5m to grant Granted Mar 17, 2026
Patent 12572266
ACCESSING AND DISPLAYING INFORMATION CORRESPONDING TO PAST TIMES AND FUTURE TIMES
2y 5m to grant Granted Mar 10, 2026
Patent 12561041
Systems, Methods, and Graphical User Interfaces for Interacting with Virtual Reality Environments
2y 5m to grant Granted Feb 24, 2026
Patent 12555684
ASSESSING A TREATMENT SERVICE BASED ON A MEASURE OF TRUST DYNAMICS
2y 5m to grant Granted Feb 17, 2026
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

1-2
Expected OA Rounds
48%
Grant Probability
92%
With Interview (+44.7%)
3y 2m
Median Time to Grant
Low
PTA Risk
Based on 276 resolved cases by this examiner. Grant probability derived from career allow rate.
