Prosecution Insights
Last updated: April 19, 2026
Application No. 18/071,355

SYSTEM AND METHOD FOR DERIVING A PERFORMANCE METRIC OF AN ARTIFICIAL INTELLIGENCE (AI) MODEL

Non-Final OA: §101, §102, §103, §112
Filed: Nov 29, 2022
Examiner: DAY, ROBERT N
Art Unit: 2122
Tech Center: 2100 — Computer Architecture & Software
Assignee: Intelus Inc.
OA Round: 1 (Non-Final)
Grant Probability: 23% (At Risk)
Expected OA Rounds: 1-2
Estimated Time to Grant: 4y 3m
Grant Probability With Interview: 46%

Examiner Intelligence

Career Allow Rate: 23% (5 granted / 22 resolved; -32.3% vs TC avg) — grants only 23% of cases
Interview Lift: +23.2% (strong) on resolved cases with interview
Avg Prosecution: 4y 3m (typical timeline); 38 applications currently pending
Total Applications: 60 across all art units (career history)

Statute-Specific Performance

§101: 32.6% (-7.4% vs TC avg)
§103: 35.3% (-4.7% vs TC avg)
§102: 12.9% (-27.1% vs TC avg)
§112: 18.3% (-21.7% vs TC avg)
Black line = Tech Center average estimate • Based on career data from 22 resolved cases

Office Action

Rejections under §101, §102, §103, and §112
Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

DETAILED ACTION

This action is in response to the application filed 29 November 2022. Claims 1-20 are pending and have been examined.

Claim Rejections - 35 USC § 112

The following is a quotation of 35 U.S.C. 112(b):

(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.

The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:

The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.

Claims 1 and 11 are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or for applications subject to pre-AIA 35 U.S.C. 112, the applicant), regards as the invention. Claim 1 recites the element "the first leaf node" in the limitation "propagating the at least one unlabeled example from the root node to the first leaf node" (emphasis added). There is insufficient antecedent basis for this limitation in the claim. Claim 11 is rejected under the same rationale.

Claim Rejections - 35 USC § 101

35 U.S.C. 101 reads as follows:

Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.

Claims 1-20 are rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more. 
Regarding Claim 1

Step 1

Claim 1 recites a method, and thus the claimed process falls within a statutory category of invention.

Step 2A Prong 1

The claim recites deriving at least one ... metric, which is a mental process. The claim recites estimating a relative size of a first partition of the sample set of examples (E), which is a mental process. The claim recites populating a binary decision tree by adding at least one unlabeled example from the sample set of examples (E) at a root node of the binary decision tree, which is a mental process. The claim recites partitioning the sample set of examples (E) into the first partition that comprises a subset of the sample set of examples (E), which is a mental process and/or a mathematical concept when interpreting partition as an operation on a mathematical model (specification, [0044]: "The partition model may be represented by a mathematical model for data types, which is the binary decision tree. The partition model may be built incrementally as the binary decision tree, and defined by a root node, child nodes and leaf nodes of the binary decision tree"). The claim recites propagating the at least one unlabeled example from the root node to the first leaf node in the binary decision tree, wherein the first partition comprises the subset of the sample set of examples (E) that have propagated to the first leaf node of the binary decision tree, which is a mental process when interpreting the instant propagate as a step of assignment (specification, [0008]: "the at least one unlabeled example is propagated from the root node to the first leaf node by applying a root predicate ... and assigning the at least one unlabeled example to a ... child node"). The claim recites estimating the relative size of the first partition that corresponds to the first leaf node to derive at least one performance metric of the AI model, which is a mental process. Thus, the claim recites an abstract idea. 
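For context, the tree operations recited by Claim 1 (populating a binary decision tree at a root node, propagating an unlabeled example by predicates, and estimating a partition's relative size) can be sketched in a few lines. The following Python is purely illustrative; the `Node` class, the predicate, and the sample values are hypothetical and are not drawn from the application or from the cited art:

```python
# Illustrative sketch only: hypothetical Node class, predicate, and data.

class Node:
    def __init__(self, predicate=None, left=None, right=None):
        self.predicate = predicate   # None marks a leaf node
        self.left, self.right = left, right
        self.examples = []           # unlabeled examples propagated here

def propagate(root, example):
    """Propagate one unlabeled example from the root down to a leaf."""
    node = root
    while node.predicate is not None:
        node = node.left if node.predicate(example) else node.right
    node.examples.append(example)
    return node

def relative_size(leaf, total):
    """Estimate a leaf partition's relative size within the sample set."""
    return len(leaf.examples) / total

# A two-leaf tree: the root predicate x < 0.5 sends examples left, else right.
left, right = Node(), Node()
root = Node(predicate=lambda x: x < 0.5, left=left, right=right)
samples = [0.1, 0.2, 0.6, 0.9]            # four unlabeled examples
for x in samples:
    propagate(root, x)
print(relative_size(left, len(samples)))  # 2 of 4 examples -> 0.5
```

Note the antecedent-basis defect identified under § 112(b) is visible even here: "the first leaf node" presumes a leaf that no earlier limitation introduced.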
Step 2A Prong 2, Step 2B

The additional element processor-implemented invokes a computer or other machinery merely as a tool to perform an existing process (see MPEP 2106.05(f), "apply it"). The additional element performance metric of an artificial intelligence (AI) model does not amount to more than generally linking the use of a judicial exception to a particular field of use (see MPEP 2106.05(h), "limit the use of the abstract idea to a particular technological environment"). The additional element an artificial intelligence (AI) model that is trained based on a sample set of examples (E) invokes a computer or other machinery merely as a tool to perform an existing process (see MPEP 2106.05(f), "apply it"). The additional element automatically invokes a computer or other machinery merely as a tool to perform an existing process (see MPEP 2106.05(f), "apply it"). The claim lacks additional elements that integrate it into a practical application or provide significantly more, so it is directed to an abstract idea and is ineligible.

Regarding Claim 2

Step 1

The rejection of Claim 1 is incorporated.

Step 2A Prong 1

The claim recites wherein the at least one unlabeled example that is added to populate the binary decision tree is selected from a plurality of unlabeled examples that are added at the root node of the binary decision tree, which is a mental process. The claim recites the at least one unlabeled example is propagated from the root node to the first leaf node in the binary decision tree by applying a predicate at each parent node along a path from the root node to the first leaf node, which is a mental process. Thus, the claim recites an abstract idea.

Step 2A Prong 2, Step 2B

The claim lacks additional elements that integrate it into a practical application or provide significantly more, so it is directed to an abstract idea and is ineligible.

Regarding Claim 3

Step 1

The rejection of Claim 1 is incorporated. 
Step 2A Prong 1

The claim recites wherein the at least one unlabeled example is propagated from the root node to the first leaf node by applying a root predicate to the unlabeled example at the root node to obtain a logical value, which is a mental process. The claim recites assigning the at least one unlabeled example to a left child node of the root node or a right child node of the root node based on the logical value, which is a mental process. The claim recites iteratively applying predicates to the at least one unlabeled example at each child node that the at least one unlabeled example is assigned to, until the at least one unlabeled example reaches the first leaf node, which is a mental process. Thus, the claim recites an abstract idea.

Step 2A Prong 2, Step 2B

The claim lacks additional elements that integrate it into a practical application or provide significantly more, so it is directed to an abstract idea and is ineligible.

Regarding Claim 4

Step 1

The rejection of Claim 2 is incorporated.

Step 2A Prong 1

The claim recites propagating the at least one labeled example to a second partition that corresponds to a second leaf node of the binary decision tree to obtain an unbiased label estimate for an intersection of the second partition and the set of examples satisfying the predicate, which is a mental process. The claim recites wherein the at least one performance metric is derived based at least in part on the unbiased label estimate for the second partition, which is a mental process. The claim recites wherein the at least one performance metric is selected from marginale or precision, which is a mental process. Thus, the claim recites an abstract idea.

Step 2A Prong 2, Step 2B

The claim lacks additional elements that integrate it into a practical application or provide significantly more, so it is directed to an abstract idea and is ineligible.

Regarding Claim 5

Step 1

The rejection of Claim 4 is incorporated. 
Step 2A Prong 1

The claim recites modifying the binary decision tree by splitting the second leaf node into a first child leaf node and a second child leaf node based on a second logical value derived from a second predicate that is applied to the at least one labeled example that has propagated to the second leaf node, which is a mental process. Thus, the claim recites an abstract idea.

Step 2A Prong 2, Step 2B

The claim lacks additional elements that integrate it into a practical application or provide significantly more, so it is directed to an abstract idea and is ineligible.

Regarding Claim 6

Step 1

The rejection of Claim 1 is incorporated.

Step 2A Prong 1

The claim recites wherein the relative size of the first leaf node is estimated by dividing a count of unlabeled examples that have propagated to the first leaf node by a total number of unlabeled examples added at the root node, which is a mental process. Thus, the claim recites an abstract idea.

Step 2A Prong 2, Step 2B

The claim lacks additional elements that integrate it into a practical application or provide significantly more, so it is directed to an abstract idea and is ineligible.

Regarding Claim 7

Step 1

The rejection of Claim 1 is incorporated.

Step 2A Prong 1

The claim recites wherein if the at least one unlabeled example is propagated from the root node to the first leaf node in the binary decision tree, for each child node that the at least one unlabeled example is assigned to along the path between the root node and the first leaf node, a ratio of unlabeled examples assigned to the each child node to the number of unlabeled examples at its parent node is determined, and a product of the ratio at the each child node is estimated to be the relative size of the first leaf node, which is a mental process. Thus, the claim recites an abstract idea. 
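The two size estimates characterized above as mental processes (Claim 6's direct ratio and Claim 7's product of per-node assignment ratios) reduce to simple counting arithmetic. A minimal sketch, with hypothetical counts chosen only for illustration:

```python
# Hypothetical counts for illustration; not data from the application.

def relative_size_direct(leaf_count, root_count):
    """Claim 6 style: count of examples at the leaf divided by the
    total number of examples added at the root."""
    return leaf_count / root_count

def relative_size_ratio_product(path_counts):
    """Claim 7 style: product, over each child node on the root-to-leaf
    path, of (examples assigned to the child) / (examples at its parent)."""
    size = 1.0
    for parent_count, child_count in path_counts:
        size *= child_count / parent_count
    return size

# 1000 examples at the root; 400 reach the first child; 100 reach the leaf.
direct = relative_size_direct(100, 1000)
product = relative_size_ratio_product([(1000, 400), (400, 100)])
print(direct, product)  # both estimate 0.1
```

When every example on the path is counted, the two estimates agree by construction; the ratio-product form matters when counts are tracked incrementally per node.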
Step 2A Prong 2, Step 2B

The claim lacks additional elements that integrate it into a practical application or provide significantly more, so it is directed to an abstract idea and is ineligible.

Regarding Claim 8

Step 1

The rejection of Claim 5 is incorporated.

Step 2A Prong 1

The claim recites performing an incremental update by propagating a subset of the plurality of unlabeled examples to the second leaf node, which is a mental process. The claim recites estimating a relative size of the first child leaf node after the incremental update is completed, which is a mental process. Thus, the claim recites an abstract idea.

Step 2A Prong 2, Step 2B

The claim lacks additional elements that integrate it into a practical application or provide significantly more, so it is directed to an abstract idea and is ineligible.

Regarding Claim 9

Step 1

The rejection of Claim 1 is incorporated.

Step 2A Prong 1

The claim recites estimating a count of unlabeled examples to be added to populate the binary decision tree based on a demand for a number of unlabeled examples needed to propagate down to the first leaf node to achieve a preset target minimum of unlabeled examples at the first leaf node, based on a historical proportion split at each node along the path from the root node to the first leaf node, which is a mental process. Thus, the claim recites an abstract idea.

Step 2A Prong 2, Step 2B

The claim lacks additional elements that integrate it into a practical application or provide significantly more, so it is directed to an abstract idea and is ineligible.

Regarding Claim 10

Step 1

The rejection of Claim 1 is incorporated.

Step 2A Prong 1

The claim recites wherein the at least one performance metric is selected from any of marginale, precision, recall, and F1 score, which is a mental process. Thus, the claim recites an abstract idea. 
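Of the metrics listed in Claim 10, precision, recall, and F1 score have standard count-based definitions; "marginale" is the claim's own term and is not computed here. A sketch with hypothetical true/false positive and false negative counts:

```python
# Standard metric definitions; the counts below are hypothetical.

def precision(tp, fp):
    """Fraction of positive predictions that are correct."""
    return tp / (tp + fp)

def recall(tp, fn):
    """Fraction of actual positives that are recovered."""
    return tp / (tp + fn)

def f1_score(tp, fp, fn):
    """Harmonic mean of precision and recall."""
    p, r = precision(tp, fp), recall(tp, fn)
    return 2 * p * r / (p + r)

# 8 true positives, 2 false positives, 2 false negatives.
p, r, f = precision(8, 2), recall(8, 2), f1_score(8, 2, 2)
print(p, r, f)  # all three equal 0.8, up to floating-point rounding
```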
Step 2A Prong 2, Step 2B

The claim lacks additional elements that integrate it into a practical application or provide significantly more, so it is directed to an abstract idea and is ineligible.

Regarding Claim 11

Step 1

Claim 11 recites a system, and thus the claimed invention falls within a statutory category of invention.

Step 2A Prong 1

The claim recites deriving at least one ... metric, which is a mental process. The claim recites estimating a relative size of a first partition of the sample set of examples (E), which is a mental process. The claim recites populating a binary decision tree by adding at least one unlabeled example from the sample set of examples (E) at a root node of the binary decision tree, which is a mental process. The claim recites partitioning the sample set of examples (E) into the first partition that comprises a subset of the sample set of examples (E), which is a mental process and/or a mathematical concept when interpreting partition as an operation on a mathematical model (specification, [0044]: "The partition model may be represented by a mathematical model for data types, which is the binary decision tree. The partition model may be built incrementally as the binary decision tree, and defined by a root node, child nodes and leaf nodes of the binary decision tree"). The claim recites propagating the at least one unlabeled example from the root node to the first leaf node in the binary decision tree, wherein the first partition comprises the subset of the sample set of examples (E) that have propagated to the first leaf node of the binary decision tree, which is a mental process when interpreting the instant propagate as a step of assignment (specification, [0008]: "the at least one unlabeled example is propagated from the root node to the first leaf node by applying a root predicate ... and assigning the at least one unlabeled example to a ... child node"). 
The claim recites estimating the relative size of the first partition that corresponds to the first leaf node to derive at least one performance metric of the AI model, which is a mental process. Thus, the claim recites an abstract idea.

Step 2A Prong 2, Step 2B

The additional element a processor; and a non-transitory computer readable storage medium storing one or more sequences of instructions, which when executed by the processor, performs a method invokes a computer or other machinery merely as a tool to perform an existing process (see MPEP 2106.05(f), "apply it"). The additional element performance metric of an artificial intelligence (AI) model does not amount to more than generally linking the use of a judicial exception to a particular field of use (see MPEP 2106.05(h), "limit the use of the abstract idea to a particular technological environment"). The additional element an artificial intelligence (AI) model that is trained based on a sample set of examples (E) invokes a computer or other machinery merely as a tool to perform an existing process (see MPEP 2106.05(f), "apply it"). The additional element automatically invokes a computer or other machinery merely as a tool to perform an existing process (see MPEP 2106.05(f), "apply it"). The claim lacks additional elements that integrate it into a practical application or provide significantly more, so it is directed to an abstract idea and is ineligible.

Claims 12-20, dependent on Claim 11, incorporate the rejection of Claim 11. Claims 12-20 incorporate substantively all the limitations of Claims 2-10, respectively, in system form and are rejected under the same rationales.

Claim Rejections - 35 USC § 102

The following is a quotation of the appropriate paragraphs of 35 U.S.C. 
102 that form the basis for the rejections under this section made in this Office action:

A person shall be entitled to a patent unless – (a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.

Claims 1-3, 6, 11-13, and 16 are rejected under 35 U.S.C. 102(a)(1) as being anticipated by Nuti, et al., "An Explainable Bayesian Decision Tree Algorithm" (hereinafter "Nuti").

Regarding Claim 1, Nuti teaches: A processor-implemented method (Nuti, p. 9, 5 Numerical Examples: "The GMT results are generated with Java although we also provide the Python module with integration into scikit-learn") for deriving at least one performance metric of an artificial intelligence (AI) model that is trained based on a sample set of examples (E) (Nuti, p. 9, 5.1 UCI Data Sets: "We test the GMT on some data sets from the University of California, Irvine (UCI) database [9]. We compute the accuracy the DT, RF, and GMT" and p. 9, Table 1: "Accuracy of DT, RF, and GMT for several data sets. ... Results are sorted by relative performance, starting from highest accuracy difference between GMT and RF," where Nuti's relative accuracy of RF [random forest] corresponds to the instant performance metric) by estimating a relative size of a first partition of the sample set of examples (E) (Nuti, p. 14, Algorithm 2, Find the modal partition in Ω_Π for the classification problem, line 20, which uses the estimated prior distribution γ as count0_arr to determine the observations of the lower partition), the method comprising: populating a binary decision tree by adding ... at a root node of the binary decision tree (Nuti, p. 
3, 2 Bayesian Trees Overview: "This conditional distribution is assumed to be encoded under a tree structure created following a set of simple recursive rules based on {x_i}_{i=1}^n, namely the tree generation process: starting at the root, we determine if we are to expand the current node with two leaves under a predefined probability") .... at least one unlabeled example from the sample set of examples (E) (Nuti, p. 3, 2 Bayesian Trees Overview: "We define D = {x_i, y_i}_{i=1}^n a data set of n independent observations. Points x = x_1, …, x_d in R^d describe the features of each observation whose outcome y is randomly sampled from Y_x. ... The data set D is sampled from a data generation process. We divide this process into two steps: first, a point x is sampled in R^d; second, the outcome Y_x is sampled given x," where Nuti's points correspond to the instant examples, where the points are sampled and unlabeled when added to the root, as in p. 14, Algorithm 2, Find the modal partition in Ω_Π for the classification problem, line 4, "Adds category outcomes"); partitioning the sample set of examples (E) into the first partition that comprises a subset of the sample set of examples (E) (Nuti, p. 15, Appendix, Algorithm 3, Fill the Bayesian greedy-modal tree and return the tree non-normalized log-probability, lines 11-12, where dataset D is split into two subsets and assigned to a partition, and p. 4, 3 Partition Probability Space: "A partition Π = {M_1, …, M_k} of M divides M into disjoint subsets M_1, …, M_k such that M = ∪ M_w. ... To construct binary trees, we reduce the space of partitions Ω_Π to the trivial partition Π_0 = {M}, and partitions of the form Π_{r,m} = {{x ∈ M such that x_r ≤ h_m}, {x ∈ M such that x_r > h_m}} for some real h_m and some dimension r = 1, …, d"); propagating the at least one unlabeled example from the root node to the first leaf node in the binary decision tree (Nuti, p. 
15, Appendix, Algorithm 3, Fill the Bayesian greedy-modal tree and return the tree non-normalized log-probability, line 3 and line 10, where FILL_TREE is called with the root node as context, and where D contains unlabeled examples), wherein the first partition comprises the subset of the sample set of examples (E) that have propagated to the first leaf node of the binary decision tree (Nuti, p. 15, Appendix, Algorithm 3, lines 12 and 21, where line 21 indicates that partition Π_U is assigned to a leaf node); and automatically (Nuti, p. 9, 5 Numerical Examples: "The GMT results are generated with Java although we also provide the Python module with integration into scikit-learn") estimating the relative size of (Nuti, p. 14, Algorithm 2, line 20, which uses the estimated prior distribution γ for count0_arr to determine the observations of the lower partition) the first partition that corresponds to the first leaf node (Nuti, p. 3, Bayesian Trees Overview: "given an additional prior on the partition space, we can obtain the probability of each partition given D ... While P(Π|D) is the probability of a partition, the information contained in each partition set is the posterior of the Y parameters. In Figure 1, the posterior distributions are Beta(4, 1)," where partition probability for a leaf is determined at line 6 of p. 15, Algorithm 3) to derive at least one performance metric of the AI model (Nuti, p. 9, 5.1 UCI Data Sets: "We test the GMT on some data sets from the University of California, Irvine (UCI) database [9]. We compute the accuracy the DT, RF, and GMT," where Nuti's test-time accuracy is computed according to the constructed tree of Alg. 3).

Regarding Claim 11, Nuti teaches: A system ... 
comprising: a processor; and a non-transitory computer readable storage medium storing one or more sequences of instructions, which when executed by the processor, performs a method (Nuti, p. 9, 5 Numerical Examples: "The GMT results are generated with Java although we also provide the Python module with integration into scikit-learn," where a processor and medium are inherent in generating results using Java or Python) comprising: precisely those steps recited by the method of Claim 1.

Regarding Claim 2, the rejection of Claim 1 is incorporated. Nuti teaches: wherein the at least one unlabeled example that is added to populate the binary decision tree is selected from a plurality of unlabeled examples that are added at the root node of the binary decision tree (Nuti, p. 15, Appendix, Algorithm 3, Fill the Bayesian greedy-modal tree and return the tree non-normalized log-probability, line 3 and line 10, where FILL_TREE is called with the root node as context, and where D contains unlabeled examples), and the at least one unlabeled example is propagated from the root node to the first leaf node in the binary decision tree by applying a predicate at each parent node along a path from the root node to the first leaf node (Nuti, p. 15, Appendix, Algorithm 3, lines 12-13, where data partition Π_L contains an example and child_L is a leaf node).

Regarding Claim 3, the rejection of Claim 1 is incorporated. Nuti teaches: wherein the at least one unlabeled example is propagated from the root node to the first leaf node ... at the root node (Nuti, p. 15, Appendix, Algorithm 3, lines 12-13, where data partition Π_L contains an example and child_L is a leaf node) by applying a root predicate to the unlabeled example ... to obtain a logical value (Nuti, p. 
15, Appendix, Algorithm 3, Fill the Bayesian greedy-modal tree and return the tree non-normalized log-probability, line 11, where Nuti's split comprises a less-than predicate, as in p. 4, 3 Partition Probability Space: "To construct binary trees, we reduce the space of partitions Ω_Π to ... partitions of the form Π_{r,m} = {{x ∈ M such that x_r ≤ h_m}, {x ∈ M such that x_r > h_m}} for some real h_m and some dimension r = 1, …, d"), and assigning the at least one unlabeled example to a left child node of the root node (Nuti, p. 15, Appendix, Algorithm 3, line 13, where Nuti's child_L corresponds to the instant left child) or a right child node of the root node based on the logical value (Nuti, p. 15, Appendix, Algorithm 3, line 16, where Nuti's child_U corresponds to the instant right child), and iteratively applying predicates to the at least one unlabeled example at each child node that the at least one unlabeled example is assigned to, until the at least one unlabeled example reaches the first leaf node (Nuti, p. 15, Appendix, Algorithm 3, lines 18 and 23, where Nuti's recursive algorithm with a leaf node as a base case corresponds to the instant iteratively until reaching a leaf).

Regarding Claim 6, the rejection of Claim 1 is incorporated. Nuti teaches: wherein the relative size of the first leaf node is estimated by dividing a count of unlabeled examples that have propagated to the first leaf node by a total number of unlabeled examples added at the root node (Nuti, p. 
14, Algorithm 2, Find the modal partition in Ω_Π for the classification problem, line 20, which uses the estimated prior distribution γ for count0_arr to determine the observations of the lower partition, divided by p_count, which represents the total number of samples when called on the root node).

Claims 12, 13, and 16 incorporate substantively all limitations of Claims 2, 3, and 6, respectively, in system form and are rejected under the same rationales.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

This application currently names joint inventors. In considering patentability of the claims the examiner presumes that the subject matter of the various claims was commonly owned as of the effective filing date of the claimed invention(s) absent any evidence to the contrary. Applicant is advised of the obligation under 37 CFR 1.56 to point out the inventor and effective filing dates of each claim that was not commonly owned as of the effective filing date of the later invention in order for the examiner to consider the applicability of 35 U.S.C. 102(b)(2)(C) for any potential 35 U.S.C. 102(a)(2) prior art against the later invention.

The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:

1. Determining the scope and contents of the prior art.
2. 
Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.

Claims 4, 5, 10, 14, 15, 18, and 20 are rejected under 35 U.S.C. 103 as being unpatentable over Nuti, et al., "An Explainable Bayesian Decision Tree Algorithm" (hereinafter "Nuti") in view of Tanha, et al., "Semi-supervised self-training for decision tree classifiers" (hereinafter "Tanha").

Regarding Claim 4, the rejection of Claim 2 is incorporated. Nuti teaches: selecting a first predicate (Nuti, p. 15, Appendix, Algorithm 3, Fill the Bayesian greedy-modal tree and return the tree non-normalized log-probability, line 11, which splits data according to the comparison predicates) and pseudo-randomly selecting at least one example from the sample set of examples (E) which satisfies the predicate (Nuti, p. 15, Appendix, Algorithm 3, line 11, which splits data according to the predicates) and labeling the at least one selected example for which a first logical value obtained by applying the first predicate to the at least one example is true to obtain at least one labeled example (Nuti, p. 15, Appendix, Algorithm 3, lines 12 or 15, which call FIND_MODAL_PARTITION of p. 14, Algorithm 2, Find the modal partition in Ω_Π for the classification problem, which assigns an outcome at lines 4-6, where Nuti's outcome corresponds to the instant label); and propagating the at least one labeled example to a second partition that corresponds to a second leaf node of the binary decision tree (Nuti, p. 
15, Appendix, Algorithm 3, Fill the Bayesian greedy-modal tree and return the tree non-normalized log-probability, lines 19 and 24, which create further partitions and eventually a leaf node in the base case of the recursion) to obtain an unbiased label estimate for an intersection of the second partition and the set of examples satisfying the predicate (Nuti, p. 3, 2 Bayesian Trees Overview: "The data set D is sampled from a data generation process. We divide this process into two steps: first, a point x is sampled in R^d; second, the outcome Y_x is sampled given x. ... All points in the same set of a partition Π share the same outcome distribution, i.e. Y does not depend on x," where Nuti's independent observation corresponds to the instant unbiased), wherein the at least one performance metric is derived based at least in part on the unbiased label estimate for the second partition (Nuti, p. 9, 5.1 UCI Data Sets: "We test the GMT on some data sets from the University of California, Irvine (UCI) database [9]. We compute the accuracy the DT, RF, and GMT," where Nuti's test-time accuracy is computed according to the constructed tree of Alg. 3, using the unbiased observation samples).

Nuti teaches wherein the at least one performance metric is accuracy. Nuti does not explicitly teach wherein the at least one performance metric is selected from marginale or precision. However, Tanha teaches: wherein the at least one performance metric is selected from marginale or precision (Tanha, p. 368, 8 Multiclass classification: "Table 10 compares the results of the standard decision tree learner (DT) to its self-training version (ST-DT) and the same for ... ensemble classifiers RF.... We further present the precision (P), recall (R), and the area under curve (AUC) of some datasets" and p. 369, Table 11, Detailed classification accuracy by classes for Balance and Cmc datasets using self-training with single and ensemble classifiers, containing columns for precision (P)). 
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of Nuti regarding wherein the at least one performance metric is accuracy with those of Tanha regarding wherein the at least one performance metric is selected from marginale or precision. The motivation to do so would be to facilitate demonstrating further aspects of performance for AI models (Tanha, p. 368, 8 Multiclass classification: "We further present the precision (P), recall (R), and the area under curve (AUC) of some datasets in terms of each class to show the improvements in more details").

Regarding Claim 5, the rejection of Claim 4 is incorporated. The Nuti/Tanha combination teaches: modifying the binary decision tree by splitting the second leaf node into a first child leaf node and a second child leaf node (Nuti, p. 15, Appendix, Algorithm 3, Fill the Bayesian greedy-modal tree and return the tree non-normalized log-probability, lines 23-24, where Nuti's child node is split during the recursive call into child_L and child_U) based on a second logical value derived from a second predicate that is applied to the at least one labeled example that has propagated to the second leaf node (Nuti, p. 15, Appendix, Algorithm 3, lines 15-16, where data partition Π_U is created according to the greater-than predicate, as in p. 4, 3 Partition Probability Space: "To construct binary trees, we reduce the space of partitions Ω_Π to ... partitions of the form Π_{r,m} = {{x ∈ M such that x_r ≤ h_m}, {x ∈ M such that x_r > h_m}} for some real h_m and some dimension r = 1, …, d").

Regarding Claim 10, the rejection of Claim 1 is incorporated. Nuti teaches wherein the at least one performance metric is accuracy. 
Nuti does not explicitly teach wherein the at least one performance metric is selected from any of marginale, precision, recall, and F1 score. However, Tanha teaches: wherein the at least one performance metric is selected from any of marginale, precision, recall, and F1 score (Tanha, p. 368, 8 Multiclass classification: "Table 10 compares the results of the standard decision tree learner (DT) to its self-training version (ST-DT) and the same for ... ensemble classifiers RF.... We further present the precision (P), recall (R), and the area under curve (AUC) of some datasets" and p. 369, Table 11, Detailed classification accuracy by classes for Balance and Cmc datasets using self-training with single and ensemble classifiers, containing columns for precision (P)).

It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of Nuti regarding wherein the at least one performance metric is accuracy with those of Tanha regarding wherein the at least one performance metric is selected from any of marginale, precision, recall, and F1 score. The motivation to do so would be to facilitate demonstrating further aspects of performance for AI models (Tanha, p. 368, 8 Multiclass classification: "We further present the precision (P), recall (R), and the area under curve (AUC) of some datasets in terms of each class to show the improvements in more details").

Claims 14, 15, and 20 incorporate substantively all limitations of Claims 4, 5, and 10, respectively, in system form and are rejected under the same rationales.

Claims 8 and 18 are rejected under 35 U.S.C. 103 as being unpatentable over Nuti, et al., "An Explainable Bayesian Decision Tree Algorithm" (hereinafter "Nuti") in view of Yang, et al., "Incrementally optimized decision tree for noisy big data" (hereinafter "Yang").

Regarding Claim 8, the rejection of Claim 5 is incorporated. Nuti teaches: performing an ...
update by propagating a subset of the plurality of unlabeled examples to the second leaf node (Nuti, p. 15, Appendix, Algorithm 3, Fill the Bayesian greedy-modal tree and return the tree non-normalized log-probability, lines 19 and 24, which create further partitions and eventually a leaf node in the base case of the recursion). Nuti teaches performing an incremental update by propagating a subset of the plurality of unlabeled examples to the second leaf node. Nuti does not explicitly teach performing an incremental update ... and estimating ... the first child leaf node after the incremental update is completed.

However, Yang teaches: performing an incremental update ... and estimating ... the first child leaf node after the incremental update is completed (Yang, p. 39, 3.1 Functional Tree Leaf: "Naive Bayes classifier chooses the class with the maximum possibility computed by Naïve Bayes, as the predictive class in a leaf. The formula of Naïve Bayes is [Eq. 10] OCD [Observed Class Distribution] of leaf with value x_{ij} is updated incrementally," where Yang's update to the leaf reasonably suggests an update to a previously calculated leaf subsequent to new data).

It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of Nuti regarding performing an incremental update by propagating a subset of the plurality of unlabeled examples to the second leaf node with those of Yang regarding performing an incremental update and estimating the first child leaf node after the incremental update is completed. The motivation to do so would be to enable use of updated models in domains where data volumes are of a scale not allowing full model re-computation (Yang, p. 37, 2.2 Big Data: "For modern decision model, the data appears in massive format and being generated at a huge scale daily. How to keep the latest model efficiently is an open problem.
For frequently updating model, re-computing historical data is not applicable if the database contains large millions of records. In order to solve this trade-off, this paper proposes an incremental tree induction with a multi-objective optimization").

Claim 18 incorporates substantively all limitations of Claim 8 in system form and is rejected under the same rationale.

Conclusion

Claims 7, 9, 17, and 19 are rejected only under 35 U.S.C. 101 and are not rejected under 35 U.S.C. 102. A complete search of Claims 7, 9, 17, and 19 did not uncover any prior art that teaches or fairly suggests:

7. The processor-implemented method of claim 1, wherein if the at least one unlabeled example is propagated from the root node to the first leaf node in the binary decision tree, for each child node that the at least one unlabeled example is assigned to along the path between the root node and the first leaf node, a ratio of unlabeled examples assigned to the each child node to the number of unlabeled examples at its parent node is determined, and a product of the ratio at the each child node is estimated to be the relative size of the first leaf node.

9. The processor-implemented method of claim 1, further comprising: estimating a count of unlabeled examples to be added to populate the binary decision tree based on a demand for a number of unlabeled examples needed to propagate down to the first leaf node to achieve a preset target minimum of unlabeled examples at the first leaf node, based on a historical proportion split at each node along the path from the root node to the first leaf node.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to ROBERT N DAY whose telephone number is (703) 756-1519. The examiner can normally be reached M-F 9-5. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool.
To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Kakali Chaki, can be reached at (571) 272-3719. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/R.N.D./
Examiner, Art Unit 2122

/KAKALI CHAKI/
Supervisory Patent Examiner, Art Unit 2122
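For reference, the relative-size estimate recited in claim 7 (found free of the cited prior art) reduces to multiplying, along the root-to-leaf path, the fraction of unlabeled examples routed to each child. A minimal sketch with hypothetical counts, not code from the application:

```python
# Illustrative sketch of claim 7's estimate: the relative size of a leaf
# is the product, over each node on the root-to-leaf path, of the ratio
# (examples routed to the child) / (examples at its parent).

def relative_leaf_size(path_counts):
    """path_counts: list of (parent_count, child_count) pairs along the path."""
    size = 1.0
    for parent, child in path_counts:
        size *= child / parent
    return size

# Hypothetical: root saw 1000 unlabeled examples, 400 were routed toward
# this leaf at the first split, and 100 of those reached the leaf.
est = relative_leaf_size([(1000, 400), (400, 100)])  # 0.4 * 0.25 = 0.1
```

Because the ratios are taken from unlabeled examples only, the estimate needs no labels, which is consistent with the claim's focus on propagating unlabeled examples.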

Prosecution Timeline

Nov 29, 2022
Application Filed
Jan 07, 2026
Non-Final Rejection — §101, §102, §103, §112 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12406181
METHOD, DEVICE, AND COMPUTER PROGRAM PRODUCT FOR UPDATING MODEL
2y 5m to grant • Granted Sep 02, 2025
Patent 12229685
MODEL SUITABILITY COEFFICIENTS BASED ON GENERATIVE ADVERSARIAL NETWORKS AND ACTIVATION MAPS
2y 5m to grant • Granted Feb 18, 2025
Based on the examiner's 2 most recent grants.


Prosecution Projections

Expected OA Rounds: 1-2
Grant Probability: 23%
With Interview: 46% (+23.2%)
Median Time to Grant: 4y 3m
PTA Risk: Low
Based on 22 resolved cases by this examiner. Grant probability derived from career allow rate.
