Prosecution Insights
Last updated: April 19, 2026
Application No. 18/210,412

PROCESSING DEVICE

Non-Final OA (§101, §103)
Filed: Jun 15, 2023
Examiner: MEYER, JACQUELINE CHRISTINE
Art Unit: 2144
Tech Center: 2100 — Computer Architecture & Software
Assignee: NEC Corporation
OA Round: 1 (Non-Final)
Grant Probability: 62% (Moderate)
OA Rounds: 1-2
To Grant: 4y 3m
With Interview: 99%

Examiner Intelligence

Career Allow Rate: 62% (8 granted / 13 resolved; +6.5% vs TC avg)
Interview Lift: +67.5% (strong; resolved cases with interview)
Avg Prosecution (typical timeline): 4y 3m; 24 currently pending
Total Applications (career history): 37, across all art units

Statute-Specific Performance

§101: 30.1% (-9.9% vs TC avg)
§103: 44.5% (+4.5% vs TC avg)
§102: 9.9% (-30.1% vs TC avg)
§112: 12.7% (-27.3% vs TC avg)
Black line = Tech Center average estimate • Based on career data from 13 resolved cases
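As a consistency check, the Tech Center average implied by each card can be recovered as the rate minus its "vs TC avg" delta. A short sketch, using only the figures shown above (illustrative arithmetic, not the product's code):

```python
# Recover the implied Tech Center average from each statute's rate and its
# "vs TC avg" delta (rate - delta). Values are read off the cards above.
rates = {
    "101": (30.1, -9.9),
    "103": (44.5, +4.5),
    "102": (9.9, -30.1),
    "112": (12.7, -27.3),
}
implied_tc_avg = {k: round(rate - delta, 1) for k, (rate, delta) in rates.items()}
print(implied_tc_avg)  # every statute implies the same 40.0% average
```

All four deltas resolve to the same 40.0% figure, consistent with the note that a single Tech Center average estimate (the black line) is used for comparison.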

Office Action

§101 §103
DETAILED ACTION

This non-final office action is responsive to claims filed on June 15, 2023. Claims 1-10 are pending. Claims 1, 9, and 10 are independent.

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Information Disclosure Statement

The information disclosure statement (IDS) submitted on June 16, 2023 is in compliance with the provisions of 37 CFR 1.97. Accordingly, the information disclosure statement is being considered by the examiner.

Specification

The disclosure is objected to because of the following informalities: Paragraph 0088 reads "Note that the specifying unit 652 may perform processing similar to that performed by the specifying unit 354 described int he first exemplary embodiment based on the structure data, and exclude a candidate value of the unknown feature" but should read "Note that the specifying unit 652 may perform processing similar to that performed by the specifying unit 354 described in the first exemplary embodiment based on the structure data, and exclude a candidate value of the unknown feature." Appropriate correction is required.

Claim Objections

Claims 9 and 10 are objected to because of the following informalities: Both claims 9 and 10 read "information procesing device" but should read "information processing device". Appropriate correction is required.

Claim Rejections - 35 USC § 101

35 U.S.C. 101 reads as follows: Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.

Claims 1-4, 7, and 9-10 are rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more.
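The §101 analysis that follows characterizes decision-tree evaluation as something a person can do mentally, step by step. As a rough illustration of what that evaluation involves, here is a toy sketch (the tree, feature names, and thresholds are hypothetical, not the claimed device):

```python
# Toy decision tree: each internal node tests one feature against a
# threshold; leaves hold a label. Evaluating it is exactly the
# branch-following the rejection says can be done in the human mind.
tree = {
    "feature": "temperature",
    "threshold": 85,
    "left": {"label": "mild"},    # taken when temperature <= 85
    "right": {"label": "hot"},    # taken when temperature > 85
}

def classify(node, sample):
    """Follow branches until a leaf is reached; return its label."""
    while "label" not in node:
        branch = "left" if sample[node["feature"]] <= node["threshold"] else "right"
        node = node[branch]
    return node["label"]

print(classify(tree, {"temperature": 78}))  # mild
```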
Regarding claims 1, 9, and 10: Claims 1, 9, and 10 recite a processing device, a method, and a non-transitory computer-readable medium, respectively, and therefore fall within the statutory categories of a machine, a process, and a machine, respectively. The claims also recite acquiring score information from a decision tree and specifying a possible range that a value of an unknown feature takes on a basis of the acquired score information. The foregoing can practically be performed in the human mind. For instance, a person is capable of mentally following the nodes and branches of a decision tree to get to a final value to acquire the information from the decision tree. A person is also capable of looking at an unknown feature and mentally specifying a possible range of values for that feature, e.g., the feature is the temperature during a summer day and the person would specify that the possible range of values could be 70°F-95°F. Therefore, these limitations fall within the "mental processes" grouping of abstract ideas.

The judicial exception is not integrated into a practical application. The claims recite a decision tree that is a learned model, while claim 1 recites a memory and a processor, claims 9 and 10 recite an information processing device, and claim 10 recites a non-transitory computer-readable medium, which is nothing more than mere instructions to apply the judicial exception using a generic computer. The claims as a whole, looking at the additional elements individually and in combination, do not integrate the judicial exception into a practical application. Therefore, the claims are directed to an abstract idea. The claims do not recite additional elements that amount to significantly more than the judicial exception.
In particular, the claims recite a decision tree that is a learned model, while claim 1 recites a memory and a processor, claims 9 and 10 recite an information processing device, and claim 10 recites a non-transitory computer-readable medium, which is nothing more than mere instructions to apply the judicial exception using a generic computer. The additional elements, taken individually and in combination, do not result in the claims, as a whole, amounting to significantly more than the identified judicial exception. The claims are not patent eligible.

Regarding claim 2: Claim 2 further elaborates on the processing device of claim 1. The claim also recites creating a plurality of pieces of candidate data and acquiring the score information by inputting the candidate data into the decision tree. The foregoing can practically be performed in the human mind. For instance, given the range of temperatures specified above, a person is capable of creating possible temperatures for the following day as being 78°F, 82°F, and 91°F. Then, a person could use a decision tree to mentally acquire the score information about each of those possible temperatures, e.g., how likely it is that each of those temperatures would occur the following day. Therefore, these limitations fall within the "mental processes" grouping of abstract ideas. There are no additional elements recited.

Regarding claim 3: Claim 3 further elaborates on the processing device of claim 2. The claim also recites specifying the possible range that the value of the unknown feature takes by excluding the candidate value on a basis of a value according to a label corresponding to the candidate data. The foregoing can practically be performed in the human mind. A person is specifying a range of values by excluding certain values. For instance, based on the inferred possible temperatures of 78°F, 82°F, and 91°F, the possible range of values would be 78-91, which would therefore exclude any values below or above that range. Therefore, these limitations fall within the "mental processes" grouping of abstract ideas.

The judicial exception is not integrated into a practical application. The claim recites that the training data includes a plurality of feature values and labels and that the inference data represents a value according to a ratio of a number of pieces of data, which is nothing more than generally linking the judicial exception to a particular field of use. The claim also recites that specifying the possible range is done by a processor configured to execute the program instructions, which is nothing more than mere instructions to apply the judicial exception using a generic computer. The claim as a whole, looking at the additional elements individually and in combination, does not integrate the judicial exception into a practical application. Therefore, the claim is directed to an abstract idea. The claim does not recite additional elements that amount to significantly more than the judicial exception. In particular, claim 3 recites that the training data includes a plurality of feature values and labels and that the inference data represents a value according to a ratio of a number of pieces of data, which is nothing more than generally linking the judicial exception to a particular field of use. The claim also recites that specifying the possible range is done by a processor configured to execute the program instructions, which is nothing more than mere instructions to apply the judicial exception using a generic computer. The additional elements, taken individually and in combination, do not result in the claim, as a whole, amounting to significantly more than the identified judicial exception. The claim is not patent eligible.

Regarding claim 4: Claim 4 further elaborates on the processing device of claim 3.
The claim also recites excluding the candidate value in which the value according to the label corresponding to the candidate data in the inference result becomes equal to or smaller than a given threshold. The foregoing can practically be performed in the human mind. For instance, a person given a list of candidate values can compare these to a particular threshold and, with the aid of pencil and paper, remove them from the list of candidate values. Therefore, these limitations fall within the "mental processes" grouping of abstract ideas. There are no additional elements recited.

Regarding claim 7: Claim 7 further elaborates on the processing device of claim 1 and, therefore, also falls within the "mental processes" grouping of abstract ideas. The judicial exception is not integrated into a practical application. The claim recites "instruct", which is nothing more than mere instructions to apply the judicial exception using a generic computer. The claim as a whole, looking at the additional elements individually and in combination, does not integrate the judicial exception into a practical application. Therefore, the claim is directed to an abstract idea. The claim does not recite additional elements that amount to significantly more than the judicial exception. In particular, claim 7 recites "instruct", which is nothing more than mere instructions to apply the judicial exception using a generic computer. The additional elements, taken individually and in combination, do not result in the claim, as a whole, amounting to significantly more than the identified judicial exception. The claim is not patent eligible.

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action: A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1, 2, 5, 6, 9, and 10 are rejected under 35 U.S.C. 103 as being unpatentable over Tsang et al. (Decision Trees for Uncertain Data), hereinafter Tsang, in view of Quinlan (Induction of Decision Trees), hereinafter Quinlan.

Regarding claim 1, Tsang teaches the processing device: a memory containing program instructions; and a processor connected to the memory, wherein the processor is configured to execute the program instructions to: (Tsang, page 74, column 2, paragraph 2: "The algorithms described above have been implemented in Java using JDK 1.6 and a series of experiments was performed on a PC with an Intel Core 2 Duo 2.66 GHz CPU and 2 GB of main memory, running Linux kernel 2.6.22 i686.")

on a basis of the acquired score information, specifying a possible range that a value of an unknown feature takes, the unknown feature being a part of a plurality of features included in the training data. (Tsang, page 70, column 1, paragraph 1: "We model uncertainty information by fitting appropriate error models on to the point data.
For each tuple t_i and for each attribute A_j, the point value v_{i,j} reported in a data set is used as the mean of a pdf f_{i,j}, defined over an interval [a_{i,j}, b_{i,j}]. The range of values for A_j (over the whole data set) is noted and the width of [a_{i,j}, b_{i,j}] is set to w · |A_j|, where |A_j| denotes the width of the range for A_j and w is a controlled parameter." – The range of values for each attribute A is analogous to the possible range that an unknown feature takes, where the attributes A are analogous to the features of the training data, and the uncertainty information from the pdf is analogous to the score information that was acquired.)

Tsang does not explicitly teach: acquire, from a decision tree that is a learned model and includes a plurality of nodes, score information representing a value according to a number of pieces of data that fell to each of the nodes, among a plurality of pieces of training data used for training of the decision tree; and

However, Quinlan teaches: acquire, from a decision tree that is a learned model and includes a plurality of nodes, score information representing a value according to a number of pieces of data that fell to each of the nodes, among a plurality of pieces of training data used for training of the decision tree; and (Quinlan, page 98, paragraph 3: "Conceptually, suppose that, along with the object to be classified, we have been passed a token with some value T. In the situation above, each branch of A_i is then explored in turn, using a token of value T · ratio_i, i.e. the given token value is distributed across all possible values in proportion to the ratios above. The value passed to a branch may be distributed further by subsequent tests on other attributes for which this object has unknown values." – The token is analogous to the score information that is taught by Tsang above. The value passed to branches is analogous to the nodes of the decision tree, which is used to determine the attribute of the unknown values.)

Quinlan is considered analogous to the claimed invention as it is in the same field of endeavor, machine learning. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date to have modified Tsang, which already teaches a processing device to specify a range of values an unknown feature can take from a decision tree but does not explicitly teach acquiring the score information from the nodes of the decision tree, to include the teachings of Quinlan, which does teach acquiring the score information from the nodes of the decision tree, in order to better classify unknown values with a decision tree at higher rates. (Quinlan, page 99, paragraph 1)

Regarding claim 2, Tsang and Quinlan teach the processing device of claim 1, as cited above. Tsang further teaches: create a plurality of pieces of candidate data on a basis of information representing a value of a known feature having been held and information representing a candidate value of the unknown feature; and (Tsang, page 70, column 1, paragraph 1: "To generate the pdf f_{i,j}, we consider two options. The first is uniform distribution, which implies f_{i,j}(x) = (b_{i,j} - a_{i,j})^(-1). The other option is Gaussian distribution, for which we use (1/4)(b_{i,j} - a_{i,j}) as the standard deviation. In both cases, the pdf is generated using s sample points in the interval." – The pdf generated using s sample points is analogous to the candidate data on a basis of information representing a value of a known feature and information representing a candidate of an unknown feature.)

acquire the score information by acquiring a plurality of inference results that are inferred as a result of inputting the plurality of pieces of created candidate data to the decision tree, respectively.
(Tsang, page 70, column 1, paragraph 2: "As we have explained, under our uncertainty model, classification results are probabilistic. Following [3], we take the class label of the highest probability as the final class label." – Tsang and Quinlan already teach acquiring the score information, while the class label of the highest probability as the final class label is analogous to using the inference results to acquire the score information.)

Regarding claim 5, Tsang and Quinlan teach the processing device of claim 1, as cited above. Tsang further teaches: specify a leaf node corresponding to the score information including a value that becomes equal to or smaller than a given threshold, and (Tsang, page 66, column 2, paragraph 3: "Each internal node n of a decision tree is associated with an attribute A_{j_n} and a split point z_n ∈ dom(A_{j_n}), giving a binary test v_{0,j_n} ≤ z_n. An internal node has exactly two children, which are labeled 'left' and 'right,' respectively. Each leaf node m in the decision tree is associated with a discrete probability distribution P_m over C. For each c ∈ C, P_m(c) gives a probability reflecting how likely a tuple assigned to leaf node m would have a class label of c." – The leaf nodes specifying a probability are analogous to specifying a leaf node corresponding to the score information, while the binary test is analogous to the value being equal to or smaller than a given threshold.)

specify the possible range that the value of the unknown feature takes on a basis of the score information corresponding to the node existing on a route between the specified leaf node and a root node that is a first branch in the decision tree. (Tsang, page 66, column 2, paragraph 4: "To determine the class label of a given test tuple t_0 = (v_{0,1}, …, v_{0,k}, ?), we traverse the tree starting from the root node until a leaf node is reached. When we visit an internal node n, we execute the test v_{0,j_n} ≤ z_n and proceed to the left child or the right child accordingly. Eventually, we reach a leaf node m. The probability distribution P_m associated with m gives the probabilities that t_0 belongs to each class label c ∈ C. For a single result, we return the class label c ∈ C that maximizes P_m(c)." – Following the internal node to a leaf node to return the class label that maximizes P_m is analogous to specifying the possible range of the value the unknown feature takes by a route between a specified leaf node and a root node.)

Tsang does not explicitly teach: acquire the score information by acquiring structure information of the decision tree corresponding to each of the nodes included in the decision tree, the score information representing a value corresponding to a ratio of a number of pieces of data corresponding to each label to the training data, on the node; and

However, Quinlan further teaches: acquire the score information by acquiring structure information of the decision tree corresponding to each of the nodes included in the decision tree, (Quinlan, page 86, paragraph 3: "Leaves of a decision tree are class names, other nodes represent attribute-based tests with a branch for each possible outcome. In order to classify an object, we start at the root of the tree, evaluate the test, and take the branch appropriate to the outcome. The process continues until a leaf is encountered, at which time the object is asserted to belong to the class named by the leaf. Taking the decision tree of Figure 2, this process concludes that the object which appeared as an example at the start of this section, and which is not a member of the training set, should belong to class P.")

the score information representing a value corresponding to a ratio of a number of pieces of data corresponding to each label to the training data, on the node; and (Quinlan, page 98, paragraph 1: "One strategy which has been found to work well is as follows. Let A be an attribute with values {A1, A2, ... Av}. For some collection C of objects, let the numbers of objects with value Ai of A be pi and ni, and let pu and nu denote the numbers of objects of class P and N respectively that have unknown values of A. When the information gain of attribute A is assessed, these objects with unknown values are distributed across the values of A in proportion to the relative frequency of these values in C. Thus the gain is assessed as if the true value of pi were given by pi + pu · ratio_i, where ratio_i = (pi + ni) / Σ_i (pi + ni), and similarly for ni.")

Regarding claim 6, Tsang and Quinlan teach the processing device of claim 5, as cited above. Tsang further teaches: specify the possible range that the value of the unknown feature takes by checking whether or not there is a node serving as a branch by the unknown feature among the nodes existing on the route between the leaf node and the root node. (Tsang, page 66, column 2, paragraph 4: "To determine the class label of a given test tuple t_0 = (v_{0,1}, …, v_{0,k}, ?), we traverse the tree starting from the root node until a leaf node is reached. When we visit an internal node n, we execute the test v_{0,j_n} ≤ z_n and proceed to the left child or the right child accordingly. Eventually, we reach a leaf node m. The probability distribution P_m associated with m gives the probabilities that t_0 belongs to each class label c ∈ C. For a single result, we return the class label c ∈ C that maximizes P_m(c)." – Traversing the decision tree from the root node until a leaf node is reached is analogous to checking if there is a node existing on the route between the leaf node and the root node, as each internal node being visited and proceeding left or right accordingly will check if there is a node to follow from there. Specifying the possible range of values based on this is already taught by Tsang above.)

Regarding claim 9, claim 9 has all the same limitations of claim 1, which are taught by Tsang and Quinlan – see claim 1 above.

Regarding claim 10, claim 10 has all the same limitations of claim 1, which are taught by Tsang and Quinlan – see claim 1 above. Tsang also teaches: A non-transitory computer-readable medium storing thereon a program comprising instructions for causing an information procesing device to execute processing to: (Tsang, page 74, column 2, paragraph 2: "The algorithms described above have been implemented in Java using JDK 1.6 and a series of experiments was performed on a PC with an Intel Core 2 Duo 2.66 GHz CPU and 2 GB of main memory, running Linux kernel 2.6.22 i686.")

Claims 3, 4, and 7 are rejected under 35 U.S.C. 103 as being unpatentable over Tsang in view of Quinlan, further in view of Rahman et al. (Missing value imputation using decision trees and decision forests by splitting and merging records: Two novel techniques), hereinafter Rahman.

Regarding claim 3, Tsang and Quinlan teach the processing device of claim 2, as cited above. Tsang further teaches: the training data includes a plurality of feature values and labels, (Tsang, page 66, column 2, paragraph 3: "In our model, a data set consists of d training tuples, t_1, t_2, …, t_d, and k numerical (real-valued) feature attributes, A_1, …, A_k. The domain of attribute A_j is dom(A_j).
Each tuple t_i is associated with a feature vector V_i = (v_{i,1}, v_{i,2}, …, v_{i,k}) and a class label c_i, where v_{i,j} ∈ dom(A_j) and c_i ∈ C, the set of all class labels.")

Tsang does not explicitly teach: the inference result represents a value according to a ratio of a number of pieces of data corresponding to each of the labels to the training data, on a leaf node to which candidate data belongs among the nodes included in the decision tree, and the processor is configured to execute the program instructions to specify the possible range that the value of the unknown feature takes by excluding the candidate value on a basis of a value according to a label corresponding to the candidate data in the inference result.

However, Quinlan further teaches: the inference result represents a value according to a ratio of a number of pieces of data corresponding to each of the labels to the training data, on a leaf node to which candidate data belongs among the nodes included in the decision tree, and (Quinlan, page 98, paragraph 1: "One strategy which has been found to work well is as follows. Let A be an attribute with values {A1, A2, ... Av}. For some collection C of objects, let the numbers of objects with value Ai of A be pi and ni, and let pu and nu denote the numbers of objects of class P and N respectively that have unknown values of A. When the information gain of attribute A is assessed, these objects with unknown values are distributed across the values of A in proportion to the relative frequency of these values in C. Thus the gain is assessed as if the true value of pi were given by pi + pu · ratio_i, where ratio_i = (pi + ni) / Σ_i (pi + ni), and similarly for ni.")

Tsang and Quinlan do not explicitly teach: the processor is configured to execute the program instructions to specify the possible range that the value of the unknown feature takes by excluding the candidate value on a basis of a value according to a label corresponding to the candidate data in the inference result.

However, Rahman teaches: the processor is configured to execute the program instructions to specify the possible range that the value of the unknown feature takes by excluding the candidate value on a basis of a value according to a label corresponding to the candidate data in the inference result. (Rahman, page 64, column 2, paragraph 1: "Whereas the proposed techniques (DMI and SiMI) impute a missing value by using only a group of records (instead of all records) that are similar to the record having the missing value. Another existing technique IBLLS also imputes a missing value by using the similar records. However, it also uses only a subset of all the attributes where the correlation between the attribute having the missing value and each attribute belonging to the subset is high. It completely ignores all other attributes having a correlation lower than a threshold value." – The missing values being imputed when the correlation is high, while ignoring the other attributes with a correlation below a threshold, is analogous to specifying the possible range based on excluding candidate values.)

Rahman is considered analogous to the claimed invention as it is in the same field of endeavor, machine learning. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date to have modified Tsang and Quinlan, which already teach specifying a range of values an unknown feature takes using a decision tree but do not explicitly teach excluding candidate values based on inference results, to include the teachings of Rahman, which does teach excluding candidate values based on inference results, in order to "improve the imputation accuracy." (Rahman, page 64, column 1, paragraph 3)

Regarding claim 4, Tsang, Quinlan, and Rahman teach the processing device of claim 3, as cited above. Tsang and Quinlan do not explicitly teach: specify the possible range that the value of the unknown feature takes by excluding the candidate value in which the value according to the label corresponding to the candidate data in the inference result becomes equal to or smaller than a given threshold. However, Rahman further teaches: specify the possible range that the value of the unknown feature takes by excluding the candidate value in which the value according to the label corresponding to the candidate data in the inference result becomes equal to or smaller than a given threshold. (Rahman, page 64, column 2, paragraph 1: "Whereas the proposed techniques (DMI and SiMI) impute a missing value by using only a group of records (instead of all records) that are similar to the record having the missing value. Another existing technique IBLLS also imputes a missing value by using the similar records. However, it also uses only a subset of all the attributes where the correlation between the attribute having the missing value and each attribute belonging to the subset is high. It completely ignores all other attributes having a correlation lower than a threshold value.")

Regarding claim 7, Tsang and Quinlan teach the processing device of claim 1, as cited above.
Tsang and Quinlan do not explicitly teach: instruct how to output the score information by the decision tree on a basis of a result of specifying. However, Rahman teaches: instruct how to output the score information by the decision tree on a basis of a result of specifying. (Rahman, page 55, Algorithm 1: the second line and Step 5 show that the algorithm to impute missing values provides instructions on how that information is output.)

Rahman is considered analogous to the claimed invention as it is in the same field of endeavor, machine learning. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date to have modified Tsang and Quinlan, which already teach acquiring score information using a decision tree but do not explicitly teach instructions on how to output the score information, to include the teachings of Rahman, which does teach instructions on how to output the score information. Since a user is able to compare imputation accuracies, being able to view the output of the score information would allow them to compare the results. (Rahman, page 54, bottom of column 1 and top of column 2)

Claim 8 is rejected under 35 U.S.C. 103 as being unpatentable over Tsang in view of Quinlan, further in view of Fredrikson et al. (Model Inversion Attacks that Exploit Confidence Information and Basic Countermeasures), hereinafter Fredrikson. Fredrikson was cited in applicant's IDS submitted on June 15, 2023.

Regarding claim 8, Tsang and Quinlan do not explicitly teach: perform risk assessment of the decision tree on a basis of a result of specifying. However, Fredrikson teaches: perform risk assessment of the decision tree on a basis of a result of specifying. (Fredrikson, page 6, column 1, paragraph 2: "To evaluate the risk posed by existing, publicly-available models, we used trees obtained from the BigML service [4]. Because the sensitive feature in each dataset is binary-valued, we use precision and recall to measure the effectiveness of the attack. In our setting, precision measures the fraction of predicted 'Yes' responses that are correct, and recall measures the fraction of 'Yes' responses in the dataset that are predicted by the adversary. We also measure the accuracy, which is defined as the fraction of correct responses.")

Fredrikson is considered analogous to the claimed invention as it is in the same field of endeavor, machine learning. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date to have modified Tsang and Quinlan, which already teach specifying a possible range of values of an unknown feature but do not explicitly teach performing risk assessment based on the specifying, to include the teachings of Fredrikson, which does teach performing risk assessment based on the specifying, in order to "avoid these kinds of MI attacks with negligible degradation to utility." (Fredrikson, abstract)

Conclusion

The prior art made of record and not relied upon is considered pertinent to applicant's disclosure: Qin et al. (DTU: A Decision Tree for Uncertain Data); Hulett (Decision tree analysis for the risk averse organization).

Any inquiry concerning this communication or earlier communications from the examiner should be directed to JACQUELINE MEYER whose telephone number is (703) 756-5676. The examiner can normally be reached M-F 8:00 am - 4:30 pm EST. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Tamara Kyle, can be reached at 571-272-4241.
The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/J.C.M./
Examiner, Art Unit 2144

/TAMARA T KYLE/
Supervisory Patent Examiner, Art Unit 2144
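Abstracting away the legal analysis, the pipeline the rejection maps onto the claims (create candidate records for the unknown feature, score each through a learned decision tree, and exclude candidates whose label score falls at or below a threshold, with the survivors bounding the feature's possible range) can be sketched as follows. Everything here, from the tree encoding to the threshold, is a hypothetical illustration, not code from the application or from any cited reference:

```python
# Hypothetical sketch of the claimed range-specification pipeline:
#   1. build candidate records from known feature values plus candidate
#      values for the unknown feature,
#   2. run each candidate through a learned decision tree to get a score
#      (here: the leaf's training-data ratio for a target label),
#   3. exclude candidates scoring at or below a threshold; the survivors
#      bound the possible range of the unknown feature.

def leaf_score(node, record, label):
    """Walk the tree; return the target label's ratio at the reached leaf."""
    while "ratios" not in node:
        branch = "left" if record[node["feature"]] <= node["threshold"] else "right"
        node = node[branch]
    return node["ratios"].get(label, 0.0)

def possible_range(tree, known, unknown_feature, candidates, label, threshold):
    kept = [c for c in candidates
            if leaf_score(tree, {**known, unknown_feature: c}, label) > threshold]
    return (min(kept), max(kept)) if kept else None

# Toy learned tree; the leaf ratios mimic "pieces of training data that
# fell to each node" per label. Reusing the office action's own 78/82/91
# temperature example as the candidate values.
tree = {
    "feature": "temp", "threshold": 85,
    "left":  {"ratios": {"rain": 0.7, "dry": 0.3}},
    "right": {"ratios": {"rain": 0.1, "dry": 0.9}},
}
print(possible_range(tree, known={}, unknown_feature="temp",
                     candidates=[78, 82, 91], label="rain", threshold=0.5))
# 91 routes right (rain ratio 0.1, excluded); survivors 78 and 82
```

With these toy numbers the candidate 91 is excluded, so the specified range is (78, 82), mirroring the exclusion-by-threshold reading of claims 3 and 4 above.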

Prosecution Timeline

Jun 15, 2023
Application Filed
Mar 23, 2026
Non-Final Rejection — §101, §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12585981
MANAGING AN INSTALLED BASE OF ARTIFICIAL INTELLIGENCE MODULES
Granted Mar 24, 2026 (2y 5m to grant)
Patent 12468941
SYSTEMS AND METHODS FOR DYNAMICS-AWARE COMPARISON OF REWARD FUNCTIONS
Granted Nov 11, 2025 (2y 5m to grant)
Study what changed to get past this examiner. Based on 2 most recent grants.


Prosecution Projections

Expected OA Rounds: 1-2
Grant Probability: 62%
With Interview: 99% (+67.5%)
Median Time to Grant: 4y 3m
PTA Risk: Low
Based on 13 resolved cases by this examiner. Grant probability derived from career allow rate.
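The note above says the grant probability comes straight from the career allow rate; the underlying arithmetic is just the granted/resolved ratio. A minimal sketch (illustrative arithmetic only; how the 99% with-interview figure is combined is not stated above, so it is not derived here):

```python
# Grant probability as the career allow rate: 8 granted of 13 resolved.
# This is illustrative arithmetic, not the product's code.
granted, resolved = 8, 13
grant_probability = granted / resolved
print(f"{grant_probability:.0%}")  # 62%
```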
