Prosecution Insights
Last updated: April 19, 2026
Application No. 18/120,816

PREDICTION MODEL TRAINING METHOD, INFORMATION PREDICTION METHOD AND CORRESPONDING DEVICE

Non-Final OA — §101, §102, §103
Filed: Mar 13, 2023
Examiner: WENG, PEI YONG
Art Unit: 2141
Tech Center: 2100 — Computer Architecture & Software
Assignee: Samsung Electronics Co., Ltd.
OA Round: 1 (Non-Final)
Grant Probability: 79% (Favorable)
Expected OA Rounds: 1-2
Time to Grant: 3y 3m
With Interview: 99%

Examiner Intelligence

Grants 79% — above average
Career Allow Rate: 79% (506 granted / 637 resolved), +24.4% vs TC avg
Interview Lift: +23.1% across resolved cases with interview — a strong lift
Typical Timeline: 3y 3m average prosecution; 18 applications currently pending
Career History: 655 total applications across all art units
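One consistent reading of the interview figures, shown below as a quick calculation: if the "+23.1% interview lift" is the percentage-point gap between allow rates with and without an interview, the numbers reconcile with the 99% with-interview figure. This interpretation is an assumption; the page does not define "interview lift."

```python
# Assumed reading (not documented by the source): "interview lift" is the
# percentage-point gap between allow rates with vs. without an interview.
with_interview = 0.99        # "99% With Interview"
lift = 0.231                 # "+23.1% Interview Lift"
without_interview = with_interview - lift
print(f"implied allow rate without interview: {without_interview:.1%}")
# -> 75.9%, which sits plausibly below the blended 79% career allow rate
```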

Statute-Specific Performance

§101: 12.4% (-27.6% vs TC avg)
§102: 19.2% (-20.8% vs TC avg)
§103: 49.3% (+9.3% vs TC avg)
§112: 8.8% (-31.2% vs TC avg)
Tech Center averages are estimates. Based on career data from 637 resolved cases.
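A back-of-envelope check of how these headline figures fit together, assuming the "vs TC avg" deltas are simple percentage-point differences (an assumption; the page itself calls its Tech Center averages estimates):

```python
# Career allow rate from the raw counts shown above
granted, resolved = 506, 637
allow_rate = granted / resolved
print(f"career allow rate: {allow_rate:.1%}")            # ~79.4%, shown as 79%

# Implied Tech Center average, assuming "+24.4% vs TC avg" is a point difference
tc_avg = allow_rate - 0.244
print(f"implied TC-average allow rate: {tc_avg:.1%}")    # ~55.0%

# Statute-specific example: §103 at 49.3% with "+9.3% vs TC avg"
print(f"implied TC-average for §103: {0.493 - 0.093:.1%}")  # 40.0%
```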

Office Action

Rejections under §101, §102, and §103
DETAILED ACTION

This action is responsive to the following communication: Non-Provisional Application filed Mar. 13, 2023. Claims 1-15 are pending in the case. Claims 1 and 12-15 are independent claims.

Claim Rejections - 35 USC § 101

35 U.S.C. 101 reads as follows: Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.

Regarding independent claim 15:

Step 2A, Prong 1: The claim recites limitations directed to methods of organizing human activity (involving fundamental economic principles or practices including insurance; managing personal behavior or relationships or interactions between people) and/or mental processes (including concepts performed in the human mind or with the help of pen and paper, such as an observation, evaluation, judgment, and/or opinion) and/or mathematical concepts (including mathematical relationships, mathematical formulas or equations, mathematical calculations), i.e., the claim recites judicial exception(s) or abstract idea(s):

“15. An information prediction method” (method of organizing human activity and/or mental process); “the feature extraction layers configured to extract user features, and the first central point information representing an average user feature of user devices within the first group” (method of organizing human activity and/or mental process); “obtaining the prediction model corresponding to the user device based on the feature extraction layers, the first central point information and user data of the user device; and predicting information using the obtained prediction model” (method of organizing human activity and/or mental process).

Step 2A, Prong 2: Additional elements in the claim fail to integrate the judicial exception(s) into a practical application. Additional elements: “executed by a user device” (generic system/computer component or software model used as a tool, see MPEP 2106.05(f)); “representing an average user feature of user devices” (generic system/computer component or software model used as a tool, see MPEP 2106.05(f)); “obtaining the prediction model corresponding to the user device based on the feature extraction layers, the first central point information and user data of the user device” (generic system/computer component or software model used as a tool, see MPEP 2106.05(f)). The additional elements as disclosed above (e.g., reciting generic computer components, reciting limitations for a specific field of use or technological environment, or reciting insignificant extra-solution activity) fail to integrate the judicial exception(s) into a practical application. See MPEP 2106.05(h), MPEP 2106.05(f) and MPEP 2106.05(g).

Step 2B: The claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception.
Additional elements: “executed by a user device” (generic system/computer component or software model used as a tool, see MPEP 2106.05(f)); “representing an average user feature of user devices” (generic system/computer component or software model used as a tool, see MPEP 2106.05(f)); “obtaining the prediction model corresponding to the user device based on the feature extraction layers, the first central point information and user data of the user device” (generic system/computer component or software model used as a tool, see MPEP 2106.05(f)). The additional elements as disclosed above, which recite activities in a merely generic manner (e.g., at a high level of generality), in a particular technological field or environment, or as insignificant extra-solution activity, are considered by the courts to be well-understood, routine, conventional activities; thus, the additional elements in combination with the abstract idea are not sufficient to amount to significantly more than the judicial exceptions (see MPEP 2106.05(d)(II) on reciting activities per MPEP 2106.05(f), 2106.05(h) and 2106.05(g)). It is unclear that there is any improvement to a computer or technological field in the claim as recited, even though there may be improvement to the abstract idea itself. Therefore, claim 15 is rejected under 35 U.S.C. 101 as directed to an abstract idea without significantly more.

Claim Objections

Claim 10 is objected to because of the following informalities: Claim 10 recites the term “predict predicting user attribute information.” Appropriate correction is required.

Claim Rejections - 35 USC § 102

The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action: A person shall be entitled to a patent unless – (a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.

Claims 1 and 3-14 are rejected under 35 U.S.C. 102(a)(1) as being anticipated by Nakayama et al. (hereinafter Nakayama), U.S. Patent Publication No. 2021/0406782.

With respect to independent claim 1, Nakayama teaches a prediction model training method, which is performed by a server, comprising: transmitting, to a plurality of training devices, a model to be trained by the plurality of training devices (see e.g., Abstract, Para [42]-[43] and Claim 1: “each of the aggregators sends the semi-global machine learning model to the associated agents, and each of the agents updates the local machine learning model with the semi-global machine learning model received from the associated aggregator.”), wherein the model to be trained comprises feature extraction layers configured to extract user features and prediction layers configured to perform information prediction (see e.g., Para [12]-[16], [21]-[29], [122]-[123]: “Models trained in advance with prepared training data. Comparisons only against the static base model. Static models are trained for limited set of real world scenarios. Single model is deployed.”); classifying the plurality of training devices into at least one group based on the user features extracted by the training devices (see e.g., Para [32]-[40], [145]: “multiple aggregators coupled to the communication network and each uniquely associated with the agents, each aggregator comprising a model collector collecting the local machine learning models from the associated agents”; “The personalization can be extendedly interpreted as a model aggregation for each group of users who share a similar behavioral pattern. The group-level model management and preparation virtually cluster all the users into multiple groups by incorporating a feature vector-based clustering method. This enables the customization and advanced control of ML models distributed by aggregators for different types of users.”); receiving, from the plurality of training devices, model parameters obtained by the respective training devices training the model to be trained (see e.g., Para [141]-[144]), wherein the model parameters comprise first parameters corresponding to the feature extraction layers and second parameters corresponding to the prediction layers (see e.g., Para [21]-[29], [119]-[123]: “The network comprises three sequentially connected blocks of layers. (a) The embedding block 80 takes the process model state as an input and converts it into a common representation by accounting for the heterogeneity of process models. The embedding block 80 additionally predicts process variables that are not part of the process state but can be descriptive of the process performance and other metrics that an operator cares about. Such variables are determined by each agent's operator. (b) The inference block 90, whose parameters are aggregated through the federated learning process, uses the common representation of the input to produce an output. Since the inference block 90 is agnostic to process model variations, it can be generated by aggregating inference blocks 90 across the network. (c) The transfer block 100 converts the common representation of the output into an output value understood by the particular process model. The transfer block 100 can also predict process variables aside from ones assigned to the embedding block 80. Process variables are used by inference and transform blocks 80, 90 to calculate the output.”); performing global federated aggregation based on the first parameters to obtain a global federated aggregation result (see e.g., Fig. 10, Para [50], [110], [137]-[144]: “the system may further comprise a model repository storing the global machine learning models previously created by the system and meta-data indicating tasks used for training the respective global machine learning models”); performing intra-group federated aggregation for each of the at least one group, based on the second parameters of one or more of the plurality of training devices in a respective group, among each of the at least one group, to obtain an intra-group federated aggregation result (see e.g., Para [32]-[40], [144]-[145]: “this two model approach, for each agent we first randomly initialize the two models”); and transmitting, to each of the plurality of training devices, the global federated aggregation result and the intra-group federated aggregation result associated with the respective group of the respective training device (see e.g., Para [32]-[40], [144]-[145]), so that the plurality of training devices update the first parameters of the feature extraction layers based on the global federated aggregation result and update the second parameters of the prediction layers based on the intra-group federated aggregation result (see e.g., Para [141]-[144]: “The personalization can be extendedly interpreted as a model aggregation for each group of users who share a similar behavioral pattern. …”).

With respect to dependent claim 3, Nakayama teaches that the transmitting the model to be trained to the training devices comprises: transmitting first information corresponding to the feature extraction layers for extracting user features to the plurality of training devices; determining pre-trained groups of the plurality of training devices, respectively, based on a pre-trained grouping result; and transmitting, to the plurality of training devices, second information corresponding to the prediction layers based on the pre-trained groups (see e.g., Para [98], [125]: “The group of cluster aggregators communicate with other group(s) of aggregators periodically to exchange their semi-global machine learning models to create a global machine learning model. This communication enables each user to utilize the training results of the users in other groups by receiving a most-updated AI model that approximates a consistent global AI model.”).

With respect to dependent claim 6, Nakayama teaches that the performing global federated aggregation based on the first parameters of the respective training devices comprises: weighted averaging the first parameters of the respective training devices to obtain the global federated aggregation result (see e.g., Para [137]-[142]).

With respect to dependent claim 7, Nakayama teaches that the performing intra-group federated aggregation on the second parameters of the respective training devices in the group in each of the at least one group comprises: weighted averaging the second parameters of the respective training devices in the group in each of the at least one group to obtain the intra-group federated aggregation result (see e.g., Para [137]-[142]).

With respect to dependent claim 8, Nakayama teaches that the training method further comprises: updating the grouping result (see e.g., Para [97]-[98]).
With respect to dependent claim 9, Nakayama teaches that the updating the grouping result comprises: calculating a similarity between each of the plurality of training devices and each of the at least one group, respectively; and updating the grouping result based on the similarity (see e.g., Para [135]-[138]).

With respect to dependent claim 11, Nakayama teaches repeatedly performing the operations of: receiving the model parameters, performing the global federated aggregation and the intra-group federated aggregation, and transmitting the global federated aggregation result and the intra-group federated aggregation result until the end of training (see e.g., Para [144]: “Then a personalized model is obtained by combining the local model and the global model using the personalization rate, where the personalization rate measures the extent to which the personalized model mixes the local and the global models. Then the personalized model is tested to check whether a certain performance criteria is met. If the criteria is not met, the global model is updated and a new round of training is started. This procedure repeats until the performance criterion is satisfied; in other words, the personalized model generalizes sufficiently well for the local dataset distribution. Finally, the personalized model for each agent is output.”).

With respect to independent claim 12, Nakayama teaches a prediction model training method, which is performed by a server (see e.g., Para [7], [97]: “The approximated global model created through this global model synthesis process is called a semi-global model. The term ‘cluster aggregator’ or ‘CA’ or ‘server’ as used herein means a system that aggregates, via a communication network, artificial intelligence (AI) models that are trained at multiple agents (defined below) and creates a cluster machine learning model from the aggregated AI models. The aggregator serves as a federated learning (FL) server. The term ‘agent’, ‘device’, or ‘client’ as used herein means a system with distributed learning environment such as local edge server, device, tablet, among others, in order to train machine learning models locally and send them to an associated aggregator.”), the method comprising: transmitting a model to be trained to a plurality of training devices, the model to be trained comprising feature extraction layers configured to extract user features and prediction layers configured to perform information prediction (see e.g., Para [12]-[16]: “Models trained in advance with prepared training data. …”); classifying the plurality of training devices into at least one group based on the user features extracted by the plurality of training devices and transmitting a grouping result to the plurality of training devices; receiving model parameters obtained by the plurality of training devices training the model to be trained, wherein the model parameters comprise first parameters corresponding to the feature extraction layers (see e.g., Para [32]-[40], [145]: “multiple aggregators coupled to the communication network and each uniquely associated with the agents, each aggregator comprising a model collector collecting the local machine learning models from the associated agents”; “The personalization can be extendedly interpreted as a model aggregation for each group of users who share a similar behavioral pattern. …”); performing global federated aggregation on the first parameters of the respective training devices to obtain a global federated aggregation result (see e.g., Fig. 10, Para [50], [110], [137]-[144]: “the system may further comprise a model repository storing the global machine learning models previously created by the system and meta-data indicating tasks used for training the respective global machine learning models”); and transmitting the global federated aggregation result to the plurality of training devices so that the plurality of training devices update the feature extraction layers based on the global federated aggregation result (see e.g., Para [141]-[144]).

With respect to independent claim 13, Nakayama teaches a prediction model training method, which is performed by a server (see e.g., Para [7], [97], quoted above for claim 12), comprising: receiving model parameters obtained by a plurality of training devices in a first group, among at least one group, training the model to be trained, the model to be trained comprising feature extraction layers configured to extract user features and prediction layers configured to perform information prediction (see e.g., Para [32]-[40], [145]), and the model parameters comprise second parameters corresponding to the prediction layers (see e.g., Para [12]-[16]); performing intra-group federated aggregation on the second parameters of the plurality of training devices in the first group to obtain an intra-group federated aggregation result (see e.g., Para [32]-[40], [144]-[145]: “this two model approach, for each agent we first randomly initialize the two models”); and transmitting the intra-group federated aggregation result to the plurality of training devices in the first group so that the plurality of training devices update the prediction layers based on the intra-group federated aggregation result (see e.g., Para [141]-[144]).

With respect to independent claim 14, Nakayama teaches a prediction model training method, which is performed by a training device, the method comprising: receiving a model to be trained from a server (see e.g., Abstract, Para [7], [97]), the model to be trained comprising feature extraction layers configured to extract user features and prediction layers configured to perform information prediction (see e.g., Para [12]-[16], [21]-[29], [122]-[123]); extracting a user feature using the feature extraction layers in the model to be trained, and transmitting the extracted user feature to the server to classify the training device into one of at least one group based on the user features (see e.g., Para [32]-[40], [145]: “The personalization can be extendedly interpreted as a model aggregation for each group of users who share a similar behavioral pattern. The group-level model management and preparation virtually cluster all the users into multiple groups by incorporating a feature vector-based clustering method. This enables the customization and advanced control of ML models distributed by aggregators for different types of users.”); training the model to be trained, and transmitting model parameters obtained by training to the server (see e.g., Abstract, Para [7], [21], [97]), wherein the model parameters comprise first parameters corresponding to the feature extraction layers and second parameters corresponding to the prediction layers (see e.g., Para [21]-[29], [122]-[123]: “The network comprises three sequentially connected blocks of layers. …”); receiving a global federated aggregation result and an intra-group federated aggregation result from the server (see e.g., Para [141]-[144]), wherein the global federated aggregation result is obtained by the server performing global federated aggregation on the first parameters of the respective training devices, and the intra-group federated aggregation result is obtained by the server performing intra-group federated aggregation on the second parameters of the respective training devices in the corresponding group (see e.g., Para [21]-[29], [122]-[123]); and updating the feature extraction layers based on the global federated aggregation result, and updating the prediction layers based on the intra-group federated aggregation result (see e.g., Para [141]-[144]).

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action: A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claim 2 is rejected under 35 U.S.C. 103 as being unpatentable over Nakayama.

With respect to dependent claim 2, Nakayama teaches acquiring user device information from a plurality of user devices; and selecting the plurality of training devices from the plurality of user devices based on the user device information (see e.g., Para [32]-[49], [97], [144]-[145]: “The term ‘agent’, ‘device’, or ‘client’ as used herein means a system with distributed learning environment such as local edge server, device, tablet, among others, in order to train machine learning models locally and send them to an associated aggregator.”; “multiple aggregators coupled to the communication network and each uniquely associated with the agents, each aggregator comprising [0038] a model collector collecting the local machine learning models from the associated agents; [0039] a memory storing the collected local machine learning models; and [0040] a processor creating a cluster machine learning model from the collected local machine learning models”). Nakayama does not expressly show acquiring information from user devices.
However, it would have been obvious to include this feature, because Nakayama requires the determination of an “association” between certain devices to form a group.

Claim 15 is rejected under 35 U.S.C. 103 as being unpatentable over Nakayama in view of Choe et al. (hereinafter Choe), U.S. Patent Publication No. 2021/0250510.

With respect to independent claim 15, Nakayama teaches an information prediction method, which is executed by a user device, the method comprising: receiving parameters of feature extraction layers of a prediction model and first central point information corresponding to a first group, among at least one group (see e.g., Para [141]-[144]: “The personalization can be extendedly interpreted as a model aggregation for each group of users who share a similar behavioral pattern. …”), the feature extraction layers configured to extract user features, and the first central point information representing an average user feature of user devices within the first group (see e.g., Para [137]-[138], [142]: “The standard federated learning typically assumes that all user's data come from a similar distribution so that every single agent can benefit from other's data by participating in the federated learning process. However, if the distribution of an agent's dataset drifts far away from the average distribution among all the other agents, the global model trained from federated learning might be ineffective to this agent. To resolve this problem, it is necessary to find a way to better utilize the generalization ability of the global model while not compromising the model performance for the local distribution. This motivates an introduction of the personalization module in the system of the present disclosure.”); obtaining the prediction model corresponding to the user device based on the feature extraction layers, the first central point information and user data of the user device (see e.g., Fig. 4 and Para [110]-[125]); and predicting information using the obtained prediction model (see e.g., Para [122]-[123]).

Nakayama does not expressly show the first central point information. However, Nakayama expressly indicates that an average value is used to determine drift. Furthermore, Choe teaches a similar feature (see e.g., Para [80]-[84]: “The electronic device 101 obtains a center point of the extracted features and the weight moving average is used to smooth the coordinates along the temporal image sequence 616”). Both Nakayama and Choe are directed to machine learning methods. Accordingly, it would have been obvious to the skilled artisan before the effective filing date of the claimed invention, having Nakayama and Choe in front of them, to modify the system of Nakayama to include the above feature. The motivation to combine Nakayama and Choe comes from Choe: Choe discloses the motivation to determine a center point for feature extraction so that a boundary can be defined for features (see e.g., Para [80]-[82]).

Conclusion

Any inquiry concerning this communication or earlier communications from the examiner should be directed to PEI YONG WENG, whose telephone number is (571) 270-1660. The examiner can normally be reached Mon.-Fri., 8 am to 5 pm.
If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Matthew Ell, can be reached at (571) 270-3264. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of an application may be obtained from the Patent Application Information Retrieval (PAIR) system. Status information for published applications may be obtained from either Private PAIR or Public PAIR. Status information for unpublished applications is available through Private PAIR only. For more information about the PAIR system, see http://portal.uspto.gov/external/portal. Should you have questions on access to the Private PAIR system, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free).

/PEI YONG WENG/
Primary Examiner, Art Unit 2141
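The anticipation mapping above turns on the claimed split-aggregation scheme: first parameters (feature-extraction layers) are federated globally, second parameters (prediction layers) are federated only within a device's group, groups are formed by clustering extracted user features (claims 1, 6, 7, and 9), and each group's average user feature plays the role of claim 15's "first central point information." The sketch below illustrates that scheme in Python/NumPy under stated assumptions: every name here (Device, weighted_avg, the dot-product similarity, the sample-count weights) is a hypothetical stand-in, not the applicant's or Nakayama's actual implementation.

```python
# Minimal sketch of the claimed split federated aggregation (illustrative only).
import numpy as np

rng = np.random.default_rng(0)
N_DEVICES, N_GROUPS = 8, 2
FEAT_DIM, PRED_DIM = 4, 3            # stand-ins for layer parameter shapes

class Device:
    def __init__(self):
        self.feat = rng.normal(size=FEAT_DIM)   # first params: feature layers
        self.pred = rng.normal(size=PRED_DIM)   # second params: prediction layers
        self.n_samples = int(rng.integers(10, 100))  # weight for averaging

    def local_train(self):
        # placeholder for local training on the device's own data
        self.feat += 0.1 * rng.normal(size=FEAT_DIM)
        self.pred += 0.1 * rng.normal(size=PRED_DIM)

    def user_feature(self):
        # feature the server clusters on (claim 1: grouping by user features)
        return self.feat

def weighted_avg(params, weights):
    # claims 6 and 7: weighted averaging of layer parameters
    return np.average(np.stack(params), axis=0, weights=np.asarray(weights, float))

def assign_groups(devices, centroids):
    # claim 9: similarity between each device and each group's centroid
    return [int(np.argmax([c @ d.user_feature() for c in centroids]))
            for d in devices]

devices = [Device() for _ in range(N_DEVICES)]
centroids = [rng.normal(size=FEAT_DIM) for _ in range(N_GROUPS)]

for _ in range(3):                               # claim 11: repeat until done
    for d in devices:
        d.local_train()
    groups = assign_groups(devices, centroids)

    # global aggregation over FIRST parameters (claims 1, 6)
    global_feat = weighted_avg([d.feat for d in devices],
                               [d.n_samples for d in devices])

    # intra-group aggregation over SECOND parameters (claims 1, 7)
    group_pred = {}
    for g in range(N_GROUPS):
        members = [d for d, gg in zip(devices, groups) if gg == g]
        if members:
            group_pred[g] = weighted_avg([d.pred for d in members],
                                         [d.n_samples for d in members])
            # group centroid = average user feature, i.e. claim 15's
            # "first central point information"
            centroids[g] = np.mean([d.user_feature() for d in members], axis=0)

    # devices update feature layers globally, prediction layers per group
    for d, g in zip(devices, groups):
        d.feat = global_feat.copy()
        d.pred = group_pred[g].copy()
```

Splitting the parameters this way shares a common representation across all devices while keeping prediction heads group-specific, which is the structure the claim language describes and the distinction the mapping tries to read onto Nakayama's aggregator hierarchy.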

Prosecution Timeline

Mar 13, 2023
Application Filed
Jan 11, 2026
Non-Final Rejection — §101, §102, §103
Mar 23, 2026
Interview Requested
Mar 31, 2026
Applicant Interview (Telephonic)
Apr 06, 2026
Examiner Interview Summary

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12602594
DIRECTED TRAJECTORIES THROUGH COMMUNICATION DECISION TREE USING ITERATIVE ARTIFICIAL INTELLIGENCE
Granted Apr 14, 2026 · 2y 5m to grant

Patent 12579468
TRAINING DATA SCREENING DEVICE, ROBOT SYSTEM, AND TRAINING DATA SCREENING METHOD
Granted Mar 17, 2026 · 2y 5m to grant

Patent 12572845
INTELLIGENT MACHINE-LEARNING MODEL CATALOG
Granted Mar 10, 2026 · 2y 5m to grant

Patent 12561608
APPARATUS AND METHODS FOR PREDICTING SLIPPING EVENTS FOR MICROMOBILITY VEHICLES
Granted Feb 24, 2026 · 2y 5m to grant

Patent 12555665
HOME EXERCISE PLAN PREDICTION
Granted Feb 17, 2026 · 2y 5m to grant
List based on this examiner's 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 1-2
Grant Probability: 79%
With Interview: 99% (+23.1%)
Median Time to Grant: 3y 3m
PTA Risk: Low
Based on 637 resolved cases by this examiner. Grant probability derived from career allow rate.
