Prosecution Insights
Last updated: April 19, 2026
Application No. 17/859,085

METHOD OF TRAINING NEURAL NETWORK MODEL FOR CALCULATING LEARNING ABILITY AND METHOD OF CALCULATING LEARNING ABILITY OF USER

Status: Final Rejection (§103)
Filed: Jul 07, 2022
Examiner: DIEP, DUY T
Art Unit: 2123
Tech Center: 2100 — Computer Architecture & Software
Assignee: Socra AI Inc.
OA Round: 2 (Final)
Grant Probability: 25% (At Risk)
Expected OA Rounds: 3-4
Time to Grant: 4y 2m
With Interview: 30%

Examiner Intelligence

Career Allow Rate: 25% (5 granted / 20 resolved; -30.0% vs Tech Center average)
Interview Lift: +5.5% (moderate, ~+6%) across resolved cases with interview
Typical Timeline: 4y 2m average prosecution
Currently Pending: 39
Career History: 59 total applications across all art units

Statute-Specific Performance

§101: 34.1% (-5.9% vs TC avg)
§103: 54.0% (+14.0% vs TC avg)
§102: 2.3% (-37.7% vs TC avg)
§112: 9.6% (-30.4% vs TC avg)
TC averages are estimates • Based on career data from 20 resolved cases

Office Action

§103
Notice of Pre-AIA or AIA Status: The present application, filed on or after March 16, 2013, is being examined under the first-inventor-to-file provisions of the AIA.

Response to Amendment: The arguments filed 11/04/2025 have been entered. Claims 1, 3, and 6-8 remain pending in the application. Applicant's amendments with respect to the rejections of claims 1, 3, and 6-8 under 35 U.S.C. 101, filed 06/04/2025, have been considered and are persuasive. Therefore, the previous rejections as set forth in the previous Office action are withdrawn. Applicant's amendments and arguments with respect to the rejections of claims 1, 3, and 6-8 under 35 U.S.C. 103, filed 06/04/2025, have been considered but are not persuasive. Therefore, the previous rejections as set forth in the previous Office action are maintained.

The applicant argues that the combination of Higgins and Brown fails to teach or suggest the limitations of amended independent claim 1, specifically the "randomly selecting" and "matching" steps. Regarding the random-selection limitation, the applicant contends that Brown's disclosure of a "random test design" at paragraph 46 is merely a front-end administrative process used to determine the scheme for task assignment to students, which the applicant asserts is fundamentally distinct from the claimed step of randomly selecting existing answer data from a stored "answer set". The applicant further maintains a "temporal impossibility" argument, stating that in Brown's system the "answer information" does not yet exist at the time the random design is implemented; therefore, the system cannot be "selecting" from a "plurality of answer information" that has not yet been generated or stored. Additionally, the applicant argues that the references fail to teach or suggest "matching" such randomly selected data with "score information" specifically to "generate the answer sequence" for the purpose of training an artificial neural network.
Finally, the applicant maintains that the cited prior art does not teach the specific structural relationship of using these matched sequences for neural network training, asserting that Brown's "observables" are isolated assessment results intended for student evaluation rather than a curated dataset for machine learning as required by the claim.

The applicant's arguments have been considered but are found to be unpersuasive. Under the broadest reasonable interpretation, the term "randomly selecting" is a functional limitation and is not restricted to a specific temporal window (e.g., selecting only from data archived in a database prior to the start of a process). Brown explicitly teaches a storage device 114 containing a "plurality of tasks" and a "set of potential responses" in paragraph 29: "The storage device 114 may also store a plurality of information items related to the learning system 100, such as information related to a plurality of tasks ... A task may include a question that elicits a simple response, such as a selection of an answer to the question ... Accordingly, information related to a plurality of tasks 124 may include, without limitation, ..., a set of potential responses", which constitutes a "plurality of answer information" in an "answer set". When Brown applies a "random" design to determine which of these tasks, and their corresponding answer data, are presented and processed, the system is performing a stochastic selection from that plurality. A randomized mechanism that dictates which data points are included for processing is the functional equivalent of "randomly selecting" that data. In Brown, the "answer set" consists of a plurality of potential responses stored in the task data storage. By randomly designing the test, the system inherently dictates which specific subset of question-answer data is pulled from the storage and processed in a random manner.
Whether the system picks a data point directly from a list in a random design or uses a randomized algorithm to determine which data point is retrieved, the technical result is the same: a piece of answer information has been randomly selected from a plurality of available data. Furthermore, the applicant's contention regarding the lack of "matching" is misplaced. Brown teaches that scoring a response yields an "observable" such as a percentage grade or a Boolean value (paragraph 29: "Scoring the simple response, for example, as either correct or incorrect, yields a single independent observation, or observable ... an observable may also refer to a plurality of observables, e.g., a vector or array of scored responses."). One of ordinary skill in the art would understand that the act of scoring a specific piece of answer data inherently requires the system to associate or match that data with its corresponding score; otherwise, the "observable" result could not be logically generated or attributed to the specific task. Further, the observable may be referred to as a vector or array of scored responses, thus indicating the matching of a response/answer to a score to compute a vector or array, which is analogous to the "generated sequence", as claimed. Regarding the neural network training, it would have been obvious to one of ordinary skill in the art to utilize the randomized test design with random question-answer-score vector or array data of Brown to provide the labeled training data required for the neural network model in Higgins, especially as Higgins discloses neural network training to perform automated scoring on new constructed responses that need to be scored. Using a randomized dataset is a well-known technique in machine learning to reduce bias and improve model generalization.
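The "randomly selecting" and "matching" steps at issue can be illustrated with a short sketch. This is a hypothetical illustration only; the function and variable names are my own and are not drawn from the claims or the cited references. An answer set and its score information are stored, a random subset of answer data is drawn, and each selected answer is matched with its score to form the answer sequence used as training data.

```python
import random

def generate_answer_sequence(answer_set, scores, k, seed=None):
    """Randomly select k pieces of answer data from the answer set and
    match each with its score information to form an (answer, score)
    training sequence. Illustrative sketch only."""
    rng = random.Random(seed)
    selected = rng.sample(sorted(answer_set), k)   # stochastic selection
    return [(answer, scores[answer]) for answer in selected]  # matching step

# Toy data: five stored answers with binary correctness scores (1 = correct)
answers = {"a1", "a2", "a3", "a4", "a5"}
scores = {"a1": 1, "a2": 0, "a3": 1, "a4": 1, "a5": 0}
sequence = generate_answer_sequence(answers, scores, k=3, seed=7)
```

Whether the randomness lives in the draw itself or in a randomized test design that dictates which items are retrieved, the output is the same shape: a randomly chosen subset of answer data, each element paired with its score.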
The use of Brown's data vector/array of scored responses based on the randomized test design to feed the training process of Higgins's model is a combination of prior art elements according to known methods to yield a predictable result, since Higgins's neural network model needs to be efficiently trained to be able to provide a predicted score for responses. Consequently, the combination of Brown in view of Higgins teaches the elements of the amended claim.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action: A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made. Claims 1-8 are rejected under 35 U.S.C.
103 as being unpatentable over Brown et al. (US 20160217701 A1) in view of Higgins et al. (US 20150248608 A1).

Regarding claim 1, Brown teaches the limitation "training an artificial neural network model which calculates a learning ability and is applied to a first assessment system for assessing a learning ability of a target user in real time according to an answer of the target user at a first time point," (paragraph 34: "Appropriate libraries may use an open implementation protocol that supports many different paradigms including, without limitation, such as Bayesian models, machine learning algorithms, likelihood-based algorithms, and neural network algorithms"; paragraph 35: "the assessment agent 122 may select tasks, administer a test, receive responses, generate observables, and evaluate the ability of a student with respect to the difficulty of a question or the content of a question." Brown discloses a system and method for real-time analysis and guidance of learning, wherein the method further comprises implementation of neural network algorithms as well as functions to evaluate the learning ability of students in real time based on questions and answers.) Brown teaches the limitation "acquiring an assessment database including data, which includes question information answered by a user at a second time point earlier than the first time point, answer information of the user to the question information, and score information of the user in a second assessment system, acquired from the second assessment system different from the first assessment system," (paragraph 29: "The storage device 114 may also store a plurality of information items related to the learning system 100, such as information related to a plurality of tasks (tasks 124) ... A task may include a question that elicits a simple response, such as a selection of an answer to the question.
Scoring the simple response, for example, as either correct or incorrect, yields a single independent observation, or observable. Accordingly, information related to a plurality of tasks 124 may include, without limitation, a question, the difficulty of the question, a set of potential responses, a correct answer, the time at which the question was taken"; paragraph 39: "While the learning systems 100, 200 are described above as separate embodiments, various embodiments of learning systems 100, 200 may combine or interchange components to form various learning systems according to the disclosure."; and paragraph 72: "The method 500 is performed in the context of a series of the K individual time points [t1, . . . , tk, . . . , tK], wherein at each time point tK a task is administered to a student for completion." Brown discloses a storage device to store data, suggesting the assessment database within the claim. Brown discloses that the stored data include information related to a plurality of tasks, wherein a task may include a question and an answer response, and each task may be performed at a time point. Brown also discloses a score for each response to the question, such as a correct or incorrect score. The learning system 100 of Brown suggests the second assessment system within the claim. The learning system of Brown is different from the system of Higgins, wherein the system of Higgins suggests the first assessment system within the claim based on the teaching combination below.)
Brown teaches the limitation “generating an answer sequence from the assessment database by matching the answer information with the score information and obtaining a training set based on the generated sequence, wherein generating of the answer sequence comprises (i) acquiring an answer set from the assessment database, (ii) acquiring the score information related to the answer set, (iii) randomly selecting at least one piece of answer data from among a plurality of pieces of answer information included in the answer set, and (v) generating the answer sequence by matching the randomly selected at least one piece of answer data with the score information” (paragraph 29 “A task may include a question that elicits a simple response, such as a selection of an answer to the question. Scoring the simple response, for example, as either correct or incorrect, yields a single independent observation, or observable. ... Accordingly, information related to a plurality of tasks 124 may include, without limitation, ..., a set of potential responses ... an observable may also refer to a plurality of observables, e.g., a vector or array of scored responses.”, paragraph 42 “A task may comprise a question 304 that elicits a simple response 306 from a student or examinee, such as a selection of an answer to the question 304. Subsequent processing and scoring of the response 306 (i.e., as either correct or incorrect) yields a single observation, or observable 308. The observable 308 may be used for various purposes, including for estimating the difficulty of the question, estimating the ability of the student, grading student performance on the test 302, and for simply collecting information about the student.” and paragraph 46 “the scheme determining which student receives which task(s) is called the design of the test 302. ... The design of a test 302 may also be random”. 
Brown explicitly teaches a storage device (assessment database) that may store a plurality of information items, including a set of potential responses (answer set). Brown further discloses scoring responses to obtain observables (score information) and performing tests to obtain observables, in which the design of the test is random. Under the broadest reasonable interpretation, a randomized design that dictates which data points (question-response) from a set of potential responses are included for processing is the functional equivalent of "randomly selecting" that data. By utilizing the random test design, the system inherently dictates which specific question-response data is pulled from the storage device and processed in a random manner, thus teaching randomly selecting one piece of answer data from among a plurality of pieces of answer information, as claimed. Finally, Brown teaches that these observables may be referred to as a vector or array of scored responses, which constitutes generating the answer sequence by matching the randomly selected at least one piece of answer data with the score information, as claimed. A person of ordinary skill in the art would have been motivated to use such a vector or array as training data to train a machine learning model, as Higgins teaches a training phase where scores and responses are required to train a neural network model. Combining Brown's data sampling with the model training in Higgins would have been a predictable design choice to improve the automated scoring of constructed responses.) Brown does not teach the limitation "obtaining the artificial neural network model for calculating score information of the user in the second assessment system on the basis of the answer information in the second assessment system".
However, Higgins teaches this limitation (paragraph 21: "an example computer-based system for automatically scoring a constructed response 102 generated by a user"; paragraph 38: "The convolutional neural network may be configured to generate multiple scores for the constructed response. ... For example, a given item may include a prompt that requests that the user generate a constructed response that explains the process of osmosis. The constructed response may be scored using the convolutional neural network to generate the multiple scores for the constructed response"; and paragraph 55: "The model includes an input layer configured to receive a plurality of numerical vectors that is representative of a constructed response". Higgins discloses a computer-based system which utilizes deep convolutional neural networks for automated scoring of constructed responses. Within the disclosure, Higgins discloses a convolutional neural network configured to generate scores for the constructed response, wherein the response may be one or more responses as disclosed by Brown based on the teaching combination below.) Brown does not teach the limitation "training the artificial neural network model with the training set". However, Higgins teaches this limitation (paragraph 34: "such conventional techniques for automated scoring generally include a "training" phase whereby (i) human-engineered features are extracted from human-scored responses, and (ii) the extracted features and the scores assigned to the responses are used to train a scoring model using a machine-learning application". Higgins discloses training the neural network in the training phase using training data such as responses and the scores assigned to the responses, wherein the responses and scores may be the vector or array of scored responses obtained from Brown's disclosure above based on the teaching combination below.)
Brown does not teach the limitation "an input layer configured to receive the answer sequence". However, Higgins teaches this limitation (paragraph 40: "Input data is received at nodes of an input layer of the convolutional neural network"; and paragraph 55: "The model includes an input layer configured to receive a plurality of numerical vectors that is representative of a constructed response". Higgins discloses that the convolutional neural network comprises an input layer configured to receive input data of numerical vectors representative of a constructed response.) Brown does not teach the limitation "an output layer configured to output a result representing a score value". However, Higgins teaches this limitation (paragraph 39: "further, in other examples, the convolutional neural network generates a single score"; and paragraph 40: "... and the data passes through the convolutional neural network, layer-by-layer, until the data arrives at the nodes of the output layer". Higgins discloses that the convolutional neural network comprises an output layer that generates the output of a score for the response input.) Brown does not teach the limitation "a hidden layer having a plurality of nodes connecting the input layer and the output layer". However, Higgins teaches this limitation (paragraph 40: "The layers of the convolutional neural network include ...
one or more hidden layers."; paragraph 42: "A hidden layer of nodes of the convolutional neural network may be configured to receive inputs from the convolution layer of the network via a second plurality of connections"; and paragraph 44: "The output layer of the convolutional neural network is connected to a top-most hidden layer of the network via a third plurality of connections". Higgins discloses one or more hidden layers with a plurality of nodes of the convolutional neural network, connected via a plurality of connections with the convolutional layer (which has a preceding input layer) and with the output layer, suggesting one or more hidden layers having a plurality of nodes with various connections connecting the input layer to the output layer and representing the passing of data between them.) Brown does not teach the limitation "wherein the training of the artificial neural network model comprises: adjusting weights of the nodes on the basis of a difference between the score value and the score information included in the answer sequence". However, Higgins teaches this limitation (paragraph 31: "Thus, for example, the model generation module 106 may parse each of the human-scored constructed responses 114"; and paragraph 32: "In an example, the weights of the model are determined using an optimization procedure for training convolutional neural networks ...
in an example, values for the weights are iteratively modified in order to reduce a loss function associated with scoring accuracy, such as the root-mean-squared error." Higgins discloses iteratively modifying values for the weights of the nodes in order to reduce a loss function associated with scoring accuracy, wherein a person of ordinary skill in the art would have been able to modify the weights associated with the scoring accuracy based on the difference between the scores generated by the neural network, as disclosed above, and the human-assigned scores using the root-mean-squared-error technique, which is known in the art as a means to measure the average difference of the actual data points from the predicted values.) Before the effective filing date, it would have been obvious to a person of ordinary skill in the art to incorporate the teaching of a system and method for real-time analysis and guidance of learning by Brown with the teaching of deep convolutional neural networks for automated scoring of constructed responses by Higgins. The motivation to do so is referred to in Higgins's disclosure (paragraph 76: "It should be appreciated that the scoring of constructed responses using the network 400 applies a "deep learning" approach, which reduces or eliminates the need for manual engineering of scoring features by humans. Specifically, applying the network 400 to predict a score for a constructed response does not involve the extraction of human-engineered features from the constructed response.
Instead, during the supervised training step, the convolutional neural network 400 itself identifies important characteristics of human-scored reference responses that are related to the classifications or scores assigned to the reference responses by human graders."; paragraph 71: "The use of the unsupervised pre-training step prior to the supervised training of the network 400 using the human-scored responses may allow for more efficient convergence of the convolutional neural network model." Higgins discloses using a convolutional neural network to generate scores for constructed responses, which reduces or eliminates the need for manual engineering of scoring features by humans and does not involve the extraction of human-engineered features. Furthermore, Higgins provides training steps and structure to perform the training of the neural network model, such as using an unsupervised pre-training step prior to the supervised training of the network to allow for more efficient convergence. While Brown discloses implementation of a neural network, Brown does not recite how to implement such a neural network; however, Higgins provides methods of training a convolutional neural network and using the convolutional neural network to generate scores for responses. A person of ordinary skill in the art would have been motivated to use such a vector or array from Brown as training data to train a machine learning model, as Higgins teaches a training phase where scores and responses are required to train a neural network model. Combining Brown's data sampling with the model training in Higgins would have been a predictable design choice to improve the automated scoring of constructed responses.
The learning system of Brown, which suggests the second assessment system within the claim, may incorporate the convolutional neural network as taught by Higgins to perform its training; the trained convolutional neural network may then be applied again within the system of Higgins, which suggests the first assessment system within the claim.)

Regarding claim 3: claim 3 depends on claim 1, thus the rejection of claim 1 is incorporated. Higgins teaches the limitation "inputting the answer sequence to the input layer using the training set" (paragraph 40: "Input data is received at nodes of an input layer of the convolutional neural network"; and paragraph 55: "The model includes an input layer configured to receive a plurality of numerical vectors that is representative of a constructed response". Higgins discloses that the convolutional neural network comprises an input layer configured to receive input data of numerical vectors representative of a constructed response.) Higgins teaches the limitation "acquiring the score value output through the output layer" (paragraph 39: "further, in other examples, the convolutional neural network generates a single score"; and paragraph 40: "... and the data passes through the convolutional neural network, layer-by-layer, until the data arrives at the nodes of the output layer". Higgins discloses that the convolutional neural network comprises an output layer that generates the output of a score for the response input.)

Regarding claim 6, Higgins teaches the limitation "acquiring question information in the first assessment system and answer information of a target user to questions" (paragraph 21: "an example computer-based system for automatically scoring a constructed response 102 generated by a user ...
In an example, the constructed response 102 is a textual response that is provided by the user in response to a given item (e.g., a test question, task, etc.)". Higgins discloses a system which includes a step of obtaining a constructed textual response to a given item such as a question, wherein this system may be configured by a person of ordinary skill in the art as a different system from the learning system of Brown.) Higgins teaches the limitation "acquiring target score information of the target user in the first assessment system using an artificial neural network model which calculates score information of a reference user in a second assessment system different from the first assessment system on the basis of answer information of the reference user to the questions in the second assessment system, wherein the artificial neural network model comprises" (paragraph 38: "The convolutional neural network may be configured to generate multiple scores for the constructed response. ... For example, a given item may include a prompt that requests that the user generate a constructed response that explains the process of osmosis. The constructed response may be scored using the convolutional neural network to generate the multiple scores for the constructed response". Higgins discloses that the system comprises a convolutional neural network configured to generate scores for the constructed response to a question, wherein the neural network may be configured to be used within the learning system of Brown based on the teaching combination as disclosed above.)
Higgins teaches the limitation "an input layer configured to receive the target answer information of the target user in the first assessment system" (paragraph 40: "Input data is received at nodes of an input layer of the convolutional neural network"; and paragraph 55: "The model includes an input layer configured to receive a plurality of numerical vectors that is representative of a constructed response". Higgins discloses that the convolutional neural network comprises an input layer configured to receive input data of numerical vectors representative of a constructed response.) Higgins teaches the limitation "an output layer configured to output the target score information including a score value of the target user in the first assessment system" (paragraph 39: "further, in other examples, the convolutional neural network generates a single score"; and paragraph 40: "... and the data passes through the convolutional neural network, layer-by-layer, until the data arrives at the nodes of the output layer". Higgins discloses that the convolutional neural network comprises an output layer that generates the output of a score for the response input.) Higgins teaches the limitation "a hidden layer having a plurality of nodes connecting the input layer and the output layer, and the artificial neural network model is trained by adjusting weights of the plurality of nodes with a training set including the answer information of the reference user in the second assessment system and the score information of the reference user in the second assessment system" (paragraph 31: "Thus, for example, the model generation module 106 may parse each of the human-scored constructed responses 114"; and paragraph 32: "In an example, the weights of the model are determined using an optimization procedure for training convolutional neural networks ...
in an example, values for the weights are iteratively modified in order to reduce a loss function associated with scoring accuracy, such as the root-mean-squared error."; paragraph 40: "The layers of the convolutional neural network include ... one or more hidden layers."; paragraph 42: "A hidden layer of nodes of the convolutional neural network may be configured to receive inputs from the convolution layer of the network via a second plurality of connections"; and paragraph 44: "The output layer of the convolutional neural network is connected to a top-most hidden layer of the network via a third plurality of connections". Higgins discloses one or more hidden layers with a plurality of nodes of the convolutional neural network, connected via a plurality of connections with the convolutional layer (which has a preceding input layer) and with the output layer, suggesting one or more hidden layers having a plurality of nodes with various connections connecting the input layer to the output layer and representing the passing of data between them. Higgins discloses iteratively modifying values for the weights of the nodes in order to reduce a loss function associated with scoring accuracy, wherein a person of ordinary skill in the art would have been able to modify the weights associated with the scoring accuracy based on the difference between the scores generated by the neural network, as disclosed above, and the human-assigned scores using the root-mean-squared-error technique, which is known in the art as a means to measure the average difference of the actual data points from the predicted values.) The applicant is further directed to the rejection of claim 1; because claim 6 recites similar limitations and processing steps to claim 1, the claim is rejected under the same rationale.
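The weight-adjustment step the rejection attributes to Higgins paragraph 32 (iteratively modifying weights to reduce an RMSE-style loss between the model's score value and the score information) can be sketched as a minimal gradient-descent loop. This is an illustrative single-layer stand-in, not the convolutional architecture Higgins describes; all names, shapes, and hyperparameters here are assumptions.

```python
import numpy as np

def train_scoring_model(answer_vectors, score_info, epochs=2000, lr=0.5, seed=0):
    """Adjust weights on the basis of the difference between the predicted
    score value and the matched score information, minimizing a squared-error
    loss (whose square root is the RMSE). Illustrative sketch only."""
    rng = np.random.default_rng(seed)
    X = np.asarray(answer_vectors, dtype=float)  # encoded answer sequences
    y = np.asarray(score_info, dtype=float)      # matched score information
    w = rng.normal(scale=0.1, size=X.shape[1])
    b = 0.0
    for _ in range(epochs):
        pred = X @ w + b        # output layer: predicted score value
        err = pred - y          # difference driving the weight update
        w -= lr * X.T @ err / len(y)
        b -= lr * err.mean()
    rmse = float(np.sqrt(np.mean((X @ w + b - y) ** 2)))
    return w, b, rmse
```

On a toy linearly-scorable dataset this loop drives the RMSE toward zero, which is the "more efficient convergence" behavior the training phase is meant to achieve.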
The motivation to combine the teaching of Brown with the teaching of Higgins is similar to the motivation recited for claim 1, such that the neural network of Higgins, incorporated into the learning system of Brown as part of the teaching combination above to obtain a trained neural network, may then be further applied in another system such as the system disclosed by Higgins.

Regarding claim 7, Higgins teaches the limitation "A non-transitory computer-readable recording medium in which a computer program executed by a computer is recorded" (paragraph 91: "A non-transitory processor-readable storage medium, such as read only memory (ROM) 756 and random access memory (RAM) 758, may be in communication with the processing system 754 and may contain one or more programming instructions for performing the method". Higgins discloses a non-transitory processor-readable storage medium in communication with the processing system that contains one or more programming instructions for performing the method.) The applicant is further directed to the rejection of claim 1; because claim 7 recites similar limitations and processing steps to claim 1, the claim is rejected under the same rationale.

Regarding claim 8, Higgins teaches the limitation "A non-transitory computer-readable recording medium in which a computer program executed by a computer is recorded" (paragraph 91: "A non-transitory processor-readable storage medium, such as read only memory (ROM) 756 and random access memory (RAM) 758, may be in communication with the processing system 754 and may contain one or more programming instructions for performing the method". Higgins discloses a non-transitory processor-readable storage medium in communication with the processing system that contains one or more programming instructions for performing the method.)
The applicant is further directed to the rejection of claim 6; because claim 8 recites similar limitations and processing steps to claim 6, the claim is rejected under the same rationale.

Conclusion

THIS ACTION IS MADE FINAL. Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a). A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action. Any inquiry concerning this communication or earlier communications from the examiner should be directed to DUY TU DIEP, whose telephone number is (703) 756-1738. The examiner can normally be reached M-F 8-4:30. Examiner interviews are available via telephone, in person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Alexey Shmatov, can be reached at (571) 270-3428. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300. Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users.
To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000. /DUY T DIEP/Examiner, Art Unit 2123 /ALEXEY SHMATOV/Supervisory Patent Examiner, Art Unit 2123

Prosecution Timeline

Jul 07, 2022
Application Filed
May 30, 2025
Non-Final Rejection — §103
Nov 04, 2025
Response Filed
Jan 29, 2026
Final Rejection — §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12579428: METHOD FOR INJECTING HUMAN KNOWLEDGE INTO AI MODELS (2y 5m to grant; granted Mar 17, 2026)
Patent 12488223: FEDERATED LEARNING FOR TRAINING MACHINE LEARNING MODELS (2y 5m to grant; granted Dec 02, 2025)
Patent 12412129: DISTRIBUTED SUPPORT VECTOR MACHINE PRIVACY-PRESERVING METHOD, SYSTEM, STORAGE MEDIUM AND APPLICATION (2y 5m to grant; granted Sep 09, 2025)
Study what changed to get past this examiner. Based on 3 most recent grants.


Prosecution Projections

Expected OA Rounds: 3-4
Grant Probability: 25%
With Interview: 30% (+5.5%)
Median Time to Grant: 4y 2m
PTA Risk: Moderate
Based on 20 resolved cases by this examiner. Grant probability derived from career allow rate.
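The headline projections reduce to simple arithmetic on the examiner's career statistics shown above, assuming the interview lift is simply additive (a sketch of the apparent derivation, not necessarily the tool's actual formula):

```python
granted, resolved = 5, 20                   # career data: 5 granted / 20 resolved
career_allow_rate = granted / resolved      # 0.25 -> the 25% grant probability
interview_lift = 0.055                      # +5.5% lift with an interview
with_interview = career_allow_rate + interview_lift  # 0.305 -> ~30%
```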
