Prosecution Insights
Last updated: April 19, 2026
Application No. 17/701,650

KNOWLEDGE TRANSFER IN COLLABORATIVE LEARNING

Status: Final Rejection — §103
Filed: Mar 22, 2022
Examiner: PATEL, LOKESHA G
Art Unit: 2125
Tech Center: 2100 — Computer Architecture & Software
Assignee: Microsoft Technology Licensing, LLC
OA Round: 2 (Final)
Grant Probability: 76% (Favorable)
Expected OA Rounds: 3-4
Median Time to Grant: 4y 5m
Grant Probability With Interview: 99%

Examiner Intelligence

Career Allow Rate: 76% — above average (56 granted / 74 resolved; +20.7% vs TC avg)
Interview Lift: +38.0% — strong (among resolved cases with interview)
Typical Timeline: 4y 5m average prosecution; 20 applications currently pending
Career History: 94 total applications across all art units

Statute-Specific Performance

§101: 29.5% (-10.5% vs TC avg)
§103: 35.3% (-4.7% vs TC avg)
§102: 8.0% (-32.0% vs TC avg)
§112: 18.1% (-21.9% vs TC avg)
Tech Center averages are estimates • Based on career data from 74 resolved cases

Office Action

Final Rejection — §103 (mailed Feb 20, 2026)
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA. The present application was filed on 03/22/2022. This action is in response to amendments and remarks filed on 12/11/2025. In the current amendments, claims 1, 4, 8, 11, 15 and 18 have been amended, claim 21 has been added, and no claims were canceled. Claims 1-21 are pending and have been examined. Claims 1, 8 and 15 are independent claims.

Information Disclosure Statement

The information disclosure statement (IDS) submitted on 11/13/2025 is in compliance with the provisions of 37 CFR 1.97. Accordingly, the information disclosure statement is being considered by the examiner.

Examiner’s Note: Independent claim 15 recites “One or more computer storage devices having computer-executable instructions stored thereon, which, upon execution by a computer, cause the computer to perform operations”, and each of claims 16-20 depends directly or indirectly from claim 15 and recites “The one or more computer storage devices of claim 15” (or of intervening claim 18 in the case of claim 19). According to the applicant’s original specification, the utilization of computer storage devices is limited to a non-transitory computer-readable storage device [i.e., Paragraph 73: “Computer storage media are tangible and mutually exclusive to communication media. Computer storage media are implemented in hardware and exclude carrier waves and propagated signals. Computer storage media for purposes of this disclosure are not signals per se”].

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.

Claims 1-21 are rejected under 35 U.S.C. 103 as being unpatentable over Aslan (US20170132528A1) in view of Tsouvalas (“Federated Self-Training for Semi-Supervised Audio Recognition”).

Claim 1.
Aslan teaches a system comprising: a processor; and a computer-readable medium storing instructions that are operative upon execution by the processor to (Para [0095] “One or more computer-readable storage media (e.g., RAM, ROM, EEPROM, flash memory, etc.) storing computer-executable instructions that, when executed by a processor (e.g., central processing unit (CPU), a field programmable gate array (FPGA), a complex programmable logic device (CPLD), an application specific integrated circuit (ASIC), a system-on-chip (SoC), etc.)” teaches a processor and a storage medium):

receive, at a primary node, from a plurality of remote nodes, a plurality of trained proxy machine learning (ML) models, wherein each proxy ML model is received from a different one of the plurality of remote nodes, and wherein each of the plurality of remote nodes is remote across a network from the primary node (Para [0021] “the first machine learning model 100 is denoted as a ‘teacher machine learning model’ or ‘teacher model,’ and the second machine learning model 102 is denoted as a ‘student machine learning model’ or ‘student model.’”; Para [0031] “knowledge can be bi-directionally transferred between the first model 100 and the second model 102 during joint training, as depicted visually in FIG. 1 by path 110 between the first model 100 and the second model 102”; and Para [0051] “FIG. 3 is a schematic diagram of another example technique for joint training of multiple machine learning models. In the example of FIG. 3, a teacher model 300 can be trained in parallel with M student models 302, shown as student models 302(1), 302(2), . . . , 302(M)”. Figures 1, 2 and 3 teach receiving an update from a student model 302 as a proxy ML model; each student model could represent a version of the ML model trained on a specific data subset at a remote node (corresponding to the remote node of a proxy model) for a teacher model (corresponding to the primary node), and each student model receives updates from the teacher model, which corresponds to a remote connection between the student models);

and train a primary ML model using the plurality of proxy ML models, wherein training the primary ML model comprises (Para [0047] “each of the teacher models 200 can influence the training of the student model 202, and vice versa, during joint training. Each of the N teacher models 200 is also shown as receiving corresponding training data 204(1)-(N)” teaches training a teacher model using a student model (proxy model)),

for each respective training case of training cases of a primary training dataset (Para [0032] “the two models 100 and 102 collaborate with each other during joint training (shown via the path 110 in FIG. 1), the models 100 and 102 can process any suitable unlabeled data” teaches that each training case of the primary training dataset is unlabeled data),

and updating parameters of the primary ML model based at least on the respective label (Para [0064] “At 704, the objective function can be optimized in order to train the multiple machine learning models in parallel. For example, model parameters (e.g., weight parameters) can be determined that optimize (e.g., minimize) the objective function generated at 702. Once trained, the models can be used to generate expected output from unknown input, such as a class label for an unknown image” teaches determining the parameters of the teacher model based on the label).
Aslan does not explicitly teach determining a respective label for the respective training case by weighting results from each respective proxy ML model of the plurality of proxy ML models based on at least a respective confidence of the respective proxy ML model regarding the respective training case.

However, in the same field, the analogous art Tsouvalas teaches determining a respective label for the respective training case by weighting results from each respective proxy ML model of the plurality of proxy ML models based on at least a respective confidence of the respective proxy ML model regarding the respective training case (Section 4.3, Federated Self-training, Page 9: “a student model can be constructed on each client device by collectively training on labeled and pseudo-labeled data, the weights of whom will be returned to the server for aggregation… In such settings, teacher models may produce inaccurate pseudo-label predictions, and student classifiers potentially amplify the mistakes further during training through using faulty pseudo-labels. To avoid such scenarios, the confidence of the predictions is taken into consideration when generating pseudo-labels to discard low-confidence predictions” teaches determining a respective label for the unlabeled data by weighting results from the student model/proxy ML model based on the confidence of the prediction on the unlabeled data).

Aslan and Tsouvalas are analogous art because both describe distributed training of multiple machine learning models that share information to improve learning performance. It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate the limitation(s) above, as taught by Tsouvalas, into the disclosed invention of Aslan. One of ordinary skill in the art would have been motivated to make this modification because a confidence-based self-training approach, such as FedSTAR, effectively exploits large-scale unlabeled distributed data in a federated setting by generating reliable pseudo-labels with an adaptive confidence threshold, improving model generalization and convergence, similar to Aslan, as suggested by Tsouvalas (Tsouvalas, Introduction, Page 4).
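[Editor's note] For context on the disputed limitation, the sketch below shows one way confidence-weighted pseudo-labeling of the kind the amended claim recites (and Tsouvalas describes) can work. It is a minimal illustration only: the function name, the max-probability confidence measure, and the discard threshold are assumptions for exposition, not the applicant's claimed method or FedSTAR's actual implementation.

import numpy as np

def pseudo_label(proxy_probs: list[np.ndarray], threshold: float = 0.5):
    """Combine per-proxy class probabilities into one consensus label.

    Each proxy model "votes" with its probability vector, weighted by its
    confidence on this training case (here, its max class probability).
    A low-confidence consensus is discarded, echoing Tsouvalas's
    thresholding. Hypothetical sketch; the weighting scheme is assumed.
    """
    weights = np.array([p.max() for p in proxy_probs])  # per-proxy confidence
    combined = sum(w * p for w, p in zip(weights, proxy_probs)) / weights.sum()
    return int(combined.argmax()) if combined.max() >= threshold else None

# Three proxy models scoring one unlabeled training case over 3 classes:
votes = [np.array([0.7, 0.2, 0.1]),  # confident in class 0
         np.array([0.4, 0.4, 0.2]),  # uncertain, so it carries less weight
         np.array([0.8, 0.1, 0.1])]  # confident in class 0
print(pseudo_label(votes))  # -> 0, the weighted-consensus label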
Claim 2. As discussed above, Aslan in view of Tsouvalas teaches the system of claim 1, wherein the instructions are further operative to: Aslan further teaches perform an ML task with the trained primary ML model (Para [0025] “For example, the first model 100 can be trained to infer a set of probabilities for a multi-label classification task based on unknown image data received as input, and the second model 100 can be trained to classify the unknown image data as one of multiple possible class labels, but does not infer a set of probabilities as output” teaches training the first ML model (primary ML model)).

Claim 3. As discussed above, Aslan in view of Tsouvalas teaches the system of claim 1. Aslan further teaches wherein at least two of the proxy ML models have different architectures from each other (Para [0052] “The M student models 302 can be of the same type and size, or can differ in type (i.e., architecture) and/or size” teaches student models (proxy ML models) differing in architecture) and at least one of the proxy ML models has a different architecture than the primary ML model (Para [0045] “the first model 100 and the second model 102 can differ in their architectures—the first model 100 can comprise a deep neural net (DNN) and the second model 102 can comprise a boosted decision tree—with one having a computational advantage over the other in a given scenario” teaches that the first model (primary model) and second model (proxy ML model) can differ in their architectures).

Claim 4. As discussed above, Aslan in view of Tsouvalas teaches the system of claim 1, wherein the instructions are further operative to: Aslan further teaches train each of the proxy ML models with the trained primary ML model (Para [0055] “each of the teacher models 500 can influence the training of the student model 502, and vice versa, during joint training” and Figures 3 and 5 teach training the student model (proxy ML model) with the trained teacher model); deploy the plurality of trained proxy ML models to the plurality of remote nodes for further training, wherein each proxy ML model is deployed to a different one of the plurality of remote nodes (Para [0052] “FIG. 3 also shows that training data 304 can be used to train one or more of the machine learning models of FIG. 3, such as the teacher model 300. It is to be appreciated that one or more of the student models 302 can also be trained with at least a portion of the training data 304. The M student models 302 can be of the same type and size, or can differ in type (i.e., architecture) and/or size” teaches training the student models (proxy ML models) wherein each student model is trained differently); receive, at the primary node, from the plurality of remote nodes, the further-trained plurality of proxy ML models (Para [0031] “knowledge can be bi-directionally transferred between the first model 100 and the second model 102 during joint training, as depicted visually in FIG. 1 by path 110 between the first model 100 and the second model 102”; Figures 1, 2 and 3 teach knowledge transfer between the first model (primary model) and second models (proxy ML models)); and further train the primary ML model using the plurality of proxy ML models (Para [0055] “each of the teacher models 500 can influence the training of the student model 502, and vice versa”; Figures 1, 2 and 3 teach training the teacher model (primary ML model) using the student models (proxy ML models)).
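[Editor's note] The deploy/receive/retrain cycle recited in claim 4 can likewise be pictured as a simple round-trip. The sketch below is a toy version under stated assumptions: model parameters are plain vectors, "local training" is a nudge toward each node's data mean, and the aggregation rule (pulling the primary toward the proxy average) is our choice for exposition; neither Aslan nor the application prescribes it.

import numpy as np

rng = np.random.default_rng(0)

def train_on_node(local_data: np.ndarray, params: np.ndarray) -> np.ndarray:
    # Stand-in for local training at a remote node: nudge the proxy's
    # parameters toward that node's data mean (purely for illustration).
    return params + 0.1 * (local_data.mean(axis=0) - params)

def federated_round(primary, proxies, node_data):
    # 1. Deploy each proxy model to a different remote node and train it
    #    further on that node's local data.
    returned = [train_on_node(d, p) for d, p in zip(node_data, proxies)]
    # 2. Receive the further-trained proxies back at the primary node.
    # 3. Further train the primary model using them (here: move the
    #    primary toward the proxy average, a deliberately naive rule).
    return primary + 0.5 * (np.mean(returned, axis=0) - primary)

primary = np.zeros(4)                                  # primary ML model
proxies = [rng.normal(size=4) for _ in range(3)]       # one proxy per node
node_data = [rng.normal(loc=i, size=(10, 4)) for i in range(3)]
print(federated_round(primary, proxies, node_data))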
Claim 5. As discussed above, Aslan in view of Tsouvalas teaches the system of claim 4, wherein the instructions are further operative to: Aslan further teaches for each proxy ML model, select a remote node for further training, based on at least a training history of the proxy ML model, wherein deploying the plurality of trained proxy ML models for further training comprises deploying the plurality of trained proxy ML models to the selected remote nodes (Para [0008] “multiple machine learning models can be trained in a collaborative fashion where visibility across models is enabled, which can lead to one machine learning model selecting a learning function that is best suited for another machine learning model” and Para [0052] “if two or more of the student models 302 are capable of using a first learning function available to the teacher model 300, and only the student model 302(M) is capable of using a second learning function, but not the first learning function, the teacher model 300 can choose to train itself with the first learning function to benefit a maximum number of the student models 302” teach selecting a student model (the remote node of a proxy ML model) based on the first and second learning functions, wherein training is selected based on the previously selected function; for example, the first and second student models use the first learning function, while student model M is selected to use only the second learning function).

Claim 6. As discussed above, Aslan in view of Tsouvalas teaches the system of claim 1. Aslan further teaches wherein weighting results from each of the proxy ML models comprises further weighting the results from each of the proxy ML models based on at least a score assigned to each of the proxy ML models (Para [0051]-[0052] “FIG. 3, a teacher model 300 can be trained in parallel with M student models 302, shown as student models 302(1), 302(2), . . . , 302(M)… individual pairings of student models 302, such as the student model 302(1) and the student model 302(2) can pass information between each other to learn from each other in parallel” and Para [0060] “determining parameter values (e.g., values of weight parameters) for each model in the set of models provided at 602 that optimizes (e.g., minimizes) an objective function for joint training of the set of machine learning models” teach determining the weight of a student model based on the optimization).

Claim 7. As discussed above, Aslan in view of Tsouvalas teaches the system of claim 1. Aslan further teaches wherein the primary training dataset comprises unlabeled training cases (Para [0032] “the two models 100 and 102 collaborate with each other during joint training (shown via the path 110 in FIG. 1), the models 100 and 102 can process any suitable unlabeled data” teaches that the training dataset is unlabeled data).

Independent Claim 8. With respect to independent claim 8, claim 8 is substantially similar to claim 1 and therefore is rejected on the same ground as claim 1, discussed above. In particular, claim 8 is a computerized method that recites steps that correspond to the operations of claim 1.

Claim 9. Claim 9 recites analogous limitations to claim 2. Therefore, claim 9 is rejected based on the same rationale as claim 2, discussed above.

Claim 10. Claim 10 recites analogous limitations to claim 3. Therefore, claim 10 is rejected based on the same rationale as claim 3, discussed above.

Claim 11. As discussed above, Aslan in view of Tsouvalas teaches the method of claim 8.
Aslan further teaches training each of the proxy ML models with the trained primary ML model (Para [0055] “each of the teacher models 500 can influence the training of the student model 502, and vice versa, during joint training” and Figures 3 and 5 teach training the student model (proxy ML model) with the trained teacher model); deploying the plurality of trained proxy ML models to the plurality of remote nodes for further training, wherein each proxy ML model is deployed to a different one of the plurality of remote nodes (Para [0052] “FIG. 3 also shows that training data 304 can be used to train one or more of the machine learning models of FIG. 3, such as the teacher model 300. It is to be appreciated that one or more of the student models 302 can also be trained with at least a portion of the training data 304. The M student models 302 can be of the same type and size, or can differ in type (i.e., architecture) and/or size” teaches training the student models (proxy ML models) wherein each student model is trained differently); receiving, at the primary node, from the plurality of remote nodes, the further-trained plurality of proxy ML models (Para [0031] “knowledge can be bi-directionally transferred between the first model 100 and the second model 102 during joint training, as depicted visually in FIG. 1 by path 110 between the first model 100 and the second model 102”; Figures 1, 2 and 3 teach knowledge transfer between the first model (primary model) and second models (proxy ML models)); and further training the primary ML model using the plurality of proxy ML models (Para [0055] “each of the teacher models 500 can influence the training of the student model 502, and vice versa”; Figures 1, 2 and 3 teach training the teacher model (primary ML model) using the student models (proxy ML models)), wherein training the primary ML model comprises transfer learning, and wherein further training the primary ML model comprises transfer learning (Para [0006] “Such passing of information (or ‘transfer of knowledge’) between the machine learning models allows for one machine learning model to influence the other while the multiple machine learning models are trained in parallel” teaches training the teacher model (primary ML model) using the transferred knowledge).

Claim 12. Claim 12 recites analogous limitations to claim 5. Therefore, claim 12 is rejected based on the same rationale as claim 5, discussed above.

Claim 13. Claim 13 recites analogous limitations to claim 6. Therefore, claim 13 is rejected based on the same rationale as claim 6, discussed above.

Claim 14. Claim 14 recites analogous limitations to claim 7. Therefore, claim 14 is rejected based on the same rationale as claim 7, discussed above.

Independent Claim 15. With respect to independent claim 15, claim 15 is substantially similar to claim 1 and therefore is rejected on the same ground as claim 1, discussed above.
In particular, claim 15 is directed to one or more computer storage devices having computer-executable instructions stored thereon, which, upon execution by a computer, cause the computer to perform operations that correspond to the operations of claim 1. Aslan further teaches “One or more computer storage devices having computer-executable instructions stored thereon, which, upon execution by a computer, cause the computer to perform operations comprising:” (Para [0068] “The memory 806, removable storage 816, and non-removable storage 818 are all examples of computer storage media. Computer storage media includes, but is not limited to …”).

Claim 16. Claim 16 recites analogous limitations to claim 2. Therefore, claim 16 is rejected based on the same rationale as claim 2, discussed above.

Claim 17. Claim 17 recites analogous limitations to claim 3. Therefore, claim 17 is rejected based on the same rationale as claim 3, discussed above.

Claim 18. As discussed above, Aslan in view of Tsouvalas teaches the one or more computer storage devices of claim 15, wherein the operations further comprise: Aslan further teaches training each of the proxy ML models with the trained primary ML model (Para [0055] “each of the teacher models 500 can influence the training of the student model 502, and vice versa, during joint training” and Figures 3 and 5 teach training the student model (proxy ML model) with the trained teacher model); deploying the plurality of trained proxy ML models to the plurality of remote nodes for further training, wherein each proxy ML model is deployed to a different one of the plurality of remote nodes (Para [0052] “FIG. 3 also shows that training data 304 can be used to train one or more of the machine learning models of FIG. 3, such as the teacher model 300. It is to be appreciated that one or more of the student models 302 can also be trained with at least a portion of the training data 304. The M student models 302 can be of the same type and size, or can differ in type (i.e., architecture) and/or size” teaches training the student models (proxy ML models) wherein each student model is trained differently); receiving, at the primary node, from the plurality of remote nodes, the further-trained plurality of proxy ML models (Para [0031] “knowledge can be bi-directionally transferred between the first model 100 and the second model 102 during joint training, as depicted visually in FIG. 1 by path 110 between the first model 100 and the second model 102”; Figures 1, 2 and 3 teach knowledge transfer between the first model (primary model) and second models (proxy ML models)); and further training the primary ML model using the plurality of proxy ML models (Para [0055] “each of the teacher models 500 can influence the training of the student model 502, and vice versa”; Figures 1, 2 and 3 teach training the teacher model (primary ML model) using the student models (proxy ML models)), wherein training the primary ML model comprises transfer learning, and wherein further training the primary ML model comprises transfer learning (Para [0006] “Such passing of information (or ‘transfer of knowledge’) between the machine learning models allows for one machine learning model to influence the other while the multiple machine learning models are trained in parallel” teaches training the teacher model (primary ML model) using the transferred knowledge).

Claim 19. Claim 19 recites analogous limitations to claim 5. Therefore, claim 19 is rejected based on the same rationale as claim 5, discussed above.

Claim 20. Claim 20 recites analogous limitations to claim 6. Therefore, claim 20 is rejected based on the same rationale as claim 6, discussed above.

Claim 21. As discussed above, Aslan in view of Tsouvalas teaches the method of claim 8, further comprising: Aslan further teaches receiving logits from each respective proxy ML model regarding the respective training case (Para [0032] “the two models 100 and 102 collaborate with each other during joint training (shown via the path 110 in FIG. 1), the models 100 and 102 can process any suitable unlabeled data”; Para [0034] “the outputs (ψ(te) and ψ(st)) can comprise logits (zte and zst) generated by the multiple models 100 and 102. In some implementations, the outputs (ψ(te) and ψ(st)) can comprise unnormalized probabilities. In fact, the outputs (ψ(te) and ψ(st)) can comprise any value from an intermediate stage in the models 100 and 102” teaches receiving logits zst from each of the student models (proxy models) regarding the unlabeled dataset/training case); and weighting the results from each respective proxy ML model based at least on the logits (Para [0034] “the outputs (ψ(te) and ψ(st)) can comprise logits (zte and zst) generated by the multiple models 100 and 102” teaches weighting the output ψ(st) from the proxy model (student model) based on the logits), wherein the respective label represents a weighted consensus of the plurality of proxy ML models with respect to the respective training case (Para [0036] “In the objective function (2), Φ(te) and Φ(st) are matrices used for the classification terms of the objective function (2) with row-wise stacked outputs of the first (teacher) model 100 and the second (student) model 102, respectively. Again, the outputs in the matrices Φ(te) and Φ(st) can comprise probability outputs, such as probabilities computed using the softmax function, logits (zte and zst), or any other suitable outputs from the models 100 and 102”; Para [0032] “the two models 100 and 102 collaborate with each other during joint training (shown via the path 110 in FIG. 1), the models 100 and 102 can process any suitable unlabeled data” teaches that the respective classification/label represents weighted outputs ψ(st) of the student models with respect to the unlabeled data/training case).
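[Editor's note] New claim 21 shifts the weighting input to logits. As a rough picture, the sketch below turns each proxy's logit vector into a softmax vote and weights the votes by a crude confidence (the top-two logit margin). Both choices are assumptions made for illustration; the claim and the cited Aslan passages do not prescribe any particular weighting function.

import numpy as np

def softmax(z: np.ndarray) -> np.ndarray:
    e = np.exp(z - z.max())  # numerically stabilized softmax
    return e / e.sum()

def consensus_label(proxy_logits: list[np.ndarray]) -> int:
    """Weighted consensus over the proxies' logit vectors (illustrative).

    Each proxy votes with the softmax of its logits; votes are weighted
    by the proxy's top-two logit margin, a crude stand-in for confidence.
    """
    probs = [softmax(z) for z in proxy_logits]
    margins = np.array([np.sort(z)[-1] - np.sort(z)[-2] for z in proxy_logits])
    weights = margins / margins.sum()
    combined = sum(w * p for w, p in zip(weights, probs))
    return int(np.argmax(combined))

logits = [np.array([2.0, 0.1, -1.0]),  # strongly favors class 0
          np.array([0.3, 0.2, 0.1]),   # nearly flat, so small weight
          np.array([1.5, -0.5, 0.0])]  # favors class 0
print(consensus_label(logits))         # -> 0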
Response to Arguments

In response to the amendments and remarks filed on 12/11/2025: the rejections of claims 1-20 under 35 U.S.C. 101, set forth in the previous Office Action, have been withdrawn due to Applicant’s claim amendments and remarks.

Applicant’s arguments filed on 12/11/2025 with respect to the 35 U.S.C. 103 rejections of claims 1-20 have been fully considered but they are moot. With respect to the 35 U.S.C. 103 rejection of claims 1-20, applicant asserts, “Thus, as discussed during the interview, Ben-Itzhak does not use the confidence levels of the different models to determine a weighted consensus label during training. Accordingly, Ben-Itzhak does not teach or suggest at least ‘determining a respective label for the respective training case by weighting results from each respective proxy ML model of the plurality of proxy ML models based on at least a respective confidence of the respective proxy ML model regarding the respective training case,’ as recited by amended independent claim 1 (emphasis added). Accordingly, for at least this reason, Applicant respectfully requests that the Office withdraw this rejection and allow independent claim 1. Although of different scope, independent claims 8 and 15 are allowable for at least similar reasons as discussed above with respect to claim 1. Claims 2-7 depend from independent claim 1, claims 9-14 and 21 depend from independent claim 8, and claims 16-20 depend from independent claim 15. These dependent claims are allowable as depending from their respective allowable base claims. These dependent claims are also allowable for their own recited features which, in combination with those recited in their respective base claims, are not taught or suggested by the cited references” (Remarks, Pg. 13).

Examiner Response: This argument has been considered but is moot because the new ground of rejection does not rely on any reference applied in the prior rejection of record for any teaching or matter specifically challenged in this argument. Newly cited prior art (Tsouvalas, “Federated Self-Training for Semi-Supervised Audio Recognition”) has been applied to teach the limitations referred to in this argument. Therefore, the claims are now rejected under 35 U.S.C. 103 using the combination of Aslan and the newly cited Tsouvalas reference, as detailed above.

Conclusion

Applicant’s amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).

A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to Lokesha Patel, whose telephone number is (571) 272-6267. The examiner can normally be reached 8 AM - 4 PM. Examiner interviews are available via telephone, in person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Kamran Afshar, can be reached at (571) 272-7796. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/LOKESHA PATEL/
Examiner, Art Unit 2125

/KAMRAN AFSHAR/
Supervisory Patent Examiner, Art Unit 2125

Prosecution Timeline

Mar 22, 2022
Application Filed
Sep 05, 2025
Non-Final Rejection — §103
Nov 04, 2025
Applicant Interview (Telephonic)
Nov 10, 2025
Examiner Interview Summary
Dec 11, 2025
Response Filed
Feb 20, 2026
Final Rejection — §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12585938
Consensus Driven Learning
2y 5m to grant • Granted Mar 24, 2026
Patent 12572811
CONTROLLABLE AND INTERPRETABLE CONTENT CONVERSION
2y 5m to grant • Granted Mar 10, 2026
Patent 12561556
DEVICES, SYSTEMS, METHODS, AND MEDIA FOR DOMAIN ADAPTATION USING HYBRID LEARNING
2y 5m to grant • Granted Feb 24, 2026
Patent 12536454
TODDLER-INSPIRED BAYESIAN LEARNING METHOD AND COMPUTING APPARATUS FOR PERFORMING THE SAME
2y 5m to grant • Granted Jan 27, 2026
Patent 12530615
INTELLIGENT OVERSIGHT OF MULTI-PARTY ENGAGEMENTS
2y 5m to grant • Granted Jan 20, 2026
Study what changed to get these applications past this examiner. Based on the 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 3-4
Grant Probability: 76%
With Interview: 99% (+38.0%)
Median Time to Grant: 4y 5m
PTA Risk: Moderate
Based on 74 resolved cases by this examiner. Grant probability derived from career allow rate.
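[Editor's note] As a sanity check, the headline figure appears to reduce to a simple ratio over the examiner's resolved cases. The snippet below assumes grant probability is taken directly from the career allow rate, which is our reading of the note above rather than a documented formula.

# Back-of-the-envelope check of the headline number shown above.
# Assumption: grant probability == career allow rate (per the note
# "Grant probability derived from career allow rate").
granted, resolved = 56, 74
print(f"Career allow rate: {granted / resolved:.1%}")  # 75.7%, displayed as 76%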
