Prosecution Insights
Last updated: April 19, 2026
Application No. 18/307,778

SYSTEMS AND METHODS FOR SECURE MULTI-MODEL TRAINING WITHIN A ZERO-TRUST ENVIRONMENT

Status: Non-Final OA (§103)
Filed: Apr 26, 2023
Examiner: XIE, EDGAR WANGSHU
Art Unit: 2433
Tech Center: 2400 — Computer Networks
Assignee: Beekeeperai Inc.
OA Round: 3 (Non-Final)

Grant Probability: 82% (Favorable)
Expected OA Rounds: 3-4
Median Time to Grant: 2y 6m
Grant Probability With Interview: 99%

Examiner Intelligence

Career Allow Rate: 82% (14 granted / 17 resolved; +24.4% vs TC average, above average)
Interview Lift: +37.5% among resolved cases with an interview
Avg Prosecution: 2y 6m; 15 applications currently pending
Career History: 32 total applications across all art units
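The headline figures in this panel are simple ratios over the examiner's resolved cases. A minimal sketch of the arithmetic (the per-group allow rates used for the lift are hypothetical placeholders; only the resulting +37.5% lift is reported here):

```python
# Career allow rate: granted / resolved, from the counts shown above.
granted, resolved = 14, 17
allow_rate = granted / resolved
print(f"Career allow rate: {allow_rate:.0%}")  # -> 82%

# Interview lift, read as the relative increase in allow rate for cases
# with an examiner interview. The group rates below are hypothetical;
# only the +37.5% result is reported on this page.
rate_without, rate_with = 0.64, 0.88
lift = (rate_with - rate_without) / rate_without
print(f"Interview lift: {lift:+.1%}")  # -> +37.5%
```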

Statute-Specific Performance

§101: 15.3% (-24.7% vs TC avg)
§102: 8.5% (-31.5% vs TC avg)
§103: 58.0% (+18.0% vs TC avg)
§112: 11.9% (-28.1% vs TC avg)
Tech Center averages are estimates. Based on career data from 17 resolved cases.
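Since each statute row reports both the examiner's rate and a delta against the Tech Center average, the implied baseline can be recovered by subtraction (assuming the deltas are in percentage points). A quick consistency check of the figures above:

```python
# (examiner rate %, delta vs TC average in percentage points)
stats = {
    "§101": (15.3, -24.7),
    "§102": (8.5, -31.5),
    "§103": (58.0, +18.0),
    "§112": (11.9, -28.1),
}

for statute, (rate, delta) in stats.items():
    tc_avg = rate - delta  # implied Tech Center baseline
    print(f"{statute}: examiner {rate:.1f}% vs implied TC avg {tc_avg:.1f}%")
# Every row implies the same ~40.0% TC baseline, so the reported
# deltas are internally consistent.
```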

Office Action

§103
Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Detailed Action

Request for Continued Examination (RCE) filed on 12/03/2025 for patent application 18/307,778 has been acknowledged. Claims 11-16 are currently pending and have been considered below. Claim 11 is the independent claim. Claim 11 has been amended. No new claims have been added.

Continued Examination under 37 CFR 1.114

A request for continued examination under 37 CFR 1.114, including the fee set forth in 37 CFR 1.17(e), was filed in this application after final rejection. Since this application is eligible for continued examination under 37 CFR 1.114, and the fee set forth in 37 CFR 1.17(e) has been timely paid, the finality of the previous Office action has been withdrawn pursuant to 37 CFR 1.114. Applicant's submission filed on 12/03/2025 has been entered.

Priority

Applicant's claim for the benefit of a prior-filed application under 35 U.S.C. 119(e) or under 35 U.S.C. 120, 121, 365(c), or 386(c) is acknowledged. Applicant has not complied with one or more conditions for receiving the benefit of an earlier filing date under 35 U.S.C. 112(a) or pre-AIA 35 U.S.C. 112, first paragraph, as follows: The later-filed application must be an application for a patent for an invention which is also disclosed in the prior application (the parent or original nonprovisional application or provisional application). The disclosure of the invention in the parent application and in the later-filed application must be sufficient to comply with the requirements of 35 U.S.C. 112(a) or the first paragraph of pre-AIA 35 U.S.C. 112, except for the best mode requirement. See Transco Products, Inc. v. Performance Contracting, Inc., 38 F.3d 551, 32 USPQ2d 1077 (Fed. Cir. 1994).

The disclosure of the prior-filed application, Application No. 63/268,056, fails to provide adequate support or enablement in the manner provided by 35 U.S.C. 112(a) or pre-AIA 35 U.S.C. 112, first paragraph, for one or more claims of this application. Accordingly, claims 11-20 are not entitled to the benefit of the prior application. Because applicant elected Group II, claims 11-16, the examiner is currently only examining claims 11-16. Claims 11-16 are not supported by the prior-filed application, Application No. 63/268,056, filed on 4/29/2022. Claims 11-16 are supported by figures 18-26 and paragraphs 00121-00140 of the specification of the present application, Application No. 18/307,778, filed on 4/26/2023.

Drawings

The drawings filed on 4/26/2023 are accepted by the examiner.

Response to Arguments

Applicant's arguments filed on 12/03/2025 with respect to claims 11-16 have been considered but are moot because the new ground of rejection does not rely on any reference applied in the prior rejection of record for any teaching or matter specifically challenged in the argument.

Claim Objections

Claims 13 and 14 are objected to as failing to provide proper antecedent basis for the claimed subject matter. Claims 13 and 14 recite "wherein the optimization includes". Appropriate correction is required.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action: A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 11-14 are rejected under 35 U.S.C.
103 as being unpatentable over Callcut et al. (US Patent Application Publication No. US 2020/0311300 A1, hereinafter, Callcut) in view of Givental et al. (US Patent Application Publication No. US 2021/0264025 A1, hereinafter, Givental) and further in view of Cheng et al. (US Patent Application Publication No. US 2022/0405623 A1, hereinafter, Cheng) and Mohammed et al. (US Patent Application Publication No. US 2024/0202351 A1, hereinafter, Mohammed).

Regarding Claim 11, Callcut discloses:

A computerized method of secure model generation in a sequestered computing node comprising (Callcut, ¶[0051], “FIG. 1 shows an AI ecosystem 100 that allows for secure, federated computing on private data sets 105a-n (‘n’ represents any natural number), including augmentation of data from data sets 105a-n, algorithm/model 110a-n deployment, algorithm/model 110a-n validation, algorithm/model 110a-n optimization (training/testing), and federated training of algorithms/models 110a-n on multiple data sets 105a-n.”):

receiving an algorithm in a secure computing enclave (Callcut, ¶[0054], “The AI system 205 may be in communication with one or more algorithm developers and is configured to receive one or more algorithms or models.”; ¶[0063], “At block 305, a third-party algorithm developer (a first entity) provides one or more algorithms or models to be optimized and/or validated in a new project.”);

Callcut does not explicitly teach the following limitations that Givental teaches:

performing automated multi-model training on the algorithm to generate a plurality of trained models, wherein the multi-model training includes training by applying regressors, decision trees, and neural networks to a single machine learning training problem (Givental, ¶[0027], “The level one training dataset may be used to train the plurality of ML models using a machine learning training process, as is generally known in the art, e.g., supervised or unsupervised machine learning process.”);

generating a leaderboard of the plurality of trained models (Givental, ¶[0040], “In one illustrative embodiment, the ML model selector may be used to generate an ensemble of ML models to be used to process an incoming log data structure. … the counts of the ML models may be used to select a top N number of ML models to include in the ensemble, where N is any integer suitable to the particular implementation desired, e.g., the top 3 ML models.”);

ranking the plurality of trained models in the leaderboard based upon a (Givental, ¶[0032], “Each of these performance factors are scored according to a predetermined range for scoring, and the scores are combined to generate a relative ranking for each of the trained ML models …”; ¶[0060], “The trained ML models 130 are then used to make predictions 140 of the classifications of the logs in the level two training dataset 120, e.g., classify each of the logs 122 in the level two training dataset 120 as to whether it should be closed (non-anomalous) or escalated (anomalous). These classifications may be compared, by ML model ranking engine 150, to the corresponding ground truth classifications 123 to determine a loss or error of the corresponding ML model with regard to particular performance factors, such as accuracy, confidence, and risk.”);

selecting a top model from the leaderboard (Givental, ¶[0067], “The counts may then be used by the ensemble generation engine 180 to select the top N number of ML models from the plurality of ML models 130 to be included in the ensemble, where a ML model is a “top” ML model based on having a higher count value than other ML models in the plurality of ML models 130. Thus, if N=3, then the ML models having the 3 highest count values are selected for inclusion in the ensemble 190.”); and

Callcut in view of Givental are analogous art because they are from the “same field of endeavor” and are from the same “problem solving area.” Namely, they pertain to the field of “machine learning models.” It would have been obvious for one of ordinary skill in the art, before the effective filing date of the claimed invention, to modify Callcut with Givental to “performing automated multi-model training on the algorithm to generate a plurality of trained models, wherein the multi-model training includes training by applying regressors, decision trees, and neural networks to a single machine learning training problem generating a leaderboard of the plurality of trained models; ranking the plurality of trained models in the leaderboard based upon a metric calculated by a performance metric, wherein the performance metric includes accuracy for a classification problem; selecting a top model from the leaderboard;” because, “It can be appreciated that some ML models operate better for different types of security risks based on the particular training algorithms used to train the ML models and/or the training data used to train the ML models. That is, some ML models, due to their training, may be better predictors and classifiers of particular patterns of input data indicative of particular security threats, anomalies, attacks, etc.” (Givental, ¶[0022]).

performing security processing on the top model to generate a secure model (Cheng, ¶[0081], “The prediction engine 340 is configured to execute user code defining a trained machine learning model. As part of executing the user code to generate model predictions, the processing shard 300 can execute the prediction engine 340 in a sandboxed process, to eliminate potential security issues when running the user code.”).
Callcut in view of Givental and further in view of Cheng are analogous art because they are from the “same field of endeavor” and are from the same “problem solving area.” Namely, they pertain to the field of “machine learning models.” It would have been obvious for one of ordinary skill in the art, before the effective filing date of the claimed invention, to modify Callcut in view of Givental with Cheng to “performing security processing on the top model to generate a secure model” because, “The disclosure is directed to a query-driven machine learning platform for generating feature attributions and other data for interpreting the relationship between inputs and outputs of a machine learning model.” (Cheng, Abstract)

Callcut in view of Givental and further in view of Cheng does not explicitly teach the following limitations that Mohammed teaches:

a hybrid metric calculated by combining an epsilon parameter for each model with a performance metric, wherein the performance metric includes accuracy, F1 score, precision and recall for a classification problem on structured data or DICE score for an image classification (Mohammed, ¶[0056], “The model may be tested using the testing set to provide an unbiased evaluation of a final model fit on the training data set. Differential privacy may be applied, iterated through epsilon (a metric of privacy loss …), and accuracy, F1 score (combining precision and recall of a classifier into a single metric by taking their harmonic mean), recall (fraction of samples from a class which are correctly predicted by the model; …), precision (class-specific performance metric applied when class distribution is imbalanced; …), and AUC score may be measured.”);

Callcut in view of Givental and further in view of Cheng and Mohammed are analogous art because they are from the “same field of endeavor” and are from the same “problem solving area.” Namely, they pertain to the field of “machine learning models.” It would have been obvious for one of ordinary skill in the art, before the effective filing date of the claimed invention, to modify Callcut in view of Givental and further in view of Cheng with Mohammed to incorporate “a hybrid metric calculated by combining an epsilon parameter for each model with a performance metric, wherein the performance metric includes accuracy, F1 score, precision and recall for a classification problem on structured data or DICE score for an image classification” because, “systems and methods according to embodiments of the present disclosure may provide privacy detection and mitigation in connection with data, models, synthetic data, and/or fed learning.” (Mohammed, ¶[0003])

Regarding Claim 12, Callcut in view of Givental and further in view of Cheng and Mohammed teaches:

The method of claim 11, wherein the algorithm is a plurality of algorithms received from a plurality of data stewards (Callcut, ¶[0054], “The AI system 205 may be in communication with one or more algorithm developers and is configured to receive one or more algorithms or models.”; ¶[0063], “At block 305, a third-party algorithm developer (a first entity) provides one or more algorithms or models to be optimized and/or validated in a new project.”).
Regarding Claim 13, Callcut in view of Givental and further in view of Cheng and Mohammed teaches:

The method of claim 11, wherein the optimization includes ranking the plurality of trained models (Givental, ¶[0032], “Each of these performance factors are scored according to a predetermined range for scoring, and the scores are combined to generate a relative ranking for each of the trained ML models …”) by data exfiltration risk (Mohammed, ¶[0054], “With differential privacy, if the effect of making an arbitrary single substitution in the database is small enough, the query result cannot be used to infer much about any single individual, and therefore, provides privacy. Differential privacy may provide a measurable way to balance privacy and data accuracy when publicly releasing aggregate data on private datasets.”).

Regarding Claim 14, Callcut in view of Givental and further in view of Cheng and Mohammed teaches:

The method of claim 11, wherein the optimization includes ranking the plurality of trained models by accuracy (Givental, ¶[0061], “Thus, for example, for each data instance (e.g., log entry) 122 in the level two training dataset 120, a loss function-based ranking of each prediction generated by each of the ML models, in the plurality of ML models 130, is formed based on the performance factors (accuracy, confidence, risk) of the loss determination for the corresponding ML model.”).

Claim 15 is rejected under 35 U.S.C. 103 as being unpatentable over Callcut et al. (US Patent Application Publication No. US 2020/0311300 A1, hereinafter, Callcut) in view of Givental et al. (US Patent Application Publication No. US 2021/0264025 A1, hereinafter, Givental) and further in view of Cheng et al. (US Patent Application Publication No. US 2022/0405623 A1, hereinafter, Cheng), Mohammed et al. (US Patent Application Publication No. US 2024/0202351 A1, hereinafter, Mohammed), and Chung et al. (US Patent Application Publication No. US 2018/0341851 A1, hereinafter, Chung).
Regarding Claim 15, Callcut in view of Givental and further in view of Cheng and Mohammed teaches:

The method of claim 11, wherein the security processing (Cheng, ¶[0081], “The prediction engine 340 is configured to execute user code defining a trained machine learning model. As part of executing the user code to generate model predictions, the processing shard 300 can execute the prediction engine 340 in a sandboxed process, to eliminate potential security issues when running the user code.”)

Callcut in view of Givental and further in view of Cheng and Mohammed does not explicitly teach the following limitation that Chung teaches:

includes at least one of weight truncation and additional weight addition (Chung, ¶[0034], “During a tuning phase, adjustments can be made in the area of approximate computing by dynamically adjusting the tuning parameters, when the opportunity arises e.g., during a training or production run.”; ¶[0035], “For example, adjustments can include: using dropout sparsification to send a quasi-random subset of weights, rolling updates that transmit only a pre-specified subset of weights in a round-robin fashion, variable bit truncations of the weights to be combined, and a combination of the foregoing.”).

Callcut in view of Givental and further in view of Cheng, Mohammed, and Chung are analogous art because they are from the “same field of endeavor” and are from the same “problem solving area.” Namely, they pertain to the field of “machine learning models.” It would have been obvious for one of ordinary skill in the art, before the effective filing date of the claimed invention, to modify Callcut in view of Givental and in view of Cheng and Mohammed with Chung to implement security processing to include “at least one of weight truncation and additional weight addition” because, “Optimizing the performance of a machine learning system includes: … dynamically updating the n-dimensional approximate computing configuration space by adjusting the at least one tuning parameter.” (Chung, Abstract)

Claim 16 is rejected under 35 U.S.C. 103 as being unpatentable over Callcut et al. (US Patent Application Publication No. US 2020/0311300 A1, hereinafter, Callcut) in view of Givental et al. (US Patent Application Publication No. US 2021/0264025 A1, hereinafter, Givental) and further in view of Cheng et al. (US Patent Application Publication No. US 2022/0405623 A1, hereinafter, Cheng), Mohammed et al. (US Patent Application Publication No. US 2024/0202351 A1, hereinafter, Mohammed), and Kim et al. (US Patent Application Publication No. US 2016/0315930 A1, hereinafter, Kim).

Regarding Claim 16, Callcut in view of Givental and further in view of Cheng and Mohammed teaches:

The method of claim 11, further comprising: generating a report on performance of the secure model (Callcut, ¶[0081], “At block 365, one or more reports are generated and delivered to the algorithm developer based on the results of block 360.
In some instances, the reports are generated in accordance with the training/testing report requirements and/or validation report requirements defined in block 310.”);

Callcut in view of Givental and further in view of Cheng and Mohammed does not explicitly teach the following limitation that Kim teaches:

providing the report to a separate secure report confirmation service; providing protected data to the secure report confirmation service; and validating the report for data exfiltration by comparison to the secure report confirmation service (Kim, ¶[0032], “The cloud data discovery system 100 is a part of a data loss prevention (DLP) system of the company and may be formed of at least one server. The cloud data discovery system 100 accesses user data of the cloud service through a cloud application program interface (API), checks the user data according to a preset DLP policy, and stores and reports a checking result. As necessary, the cloud data discovery system 100 controls leakage of information through warning, the deletion of data, and encryption.”).

Callcut in view of Givental and further in view of Cheng, Mohammed, and Kim are analogous art because they are from the “same field of endeavor” and are from the same “problem solving area.” Namely, they pertain to the field of “machine learning models.” It would have been obvious for one of ordinary skill in the art, before the effective filing date of the claimed invention, to modify Callcut in view of Givental and further in view of Cheng and Mohammed with Kim to implement security processing to include “providing the report to a separate secure report confirmation service; providing protected data to the secure report confirmation service; validating the report for data exfiltration by comparison to the secure report confirmation service” because, the “invention relates to a cloud data discovery method and system for private information protection and data loss prevention.” (Kim, Abstract)

Conclusion

Any inquiry concerning this communication or earlier communications from the examiner should be directed to EDGAR W XIE whose telephone number is (703)756-4777. The examiner can normally be reached Monday - Friday, 8:00am - 5:00pm.

Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, JEFFREY PWU, can be reached at (571)272-6798. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov.
Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/EDGAR W XIE/
Examiner, Art Unit 2433

/WASIKA NIPA/
Primary Examiner, Art Unit 2433
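For readers outside the ML space: the "hybrid metric" at issue in claim 11 combines a differential-privacy budget (epsilon) with a classification performance metric such as F1, which the quoted Mohammed passage defines as the harmonic mean of precision and recall. A toy sketch, where the subtract-a-penalty form of the combination is an illustrative assumption rather than anything the claim or the references specify:

```python
def f1(precision: float, recall: float) -> float:
    """Harmonic mean of precision and recall (per the Mohammed quote)."""
    return 2 * precision * recall / (precision + recall)

def hybrid(precision: float, recall: float, epsilon: float,
           penalty: float = 0.1) -> float:
    """Hypothetical hybrid score: utility (F1) minus a privacy-loss
    penalty scaled by epsilon. The penalty weight is arbitrary."""
    return f1(precision, recall) - penalty * epsilon

print(round(f1(0.8, 0.6), 4))           # -> 0.6857
print(round(hybrid(0.8, 0.6, 2.0), 4))  # -> 0.4857
```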

Prosecution Timeline

Apr 26, 2023: Application Filed
May 02, 2025: Non-Final Rejection — §103
Aug 04, 2025: Response after Non-Final Action
Aug 04, 2025: Response Filed
Sep 05, 2025: Final Rejection — §103
Dec 03, 2025: Request for Continued Examination
Dec 15, 2025: Response after Non-Final Action
Jan 08, 2026: Non-Final Rejection — §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12602475
AGGREGATING INPUT/OUTPUT OPERATION FEATURES EXTRACTED FROM STORAGE DEVICES TO FORM A MACHINE LEARNING VECTOR TO CHECK FOR MALWARE
Granted Apr 14, 2026 (2y 5m to grant)
Patent 12579267
Methods and Systems for Analyzing Environment-Sensitive Malware with Coverage-Guided Fuzzing
Granted Mar 17, 2026 (2y 5m to grant)
Patent 12579281
Dynamic Prioritization of Vulnerability Risk Assessment Findings
Granted Mar 17, 2026 (2y 5m to grant)
Patent 12566844
SYSTEM AND METHOD FOR COLLABORATIVE SMART EVIDENCE GATHERING AND INVESTIGATION FOR INCIDENT RESPONSE, ATTACK SURFACE MANAGEMENT, AND FORENSICS IN A COMPUTING ENVIRONMENT
Granted Mar 03, 2026 (2y 5m to grant)
Patent 12513001
BLOCKCHAIN VERIFICATION OF DIGITAL CONTENT ATTRIBUTIONS
Granted Dec 30, 2025 (2y 5m to grant)
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 3-4
Grant Probability: 82%
With Interview: 99% (+37.5%)
Median Time to Grant: 2y 6m
PTA Risk: High
Based on 17 resolved cases by this examiner. Grant probability derived from career allow rate.
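The 99% with-interview figure is consistent with applying the +37.5% relative interview lift to the 82% base probability and capping the displayed value; this derivation is an assumption, since the page does not state its formula:

```python
base = 14 / 17          # career allow rate, ~82%
lift = 0.375            # relative interview lift from examiner history
projected = min(base * (1 + lift), 0.99)  # assume display is capped at 99%
print(f"Grant probability with interview: {projected:.0%}")  # -> 99%
```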
