Prosecution Insights
Last updated: April 19, 2026
Application No. 18/759,438

PROTECTED TRAINING OF PRIVATE ADAPTER MODELS FOR A HOSTED FOUNDATION MODEL

Non-Final OA (§102, §103)
Filed: Jun 28, 2024
Examiner: BUI, JONATHAN A
Art Unit: 2443
Tech Center: 2400 — Computer Networks
Assignee: CrowdStrike, Inc.
OA Round: 1 (Non-Final)
Grant Probability: 81% (Favorable)
OA Rounds: 1-2
To Grant: 2y 9m
With Interview: 99%

Examiner Intelligence

Career Allow Rate: 81%, above average (479 granted / 590 resolved; +23.2% vs TC avg)
Interview Lift: strong, +24.5% among resolved cases with interview
Typical Timeline: 2y 9m avg prosecution; 16 currently pending
Career History: 606 total applications across all art units
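The headline allow rate reconciles directly with the raw counts shown above; a minimal sketch, assuming the displayed figure is simply granted over resolved, rounded to the nearest point:

```python
# Career allow rate from the raw counts shown above.
granted, resolved = 479, 590
allow_rate = granted / resolved
print(f"{allow_rate:.1%}")  # 81.2%, displayed above as 81%
```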

Statute-Specific Performance

§101: 10.8% (-29.2% vs TC avg)
§103: 41.6% (+1.6% vs TC avg)
§102: 26.4% (-13.6% vs TC avg)
§112: 16.0% (-24.0% vs TC avg)
Tech Center average shown for comparison is an estimate • Based on career data from 590 resolved cases
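Notably, subtracting each "vs TC avg" delta from the matching examiner rate yields the same 40.0% for every statute, which suggests the deltas are computed against a single estimated Tech Center baseline. A quick sketch backing that baseline out of the table above:

```python
# The per-statute deltas above all appear to be measured against one
# estimated Tech Center baseline; back it out from the table.
rates  = {"101": 10.8, "103": 41.6, "102": 26.4, "112": 16.0}
deltas = {"101": -29.2, "103": 1.6, "102": -13.6, "112": -24.0}
baselines = {s: round(rates[s] - deltas[s], 1) for s in rates}
print(baselines)  # every statute backs out to the same 40.0 estimate
```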

Office Action

§102 §103
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Claim Objections

Claim 9 is objected to because of the following informalities: the claim appears to inadvertently depend on claim 1 instead of claim 8. Appropriate correction is required.

Claim Rejections - 35 USC § 102

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:

A person shall be entitled to a patent unless – (a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.

Claims 1-4, 6, 8-11, 13, and 15-16 are rejected under 35 U.S.C. 102(a)(1) as being anticipated by Zhang et al. ("Diabetic retinopathy diagnosis and treatment system based on federal learning", 08-22-2023, CN-116630353 A (English translation), hereinafter referred to as Zhang).
Claim 1 is an independent claim, and Zhang discloses a method comprising: storing a learning model on local memory of a client computing device (each client comprises…a federal model, see claim 1); wherein the learning model configures the client computing device to update an adapter weight set (in step S5, each client iterates and trains the federal global model according to the pre-processing data, claim 1; in step S5, the learning data comprises…a model weight, claim 5), the learning model and the adapter weight set being protected from outbound network connections of the client computing device (the sent learning data is performed with communication encryption, the model weight is performed with model encryption by using differential privacy, claim 6); and wherein the learning model configures the client computing device to place a labeled dataset into a feature space (a diabetic retinopathy diagnosis and treatment system based on federal learning…each client comprises…a federal model, see claim 1; in the diabetic retinopathy diagnosis and treatment system based on federated learning provided by the present invention…categorize and label image data, page 4, 1st para.).

As per claim 2, claim 1 is incorporated, and Zhang further discloses the method further comprising: loading the labeled dataset into memory (each client comprises a plurality of diabetic retinopathy image data as local data, page 2, last para.; image data is selected from the pre-processing as a sample, page 6, para. 6); designating a loss function for placing the labeled dataset in a feature space (step S5, each client iteratively trains the federal global model…loss function used in training…calculating the sample through the loss function, page 6, para. 4-8); training a learning model on the designated loss function (see page 6, para. 5-8); and updating the adapter weight set based on a dataset placement learned by the learning model, wherein the adapter weight set is protected during each epoch at a first layer of the learning model (adjust inter-class weight from loss function, page 6, para. 5-8).

As per claim 3, claim 2 is incorporated, and Zhang further discloses wherein protecting the adapter weight set comprises performing a transformation operation upon the adapter weight set at the first layer (the model weight is performed on the model weight in a differential privacy, the differential privacy comprises a weight parameter noise adding…for gradient noise adding, page 3, last para.).

As per claim 4, claim 2 is incorporated, and Zhang further discloses wherein protecting the adapter weight set comprises performing a noise injection operation upon the adapter weight set at the first layer (the model weight is performed on the model weight in a differential privacy, the differential privacy comprises a weight parameter noise adding…for gradient noise adding, page 3, last para.).

As per claim 6, claim 2 is incorporated, and Zhang further discloses transmitting the updated adapter weight set to a cloud computing system hosting a hosted foundation model (each client sends learning data (including model weight) to the server, page 6, lines 4-5); wherein the adapter weight set occupies a parameter space of reduced dimensionality relative to a foundation weight set of the hosted foundation model (each client trains the federal global model and obtains learning data (including model weight) to send to the server after training is completed, page 6, para. 3-5; the server side aggregates all the learning data according to the weighted federation average, then updates the federation global model according to the result of the aggregation, page 7, 4th to last para.).

Claim 8 is an independent claim corresponding to independent claim 1 and is therefore rejected for similar reasoning.
Zhang further discloses one or more processors and memory communicatively coupled to the one or more processors as claimed (inherent client 20 hardware, see page 5, para. 2).

As per claim 9, claim [[8]] 1 is incorporated. Claim 9 corresponds to claim 2 and is therefore rejected for similar reasoning.

As per claim 10, claim 9 is incorporated. Claim 10 corresponds to claim 3 and is therefore rejected for similar reasoning.

As per claim 11, claim 9 is incorporated. Claim 11 corresponds to claim 4 and is therefore rejected for similar reasoning.

As per claim 13, claim 9 is incorporated. Claim 13 corresponds to claim 6 and is therefore rejected for similar reasoning.

Claim 15 is an independent claim, and Zhang discloses a method comprising: receiving, at a cloud computing system, a plurality of trained adapter weight sets updated by respective client computing devices (each client…sends the obtained learning data to the service end after the iterative training is finished, see claim 1; the learning data comprises a model weight, see claim 5) by training respective copies of a private adapter model based on labeled datasets from private threat databases (in step S5, each client iterates and trains the federal global model according to the pre-processing local data, claim 1; in step S5, the learning data comprises…a model weight, claim 5); training a hosted foundation model based on a labeled dataset from a hosted threat database (the service terminal 10 constructs a federation global model according to its own configuration data, page 6, para. 3; see also claim 1, S4); updating the plurality of trained adapter weight sets during the training (iterative training with obtaining weight learning data, see claims 1 and 5); and aggregating the plurality of trained adapter weight sets at a final layer of the hosted foundation model (updates the federation global model according to the aggregated result after iterative training, see claim 1).
As per claim 16, claim 15 is incorporated, and Zhang further discloses wherein aggregating the plurality of trained adapter weight sets is performed by an order-invariant aggregation function (updates the federation global model according to the aggregated result after iterative training, see claim 1; the service end aggregates all learning data according to the weighted federation average (i.e., it does not matter in what order the obtained learning data is received if the aggregated data is applied to the weighted federation average), page 3, last para.).

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103, which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 5, 7, 12, and 14 are rejected under 35 U.S.C. 103 as being unpatentable over Zhang as applied above, and further in view of Li et al. (US Patent No. 12,314,839 B1, hereinafter referred to as Li).
As per claim 5, claim 2 is incorporated; Zhang does not specifically disclose, but Li teaches, wherein a layer of the learning model comprises a rank-deficient coefficient matrix (multi-head attention mechanism computes three matrices by linearly projecting the input embeddings using learned weight matrices, col. 15, lines 60-67). Therefore, it would have been obvious to a person having ordinary skill in the art before the effective filing date of the applicant's claimed invention to incorporate Li's federated learning with Zhang's federated learning, because it would have allowed for fast and efficient data compression for information flowing between systems (Li, col. 2, lines 14-18).

As per claim 7, claim 1 is incorporated; Zhang does not specifically disclose, but Li teaches, wherein the learning model is structured based on multi-head attention (multi-head attention mechanism, see col. 15, lines 60-67). Therefore, it would have been obvious to a person having ordinary skill in the art before the effective filing date of the applicant's claimed invention to incorporate Li's federated learning with Zhang's federated learning, because it would have allowed for fast and efficient data compression for information flowing between systems (Li, col. 2, lines 14-18).

As per claim 12, claim 9 is incorporated. Claim 12 corresponds to claim 5 and is therefore rejected for similar reasoning.

As per claim 14, claim 9 is incorporated. Claim 14 corresponds to claim 7 and is therefore rejected for similar reasoning.

Conclusion

The prior art made of record and not relied upon is considered pertinent to applicant's disclosure:

WO 2025262183 A1 – generally teaches collaborative learning with a central server and local clients where calculated model weights have noise added at the central server to the weights of the aggregated model as a privacy-enhancing measure.

Pub. No. US 2024/0193433 A1 – generally teaches federated learning and an aggregation server that adds noise to locally computed weight parameters received from clients.

Pub. No. US 2024/0193433 A1 – generally teaches adding controlled noise to weight parameters in federated learning.

Pub. No. US 2015/0324686 A1 – generally teaches distributed model learning where local processing units compute local model updates to weights.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to JONATHAN A BUI, whose telephone number is (571) 270-7168. The examiner can normally be reached Mon-Fri, 9AM-5:30PM.

Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Nicholas R Taylor, can be reached at (571) 272-3889. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/JONATHAN A BUI/
Primary Examiner, Art Unit 2443
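An aside on the claim 16 mapping (illustrative only, not part of the Office Action): a weighted federated average of the kind Zhang is cited for is order-invariant because weighted summation commutes. A minimal sketch with made-up client updates, where each client contributes a weight vector and a sample count:

```python
# Minimal sketch (illustrative, not from the record): a weighted
# federated average is order-invariant because summation commutes,
# so shuffling the client updates leaves the aggregate unchanged.
def fed_avg(updates):
    """updates: list of (weight_vector, num_samples) pairs, one per client."""
    total = sum(n for _, n in updates)
    dim = len(updates[0][0])
    return [sum(w[i] * n for w, n in updates) / total for i in range(dim)]

clients = [([1.0, 2.0], 100), ([3.0, 4.0], 300), ([5.0, 6.0], 100)]
print(fed_avg(clients))        # [3.0, 4.0]
print(fed_avg(clients[::-1]))  # [3.0, 4.0] -- same result in reversed order
```

This is the property the examiner leans on for claim 16: because the aggregate is a sum scaled by the total sample count, the order in which the server receives client updates cannot change the result.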

Prosecution Timeline

Jun 28, 2024
Application Filed
Feb 14, 2026
Non-Final Rejection — §102, §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12603931: METHODS AND SYSTEMS FOR ENCODER PARAMETER SETTING OPTIMIZATION
Granted Apr 14, 2026 • 2y 5m to grant
Patent 12603893: METHOD AND SYSTEM FOR DYNAMIC USER APPLICATION CONTROL SERVICE
Granted Apr 14, 2026 • 2y 5m to grant
Patent 12596825: ENVIRONMENT DETECTION AND OPTIMIZATION FOR AN INFORMATION HANDLING SYSTEM
Granted Apr 07, 2026 • 2y 5m to grant
Patent 12596487: A DEVICE AND SYSTEM FOR THE SECURE STORAGE OF DATA IN A DISTRIBUTED MANNER
Granted Apr 07, 2026 • 2y 5m to grant
Patent 12580984: ENABLING MULTI-EDGE APPLICATIONS
Granted Mar 17, 2026 • 2y 5m to grant
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 1-2
Grant Probability: 81%
With Interview: 99% (+24.5%)
Median Time to Grant: 2y 9m
PTA Risk: Low
Based on 590 resolved cases by this examiner. Grant probability derived from career allow rate.
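The 99% interview-adjusted figure is consistent with simply adding the +24.5% lift to the 81% base rate and capping the result at 99%. This is a guess at the tool's arithmetic, which the page does not state:

```python
# Hypothetical reconstruction of the interview-adjusted projection:
# base grant probability plus interview lift, capped at 99%.
base_pct, lift_pct, cap_pct = 81.0, 24.5, 99.0
with_interview = min(base_pct + lift_pct, cap_pct)
print(with_interview)  # 99.0
```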
