Prosecution Insights
Last updated: April 19, 2026
Application No. 18/162,894

REPROGRAMMABLE FEDERATED LEARNING

Final Rejection — §103, §112

Filed: Feb 01, 2023
Examiner: LEE, WILLIAM MICHAEL
Art Unit: 2145
Tech Center: 2100 — Computer Architecture & Software
Assignee: Rensselaer Polytechnic Institute
OA Round: 2 (Final)
Grant Probability: Favorable
OA Rounds: 3-4
To Grant: 3y 3m

Examiner Intelligence

Career Allow Rate: 0% (0 granted / 0 resolved; -55.0% vs TC avg)
Interview Lift: +0.0% (minimal lift, comparing resolved cases with vs. without interview)
Avg Prosecution: 3y 3m (typical timeline)
Career History: 8 total applications across all art units, 8 currently pending

Statute-Specific Performance

Statute   Allow Rate   vs TC Avg
§101      23.3%        -16.7%
§103      46.7%        +6.7%
§102      3.3%         -36.7%
§112      26.7%        -13.3%

Tech Center averages are estimates. Based on career data from 0 resolved cases.

Office Action

§103, §112
DETAILED ACTION

This action is in response to the original filing on February 1, 2023 and the Remarks and Amendments filed on January 8, 2026. Claims 1-20 are pending and have been considered below. Claims 1, 9, and 17 are independent claims. Claims 1-3, 5-7, 9-10, and 17-18 are amended.

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Information Disclosure Statement

The information disclosure statement (IDS) submitted on November 12, 2025 is being considered by the examiner.

Claim Rejections - 35 USC § 112

The following is a quotation of 35 U.S.C. 112(b):

(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.

The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:

The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.

Claim 8 is rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or for applications subject to pre-AIA 35 U.S.C. 112, the applicant), regards as the invention. The term “the training component” in claim 8 lacks sufficient antecedent basis, as there is no prior reference to a training component in these claims. For examination purposes, this term is interpreted to mean the processor of claim 1.

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1-3, 5-6, and 8 are rejected under 35 U.S.C. 103 as being unpatentable over McMahan et al. (US 20190227980 A1) in view of Han et al. (“Detection of Face Mask Wearing for COVID-19 Protection based on Transfer Learning and Classic CNN Model,” hereinafter Han) and further in view of An et al. (“Input-Tuning: Adapting Unfamiliar Inputs to Frozen Pretrained Models,” hereinafter An).

Regarding claim 1, McMahan teaches a server device (Fig. 2 – 200, 210, ¶134 “The system 200 includes a server 210”), comprising: a processor that executes computer-executable components stored in a non-transitory computer-readable memory to perform operations (Fig. 2 – 212-216, ¶136 “The one or more memory devices 214 can store information accessible by the one or more processors 212, including computer-readable instructions 216 that can be executed by the one or more processors 212”) comprising:
Regarding the limitation sharing a pre-trained frozen neural network with a set of client devices, McMahan teaches sharing a neural network with a set of client devices (Fig. 1 – 102-106, ¶127 “The one or more server computing device(s) 104 can be configured to access machine-learned model 106, and to provide model 106 to a plurality of client computing devices 102. Model 106 can be… a neural network”). However, McMahan fails to teach a pre-trained frozen neural network… Han, in the same field of endeavor, teaches a pre-trained frozen neural network (Page 3, Section III, ¶4 “feature extraction can use the pre-trained network on the source task (… freeze the weights of all layers…)”).

Regarding the limitation and orchestrating reprogrammable federated learning of the pre-trained and frozen neural network among the set of client devices, wherein reprogrammable federated learning comprises foregoing altering already-trained internal parameters of the pre-trained and frozen neural network but rather sandwiching the pre-trained and frozen neural network in series between a newly-inserted trainable input layer and a newly-insertable trainable output layer, McMahan teaches and orchestrating federated learning of the neural network among the set of client devices (¶54 “Example algorithms provided by the present disclosure are based on… federated learning. For example, in federated learning, a shared model can be trained while leaving the training data on each user's client computing device”). However, McMahan fails to teach and orchestrating reprogrammable federated learning of the pre-trained and frozen neural network… wherein reprogrammable federated learning comprises foregoing altering already-trained internal parameters of the pre-trained and frozen neural network but rather sandwiching the pre-trained and frozen neural network in series between a newly-inserted trainable input layer and a newly-insertable trainable output layer.

Han teaches foregoing altering already-trained internal parameters of the pre-trained and frozen neural network but rather sandwiching the pre-trained and frozen neural network in series between… a newly-insertable trainable output layer (Page 3, Column 1, Section III, ¶4 “feature extraction can use the pre-trained network on the source task (remove the last fully connected layer, freeze the weights of all layers except the fully connected layer, replace it with a new classifier with random weights, and train only that layer) as the feature extractor for the target task,” wherein training only the “new classifier with random weights,” or the newly-insertable trainable output layer, while freezing “the weights of all layers except the fully connected layer,” encompasses foregoing altering already-trained internal parameters of the pre-trained and frozen neural network).
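To make Han's feature-extraction recipe concrete, the following is a minimal PyTorch sketch; the choice of backbone and the class count are illustrative assumptions, not details from the cited references.

```python
import torch.nn as nn
from torchvision import models

# Load a pre-trained backbone and freeze every already-trained parameter,
# mirroring Han's "freeze the weights of all layers" step.
backbone = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
for param in backbone.parameters():
    param.requires_grad = False

# Remove the last fully connected layer and replace it with a new classifier
# with random weights; only this newly-inserted output layer is trained.
backbone.fc = nn.Linear(backbone.fc.in_features, 10)  # 10 classes (illustrative)
```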
However, Han fails to teach wherein reprogrammable federated learning comprises… sandwiching the pre-trained and frozen neural network in series between a newly-inserted trainable input layer… An, in the same field of endeavor, teaches wherein reprogrammable federated learning comprises… sandwiching the pre-trained and frozen neural network (Page 1, Abstract: “By only tuning continuous prompts with a frozen pre-trained language model (PLM), prompt-tuning takes a step towards deploying a shared frozen PLM to serve numerous downstream tasks,” Page 7, Column 1, Section 6: “Section 6.1 analyzes the performance with different PLM backbones,” Section 6.1: “We evaluate the effectiveness on different backbones from two aspects: backbone architectures (GPT2-Large and T5-Large)… with word embeddings as input”) in series between a newly-inserted trainable input layer (Page 2, Fig. 2 depicts an “Input-Adapter” inserted below a “Frozen PLM,” Page 2, Column 2, ¶1 “we add a lightweight trainable module between word embeddings and the bottom layer of the PLM, to adjust the encoding of unfamiliar inputs directly (i.e., the “input-adapter” module in Figure 2),” Page 5, Fig. 4 and Equation (9) depict the “Input-Adapter,” Page 5, Col. 1, ¶1 “Here σ is an element-wise nonlinear activation function (ReLU). W1 and W2 are learnable matrices. We denote T(·) as an input-adapter, since it plays the role that adapts the surface representations of x to better utilize the frozen PLM. Figure 4 illustrates this module,” a person having ordinary skill in the art would recognize that an “input-adapter” that has “learnable matrices,” uses a “nonlinear activation function (ReLU),” and adapts input representations before forwarding the results to the “Frozen PLM” functions in substantially the same way as a trainable input layer; the act of sandwiching the pre-trained and frozen neural network in series between a newly-inserted trainable input layer, as taught by An, and a newly-insertable trainable output layer, as taught by Han, functions in substantially the same way as reprogrammable federated learning) …

McMahan, Han, and An are analogous art to the claimed invention as all are from the same field of endeavor of machine learning. Therefore, it would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to combine the newly-inserted trainable input layer and pre-trained and frozen neural network of An and the newly-insertable trainable output layer and pre-trained and frozen neural network of Han with the neural network, federated learning, client devices, and server device of McMahan. The motivation to do so is to “significantly reduce the computational resources required for deep learning and substantially improve the accuracy of convolutional neural networks under small sample training” (Han, Page 1, Column 1, ¶1) and to “achieve a comparable or even better performance than fine-tuning” (An, Abstract).
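Read together, the combination the examiner articulates is a frozen backbone placed in series between a trainable input adapter (per An) and a trainable output head (per Han). A minimal sketch under those assumptions; the module names and dimensions are hypothetical, not drawn from any reference:

```python
import torch.nn as nn

class SandwichModel(nn.Module):
    """A frozen pre-trained network sandwiched in series between a
    newly-inserted trainable input layer and trainable output layer."""

    def __init__(self, frozen_backbone: nn.Module, in_dim: int,
                 backbone_out_dim: int, num_classes: int, hidden: int = 64):
        super().__init__()
        # Trainable input adapter: two learnable matrices with an element-wise
        # ReLU between them, echoing An's input-adapter T(x).
        self.input_adapter = nn.Sequential(
            nn.Linear(in_dim, hidden), nn.ReLU(), nn.Linear(hidden, in_dim))
        self.backbone = frozen_backbone
        for p in self.backbone.parameters():
            p.requires_grad = False  # forego altering already-trained parameters
        # Trainable output head, echoing Han's new classifier with random weights.
        self.output_head = nn.Linear(backbone_out_dim, num_classes)

    def forward(self, x):
        return self.output_head(self.backbone(self.input_adapter(x)))
```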
Regarding claim 2, McMahan in view of Han and further in view of An teaches the server device of claim 1 (and thus the rejection of claim 1 is incorporated). Regarding the limitation and wherein the reprogrammable federated learning involves the at least one trainable input layer and the at least one trainable output layer, but not the pre-trained and frozen neural network, being locally adjusted by the set of client devices, McMahan teaches and wherein the federated learning involves the at least one trainable input layer and the at least one trainable output layer… being locally adjusted by the set of client devices (Fig. 1 – 106, “Model 106 can be… a neural network,” Fig. 3 – 306-308, ¶24 “after receiving the machine-learned model, each selected client computing device can then determine a local update,” although not explicitly stated, a person having ordinary skill in the art would recognize that it is implicit that federated learning involves the at least one trainable input layer and the at least one trainable output layer of a neural network when training a neural network with federated learning). However, McMahan fails to teach reprogrammable federated learning and but not the pre-trained and frozen neural network, being locally adjusted… Han teaches but not the pre-trained and frozen neural network, being locally adjusted (Page 3, Column 1, Section III, ¶4 “feature extraction can use the pre-trained network on the source task (remove the last fully connected layer, freeze the weights of all layers except the fully connected layer, replace it with a new classifier with random weights, and train only that layer)”). However, Han fails to teach the reprogrammable federated learning. An teaches the reprogrammable federated learning (Fig. 2, Page 2, Column 2, ¶1 “we add a lightweight trainable module between word embeddings and the bottom layer of the PLM,” wherein the combination of the trainable input layer and the pre-trained and frozen neural network, as taught by An, and the trainable output layer and the pre-trained and frozen neural network, as taught by Han, functions in substantially the same manner as the reprogrammable federated learning as described above with respect to claim 1).

McMahan, Han, and An are analogous art to the claimed invention as all are from the same field of endeavor of machine learning. Therefore, it would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to combine the trainable input layer and pre-trained and frozen neural network of An and the trainable output layer and pre-trained and frozen neural network of Han with the federated learning and client devices of McMahan. The motivation to do so is to “significantly reduce the computational resources required for deep learning and substantially improve the accuracy of convolutional neural networks under small sample training” (Han, Page 1, Column 1, ¶1) and to “achieve a comparable or even better performance than fine-tuning” (An, Abstract).
Regarding claim 3, McMahan in view of Han and further in view of An teaches the server device of claim 2 (and thus the rejection of claim 2 is incorporated). Regarding the limitation wherein, during an iteration of the reprogrammable federated learning, the processor also: shares a global internal parameter value array of the at least one trainable input layer and of the at least one trainable output layer with the set of client devices, McMahan teaches wherein, during an iteration of the federated learning, the processor also: shares a global internal parameter value array of the at least one trainable input layer and of the at least one trainable output layer with the set of client devices (Fig. 1 – 106, “Model 106 can be… a neural network,” Fig. 3 – 304-306, ¶149 “At (304), the method (300) can include providing, by the one or more server computing devices, the machine-learned model to the selected client computing devices, and at (306), the method (300) can include receiving, by the selected client computing devices, the machine-learned model. In some implementations, the machine-learned model can be, for example, a global set of parameters,” a person having ordinary skill in the art would recognize that sharing a global internal parameter value array of the at least one trainable input layer and of the at least one trainable output layer with the set of client devices is implicit when training a neural network with federated learning). However, McMahan fails to teach the reprogrammable federated learning. The combination of Han and An teaches the reprogrammable federated learning (Han: Page 3, Column 1, Section III, ¶4, and An: Fig. 2, Page 2, Column 2, ¶1, as described above with respect to claim 1).

McMahan further teaches and instructs the set of client devices to locally update the global internal parameter value array of the at least one trainable input layer and of the at least one trainable output layer using local training datasets, thereby causing the set of client devices to respectively generate a set of locally-updated internal parameter value arrays of the at least one trainable input layer and of the at least one trainable output layer (Fig. 1 – 106, “Model 106 can be… a neural network,” Fig. 2 – 238, Fig. 3 – 306-310, ¶143 “a client device 230 can receive a machine-learned model (such as a set of global parameters) from the server 210, train the machine-learned model based at least in part on the local dataset to generate a locally-trained model (such as updated local values for the global set of parameters for the machine-learned model), determine a difference between the machine-learned model and the locally-trained model (such as a difference between the global parameters and the updated local values), and clip the difference to generate the local update. In some implementations, the local update can be expressed in a vector, a matrix, or other suitable format,” a person having ordinary skill in the art would recognize that locally updating the global internal parameter value array of the at least one trainable input layer and of the at least one trainable output layer using local training datasets is implicit when training a neural network with federated learning).

McMahan, Han, and An are analogous art to the claimed invention as all are from the same field of endeavor of machine learning. Therefore, it would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to combine the trainable input layer and pre-trained and frozen neural network of An and the trainable output layer and pre-trained and frozen neural network of Han with the processor, federated learning, client devices, global internal parameter value arrays, locally-updated internal parameter value arrays, and local training datasets of McMahan. The motivation to do so is to “significantly reduce the computational resources required for deep learning and substantially improve the accuracy of convolutional neural networks under small sample training” (Han, Page 1, Column 1, ¶1) and to “achieve a comparable or even better performance than fine-tuning” (An, Abstract).
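As a sketch of the claim-3 exchange applied to such a model: the server shares a global parameter value array covering only the inserted layers, and each client locally updates those values on its own dataset. The function name, the partial state_dict convention, and the module names are assumptions carried over from the hypothetical SandwichModel above; note that the requires_grad filter also captures claim 2's constraint that the frozen backbone is never locally adjusted.

```python
import torch
import torch.nn.functional as F

def client_local_update(model, global_params, local_loader, lr=0.01, epochs=1):
    """Load the server's global values for the trainable layers, train them on
    the client's local dataset, and return the locally-updated value array.
    The frozen backbone is used in forward passes but never modified."""
    model.load_state_dict(global_params, strict=False)  # only inserted layers
    trainable = [p for p in model.parameters() if p.requires_grad]
    optimizer = torch.optim.SGD(trainable, lr=lr)
    for _ in range(epochs):
        for x, y in local_loader:
            optimizer.zero_grad()
            F.cross_entropy(model(x), y).backward()
            optimizer.step()
    # Return only the inserted layers' parameters, not the frozen backbone's.
    return {k: v.detach().clone() for k, v in model.state_dict().items()
            if "input_adapter" in k or "output_head" in k}
```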
Regarding claim 5, McMahan in view of Han and further in view of An teaches the server device of claim 3 (and thus the rejection of claim 3 is incorporated). Regarding the limitation wherein, during the iteration of the reprogrammable federated learning, the processor also: accesses the set of locally-updated internal parameter value arrays from the set of client devices, McMahan teaches wherein, during the iteration of the federated learning, the processor also: accesses the set of locally-updated internal parameter value arrays from the set of client devices (Fig. 3 – 310-312, ¶151 “At (310), the method (300) can include providing, by each selected client computing device, the local update to the one or more server computing devices, and at (312), the method (300) can include receiving, by the one or more server computing devices, the local updates. In some implementations, the local updates can be, for example, expressed in one or more matrices, vectors, or other suitable format”). However, McMahan fails to teach the reprogrammable federated learning. The combination of Han and An teaches the reprogrammable federated learning (Han: Page 3, Column 1, Section III, ¶4, and An: Fig. 2, Page 2, Column 2, ¶1, as described above with respect to claim 1).

McMahan further teaches and aggregates the set of locally-updated internal parameter value arrays into a new global internal parameter value array (Fig. 3 – 314-316, ¶152 “At (314), the method (300) can include determining a differentially private aggregate of the local updates,” ¶156 “At (316), the method (300) can include determining an updated machine-learned model based at least in part on the bounded-sensitivity data-weighted average of the local updates”).

McMahan, Han, and An are analogous art to the claimed invention as all are from the same field of endeavor of machine learning. Therefore, it would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to combine the trainable input layer and pre-trained and frozen neural network of An and the trainable output layer and pre-trained and frozen neural network of Han with the processor, federated learning, client devices, locally-updated internal parameter value arrays, and new global internal parameter value array of McMahan. The motivation to do so is to “significantly reduce the computational resources required for deep learning and substantially improve the accuracy of convolutional neural networks under small sample training” (Han, Page 1, Column 1, ¶1) and to “achieve a comparable or even better performance than fine-tuning” (An, Abstract).
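Server-side, the claim-5 aggregation of the returned arrays into a new global array could be as simple as an element-wise average. A minimal sketch; unweighted averaging is an assumption here, and the data-weighted FedAvg variant is noted with claim 7 below:

```python
import torch

def aggregate(client_updates):
    """Aggregate the set of locally-updated internal parameter value arrays
    into a new global array for the inserted (trainable) layers."""
    return {k: torch.stack([u[k] for u in client_updates]).mean(dim=0)
            for k in client_updates[0]}

# One round; repeating it shares the new global array back out (cf. claim 6):
# global_params = aggregate([client_local_update(model, global_params, loader)
#                            for loader in client_loaders])
```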
Regarding claim 6, McMahan in view of Han and further in view of An teaches the server device of claim 5 (and thus the rejection of claim 5 is incorporated). Regarding the limitation wherein, during a next iteration of the reprogrammable federated learning, the processor also: shares the new global internal parameter value array with the set of client devices, McMahan teaches wherein, during a next iteration of the federated learning, the processor also: shares the new global internal parameter value array with the set of client devices (Fig. 3 – 318-320, ¶157 “At (318), the method (300) can include providing, by the one or more server computing devices, the updated machine-learned model to one or more client computing devices, and at (320), the method (300) can include receiving, by the one or more client computing devices, the updated machine-learned model… the updated machine-learned model can be a global model provided to the pool of available client computing devices”). However, McMahan fails to teach the reprogrammable federated learning. The combination of Han and An teaches the reprogrammable federated learning (Han: Page 3, Column 1, Section III, ¶4, and An: Fig. 2, Page 2, Column 2, ¶1, as described above with respect to claim 1).

McMahan further teaches and instructs the set of client devices to locally update the new global internal parameter value array using the local training datasets (Fig. 3, ¶158 “Any number of iterations of local and global updates can be performed. That is, method (300) can be performed iteratively to update the machine-learned model based on locally stored training data over time,” although not explicitly stated, one of ordinary skill in the art would recognize that instructing the set of client devices to locally update the new global internal parameter value array using the local training datasets is implied to occur when “Any number of iterations of local and global updates can be performed”).

McMahan, Han, and An are analogous art to the claimed invention as all are from the same field of endeavor of machine learning. Therefore, it would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to combine the trainable input layer and pre-trained and frozen neural network of An and the trainable output layer and pre-trained and frozen neural network of Han with the processor, federated learning, client devices, local training datasets, and new global internal parameter value array of McMahan. The motivation to do so is to “significantly reduce the computational resources required for deep learning and substantially improve the accuracy of convolutional neural networks under small sample training” (Han, Page 1, Column 1, ¶1) and to “achieve a comparable or even better performance than fine-tuning” (An, Abstract).
Regarding claim 8, McMahan in view of Han and further in view of An teaches the server device of claim 5 (and thus the rejection of claim 5 is incorporated). Regarding the limitation wherein, during the iteration of the reprogrammable federated learning, the training component determines, via a moment accounts technique, how much of a privacy budget associated with the pre-trained and frozen neural network has been consumed by the iteration, McMahan teaches wherein, during the iteration of the federated learning, the training component determines, via a moment accounts technique, how much of a privacy budget associated with the pre-trained and frozen neural network has been consumed by the iteration (¶113 “a moments accountant M can be used to achieve privacy bounds. For example, the moments accountant… can upper bound the total privacy cost of T steps”). However, McMahan fails to teach the reprogrammable federated learning. The combination of Han and An teaches the reprogrammable federated learning (Han: Page 3, Column 1, Section III, ¶4, and An: Fig. 2, Page 2, Column 2, ¶1, as described above with respect to claim 1).

McMahan, Han, and An are analogous art to the claimed invention as all are from the same field of endeavor of machine learning. Therefore, it would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to combine the trainable input layer and pre-trained and frozen neural network of An and the trainable output layer and pre-trained and frozen neural network of Han with the federated learning, moment accounts technique, and privacy budget of McMahan. The motivation to do so is to “significantly reduce the computational resources required for deep learning and substantially improve the accuracy of convolutional neural networks under small sample training” (Han, Page 1, Column 1, ¶1) and to “achieve a comparable or even better performance than fine-tuning” (An, Abstract).

Claim 4 is rejected under 35 U.S.C. 103 as being unpatentable over McMahan in view of Han and further in view of An, and further in view of Cheng et al. (US 20230039182, hereinafter Cheng).

Regarding claim 4, McMahan in view of Han and further in view of An teaches the server device of claim 3 (and thus the rejection of claim 3 is incorporated). Regarding the limitation wherein the set of client devices perform such local updates via differentially private stochastic gradient descent, McMahan teaches wherein the set of client devices perform such local updates (Fig. 2 – 238, Fig. 3 – 306-310, ¶143 “a client device 230 can receive a machine-learned model (such as a set of global parameters) from the server 210, train the machine-learned model based at least in part on the local dataset to generate a locally-trained model”). However, McMahan fails to teach updates via differentially private stochastic gradient descent. Cheng, in the same field of endeavor, teaches updates via differentially private stochastic gradient descent (Fig. 6 – 601, ¶120 “each edge node device may independently select a differential privacy mechanism… such as a differentially-private stochastic gradient descent (DP-SGD)”).

McMahan and Cheng are analogous art to the claimed invention as both are from the same field of endeavor of machine learning. Therefore, it would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to combine the differentially private stochastic gradient descent of Cheng with the local client devices of McMahan. The motivation to do so is to increase privacy during federated learning (Cheng, ¶120 “DP-SGD is a method that improves a stochastic gradient descent algorithm to realize differentially-private machine learning”).
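For context on the claim-4 and claim-8 elements, here is a rough sketch of a differentially private local step over the trainable layers only, with per-example gradient clipping and Gaussian noise. The hyperparameter values are illustrative, and a moments accountant (as in McMahan ¶113) would separately upper-bound the cumulative privacy budget consumed across such steps:

```python
import torch
import torch.nn.functional as F

def dp_sgd_step(model, xs, ys, lr=0.01, clip_norm=1.0, noise_mult=1.1):
    """One DP-SGD step on the trainable layers only: clip each per-example
    gradient to clip_norm, sum, add Gaussian noise, apply averaged update."""
    params = [p for p in model.parameters() if p.requires_grad]
    summed = [torch.zeros_like(p) for p in params]
    for x, y in zip(xs, ys):                           # per-example gradients
        loss = F.cross_entropy(model(x.unsqueeze(0)), y.unsqueeze(0))
        grads = torch.autograd.grad(loss, params)
        norm = torch.sqrt(sum(g.pow(2).sum() for g in grads))
        scale = (clip_norm / (norm + 1e-12)).clamp(max=1.0)
        for s, g in zip(summed, grads):
            s.add_(g * scale)
    with torch.no_grad():
        for p, s in zip(params, summed):
            noise = torch.randn_like(s) * noise_mult * clip_norm
            p.add_(-(lr / len(xs)) * (s + noise))      # noisy averaged update
```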
Claim 7 is rejected under 35 U.S.C. 103 as being unpatentable over McMahan in view of Han and further in view of An, and further in view of Baracaldo et al. (“Federated Learning: A Comprehensive Overview of Methods and Applications,” hereinafter Baracaldo).

Regarding claim 7, McMahan in view of Han and further in view of An teaches the server device of claim 5 (and thus the rejection of claim 5 is incorporated). Regarding the limitation wherein the aggregation of the set of locally-updated internal parameter value arrays is performed via federated averaging, McMahan teaches wherein the aggregation of the set of locally-updated internal parameter value arrays is performed (Fig. 3 – 314, ¶152 “At (314), the method (300) can include determining a differentially private aggregate of the local updates”). However, McMahan fails to teach performed via federated averaging. Baracaldo, in the same field of endeavor, teaches performed via federated averaging (Page 2, ¶2 “Once the aggregator has received the model updates from the parties, they can then be merged into a common model… this can be as simple as averaging the weights, as proposed in the FedAvg algorithm,” Page 8, ¶2 “The aggregator’s fusion algorithm F averages the parameters of each party,” Page 9, ¶2 “Using, for example, FedAvg as the fusion function F, we can then compute locally the new local model weights… All parties send their model weights to the aggregator where the weights are averaged”).

McMahan and Baracaldo are analogous art to the claimed invention as both are from the same field of endeavor of machine learning. Therefore, it would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to combine the federated averaging of Baracaldo with the locally-updated internal parameter value arrays of McMahan. The motivation to do so is to increase the efficiency of federated learning (Baracaldo, Page 8, ¶2 “FedAvg… is more effective by taking advantage of independent processing at each party… Experiments show that this approach performs well for different model types”).
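For reference, the FedAvg fusion that Baracaldo describes reduces to a data-weighted average of the parties' locally-updated weights; with $n_k$ training examples at party $k$ and $n = \sum_k n_k$:

$$w_{t+1} = \sum_{k=1}^{K} \frac{n_k}{n}\, w_{t+1}^{k}$$

so parties with more local data contribute proportionally more to the new global weights.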
Claims 9-16 recite a computer-implemented method that parallels apparatus claims 1-8, respectively. Therefore, the analysis discussed above with respect to claims 1-8 also applies to claims 9-16, respectively. Accordingly, claims 9-16 are rejected based on substantially the same rationale as set forth above with respect to claims 1-8, respectively. Claims 17-20 recite a computer-implemented method that parallels apparatus claims 1-4, respectively. Therefore, the analysis discussed above with respect to claims 1-4 also applies to claims 17-20, respectively. Accordingly, claims 17-20 are rejected based on substantially the same rationale as set forth above with respect to claims 1-4, respectively.

Response to Amendments

Claim 1 and its associated dependent claims are no longer being interpreted under 35 U.S.C. 112(f) in view of applicant’s amendments to these claims and remarks filed January 8, 2026.

Response to Arguments

Applicant’s arguments (see page 2, paragraph 2, filed January 8, 2026) with respect to claims 1-8 have been fully considered and are persuasive. The interpretation of claims 1-8 under 35 U.S.C. 112(f) has been withdrawn.

Applicant’s arguments and amendments, filed January 8, 2026, regarding the rejections made under 35 U.S.C. 103 in the previous Office action have been fully considered but are moot, as they do not apply to the references Han and An being used in the current rejections of claims 1, 9, and 17 and their associated dependent claims 2-8, 10-16, and 18-20, respectively, to teach the amended claim limitation directed to “foregoing altering already-trained internal parameters of the pre-trained and frozen neural network but rather sandwiching the pre-trained frozen neural network in series between a newly-inserted trainable input layer and a newly-insertable trainable output layer.” Specifically, Han teaches “foregoing altering already-trained internal parameters of the pre-trained and frozen neural network but rather sandwiching the pre-trained and frozen neural network in series between… a newly-insertable trainable output layer” (Page 3, Column 1, Section III, ¶4). Additionally, the reference An teaches “sandwiching the pre-trained and frozen neural network in series between a newly-inserted trainable input layer” (Fig. 2, Page 2, Column 2, ¶1). With the references Han and An teaching the subject matter introduced in the amendments, the rejections under 35 U.S.C. 103 stand.

Conclusion

Applicant’s amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).

A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to WILLIAM M LEE, whose telephone number is (571) 272-4761. The examiner can normally be reached Mon-Fri, 8am-5pm. Examiner interviews are available via telephone, in person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Cesar Paula, can be reached at (571) 272-4128. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format.
For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/WILLIAM MICHAEL LEE/
Examiner, Art Unit 2145

/CESAR B PAULA/
Supervisory Patent Examiner, Art Unit 2145

Prosecution Timeline

Feb 01, 2023
Application Filed
Nov 04, 2025
Non-Final Rejection — §103, §112
Jan 08, 2026
Response Filed
Mar 25, 2026
Final Rejection — §103, §112 (current)


Prosecution Projections

Expected OA Rounds: 3-4
Grant Probability: Favorable
Median Time to Grant: 3y 3m
PTA Risk: Moderate

Based on 0 resolved cases by this examiner. Grant probability derived from career allow rate.
