Prosecution Insights
Last updated: April 19, 2026
Application No. 18/572,937

METHOD AND APPARATUS FOR JOINTLY TRAINING NATURAL LANGUAGE PROCESSING MODEL BASED ON PRIVACY PROTECTION

Final Rejection: §101, §103, §112

Filed: Jul 22, 2024
Examiner: CARNES, THOMAS A
Art Unit: 2436
Tech Center: 2400 — Computer Networks
Assignee: Alipay (Hangzhou) Information Technology Co., Ltd.
OA Round: 2 (Final)

Grant Probability: 67% (Favorable)
OA Rounds: 3-4
To Grant: 3y 2m
With Interview: 99%

Examiner Intelligence

Career Allow Rate: 67% — above average (47 granted / 70 resolved; +9.1% vs TC avg)
Interview Lift: +73.2% — strong (resolved cases with interview)
Typical Timeline: 3y 2m avg prosecution; 25 currently pending
Career History: 95 total applications across all art units

Statute-Specific Performance

§101: 8.2% (-31.8% vs TC avg)
§103: 54.0% (+14.0% vs TC avg)
§102: 9.2% (-30.8% vs TC avg)
§112: 24.7% (-15.3% vs TC avg)

Based on career data from 70 resolved cases; comparisons are against a Tech Center average estimate.

Office Action

DETAILED ACTION

This Office Action is in response to the communication filed on 1/12/2026. Claims 6 and 12 have been canceled. Claims 1-5, 7-11, and 13-14 are pending. Claims 1-5, 7-11, and 13-14 are rejected.

The Examiner cites particular sections in the references as applied to the claims below for the convenience of the applicant(s). Although the specified citations are representative of the teachings in the art and are applied to the specific limitations within the individual claims, other passages and figures may apply as well. It is respectfully requested that, in preparing responses, the applicant(s) fully consider the references in their entirety as potentially teaching all or part of the claimed invention, as well as the context of the passage as taught by the prior art or disclosed by the Examiner.

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first-inventor-to-file provisions of the AIA.

Claim Objections

The claim objections are withdrawn due to Applicant's amendments.

Claim Rejections - 35 USC § 112

The claim rejections under § 112 are withdrawn due to Applicant's amendments.

Claim Rejections - 35 USC § 101

The § 101 rejection is withdrawn due to updated guidance provided to the examiner on December 5, 2025. In Ex parte Desjardins, Appeal No. 2024-000567 (PTAB September 26, 2025, Appeals Review Panel Decision) (precedential), the claimed invention was a method of training a machine learning model on a series of tasks. The Appeals Review Panel (ARP) credited benefits including reduced storage, reduced system complexity and streamlining, and preservation of performance attributes associated with earlier tasks during subsequent computational tasks as technological improvements disclosed in the patent application specification. Specifically, the ARP upheld the Step 2A Prong One finding that the claims recited an abstract idea (i.e., a mathematical concept).
In Step 2A Prong Two, the ARP then determined that the specification identified improvements in how the machine learning model itself operates, including training a machine learning model to learn new tasks while protecting knowledge about previous tasks to overcome the problem of "catastrophic forgetting" encountered in continual learning systems. Importantly, the ARP evaluated the claims as a whole in discerning that at least the limitation "adjust the first values of the plurality of parameters to optimize performance of the machine learning model on the second machine learning task while protecting performance of the machine learning model on the first machine learning task" reflected the improvement disclosed in the specification. Accordingly, the claims as a whole integrated what would otherwise be a judicial exception into a practical application at Step 2A Prong Two, and the claims were therefore deemed not to be directed to a judicial exception (Step 2A: NO). The claim limitations provide sufficient detail and support to reflect the improvement identified in the specification.

Response to Arguments

Applicant's arguments filed 1/12/2026 have been fully considered but are not persuasive. Applicant argues that Qian in view of Lai does not teach (a) noise power or (b) privacy budget, and (c) that no motivation to combine exists (i.e., that the examiner used hindsight reasoning and the combination has no reasonable expectation of success).

In response to applicant's argument that the references fail to show certain features of the invention, specifically noise power: the references do disclose noise power, because noise power can be broadly interpreted as any parameter controlling the magnitude of noise. Lai discloses a noise scale which is used to determine a boundary for noise. Therefore, Lai broadly teaches noise power. The claims are interpreted in light of the specification, but limitations from the specification are not read into the claims. See In re Van Geuns, 988 F.2d 1181, 26 USPQ2d 1057 (Fed. Cir. 1993).

In response to applicant's argument that the references fail to show certain features of the invention, specifically privacy budget: privacy budget can be broadly interpreted as anything which limits privacy loss in differential privacy. Lai teaches creating a boundary for total privacy loss, which limits the privacy loss at the boundary created. Therefore, Lai teaches privacy budget. The claims are interpreted in light of the specification, but limitations from the specification are not read into the claims. See In re Van Geuns, 988 F.2d 1181, 26 USPQ2d 1057 (Fed. Cir. 1993).

In response to applicant's argument that the examiner's conclusion of obviousness is based upon improper hindsight reasoning, it must be recognized that any judgment on obviousness is in a sense necessarily a reconstruction based upon hindsight reasoning. But so long as it takes into account only knowledge which was within the level of ordinary skill at the time the claimed invention was made, and does not include knowledge gleaned only from the applicant's disclosure, such a reconstruction is proper. See In re McLaughlin, 443 F.2d 1392, 170 USPQ 209 (CCPA 1971).

Applicant additionally recites "no reasonable expectation of success" to argue against obviousness. MPEP § 2143.02(I) recites: "The reasonable expectation of success requirement refers to 'the likelihood of success' in combining or modifying prior art disclosures to meet the limitations of the claimed invention." Lai teaches the same concepts as applicant. The claims are interpreted under BRI; the only difference between the claims of the instant application and the prior art is the different language used to describe the same concepts. If a person skilled in differential privacy wanted to limit noise magnitude (noise power), they would reasonably use the noise scale of Lai, and if they wanted to limit privacy loss (privacy budget), they would reasonably use the boundary for privacy loss of Lai. These are the same concepts being used in the same way in the same field of endeavor. Therefore, a person of ordinary skill would have a reasonable expectation of success in incorporating the teachings of Lai into the disclosure of Qian.

Applicant argues that the dependent claims are allowable based on their respective allowable base claims; however, the base claims are not in condition for allowance, and therefore the dependent claims are not allowable.

Examiner's Note: The Examiner is interpreting the clipping operation to be an obfuscation operation (adding noise) in which plaintext data is blurred. The noise power determines how much noise to add (how much clipping to perform), which is set according to a "budget"; a budget is set according to an amount of resources allocated, and an amount of resources allocated includes the cost of performing operations (adding noise) (i.e., more important data = more money/resources allocated to protecting it = more noise can be added). See instant application [0015-0018], [0063], [0065-0073], [0076-0080]. The Examiner interprets this to mean that noise will be added more frequently to more important data. For example, given the sentence "jointly training a natural language processing model":

Less important, with lower budget: jointly training (noise) a natural language (noise) processing model
More important, with higher budget: jointly (noise) training (noise) a (noise) natural (noise) language (noise) processing (noise) model

Here, noise is inserted more frequently when the budget allows for it. Therefore, applicant is simply saying: perform more secure encryption when data needs more security. Applicant performs this "more secure encryption" by adding noise more frequently.
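As a purely illustrative sketch of the Examiner's interpretation above, the budget can be treated as a per-token probability of inserting a noise marker. This reading (budget as insertion probability, and the function and marker names) is an assumption for illustration only; it is not taken from Qian, Lai, or the application.

```python
import random

def obfuscate(tokens, budget, rng=None):
    """Insert a '(noise)' marker after tokens with a frequency that grows
    with the (hypothetical) privacy budget: a higher budget permits more
    frequent noise insertion, mirroring the Examiner's Note examples."""
    rng = rng if rng is not None else random.Random(0)
    p = min(1.0, max(0.0, budget))  # interpret the budget as an insertion probability
    out = []
    for tok in tokens:
        out.append(tok)
        if rng.random() < p:
            out.append("(noise)")
    return out
```

With a budget of 0 no markers are inserted, and with a budget of 1 a marker follows every token, matching the spirit of the "lower budget" and "higher budget" examples above.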
Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.

Claims 1-4, 7-11, and 13-14 are rejected under 35 U.S.C. 103 as being unpatentable over Qian (U.S. 20220269816) in view of Lai (US 20230059367).
Regarding Claims 1, 13, and 14, Qian discloses:

A method for jointly training a natural language processing (NLP) model based on privacy protection, wherein the NLP model comprises an encoding network located at a first party and a processing network located at a second party (Qian [0002-0007]: method for obtaining, by an application executing on a processor of an electronic device, user data of a user... sending the transform of the representation of the user data to a service provider via a network and receiving, from the service provider, via the network, service data based on the transform of the user data), and the method is performed by the first party and comprises:

obtaining a target training statement (Qian [0005-0007]: obtaining, by an application executing on a processor of an electronic device, user data of a user; [0092]: user data may comprise emergency commands (i.e., statements such as, "Call an ambulance."));

inputting the target training statement to the encoding network, and forming a sentence representation vector based on an encoding output of the encoding network (Qian [0048]: according to certain embodiments, at block 307, device 301 generates a representation of the set of user data 305... obtaining a vector representation of user data 305... encoding the user data 305; [Fig. 8, 0090-0092]: the operations described with reference to FIG. 8 may be performed on any suitable apparatus on which user data is generated and collected (for example, instances of device 100 in FIG. 1, device 301 in FIG. 3, device 401 in FIG. 4, device 501 in FIG. 5, or device 701 in FIG. 7), and which passes the collected user data to one or more service provider platforms; [0092]: as shown in the illustrative example of FIG. 8, at operation 810, the processor generates a representation of the user data obtained at operation 805; according to some embodiments, generating the representation of the data comprises expressing the data in a small vector format; [0092]: user data may comprise emergency commands (i.e., statements such as, "Call an ambulance."). Note: when "Call an ambulance." is vectorized, the vector will represent the sentence "Call an ambulance.").

Qian does not explicitly teach, however, Lai teaches:

determining noise power for the target training statement based on a preset privacy budget (Lai [0002-0022]; [0043-0063]; [0074]; [0080-0086, Algorithm 1]; [0090-0097]; [0123-0127]: teaches determining a noise scale based on a privacy budget, and a clipping model that clips the one or more gradients determined for a user so that its l2-norm is bounded by a pre-defined gradient clipping bound β);

obtaining target noise that conforms to differential privacy through sampling from a noise distribution determined based on the noise power (Lai [0002-0022]; [0043-0063]; [0074]; [0080-0086, Algorithm 1]; [0090-0097]; [0123-0127]: teaches an iterative training process (current iteration round) where data (samples) is used in the current round and a probability including a probability relating to the sensitive user samples; determining a noise scale based on a privacy budget, and a clipping model that clips the one or more gradients determined for a user so that its l2-norm is bounded by a pre-defined gradient clipping bound β; at each iteration t, the user-entity differential privacy system 106 randomly samples U^t users from U and E^t sensitive entities from E with sampling rates q_u and q_e, respectively (lines 7 and 9)); and

adding the target noise to the sentence representation vector, to obtain a target noise addition representation, wherein the target noise addition representation is sent to the second party for training of the processing network (Qian [0003-0007]: sending the transform of the representation of the user data to a service provider via a network and receiving, from the service provider, via the network, service data based on the transform of the user data; the service data includes a user-specific output based on the transform of the user data; [0057]: a hash salt, wherein a set of user-specific nonce data (also referred to as a "salt") is added to the user data prior to hashing; [Fig. 3-309, 3-311, 0050-0056, Claims 4, 6, 13, and 20]: applying local differential privacy comprises adding noise to the locally encoded data).

Qian and Lai are analogous art because they are from the same field of endeavor, privacy-preserving natural language processing. Before the effective filing date of the claimed invention, it would have been obvious to one of ordinary skill in the art, having the teachings of Qian and Lai before him or her, to modify the method of Qian to include the privacy budget, noise scale, and noise power of Lai, because doing so would allow budgets to be considered when inserting noise to protect data and would increase the data that can be protected in differential privacy. The motivation for doing so would be that the "user-entity differential privacy system produces a natural language model that performs a natural language task with an accuracy that approaches comparability with a model that provides no data security" (Lai, paragraphs 0054 and 0097). Therefore, it would have been obvious to combine Qian and Lai to obtain the invention as specified in the instant claim.
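The claim-1 flow the rejection maps — form a sentence representation, derive noise power from a privacy budget, sample differentially private Gaussian noise, and add it before sending to the second party — can be sketched as below. The sigma calibration shown is the classical analytic Gaussian-mechanism formula, assumed here purely for illustration; neither Qian, Lai, nor the application is quoted as using this exact expression, and the function name is hypothetical.

```python
import numpy as np

def privatize_sentence_vector(sentence_vec, epsilon, delta, sensitivity, rng=None):
    """Sketch of the claimed flow: derive the noise power (a Gaussian standard
    deviation) from the privacy budget (epsilon, delta) and the sensitivity,
    sample noise from the resulting distribution, and add it to the
    sentence representation vector."""
    rng = rng if rng is not None else np.random.default_rng(0)
    # Classical Gaussian-mechanism calibration (one common choice, assumed here):
    # sigma = sensitivity * sqrt(2 * ln(1.25 / delta)) / epsilon
    sigma = sensitivity * np.sqrt(2.0 * np.log(1.25 / delta)) / epsilon
    noise = rng.normal(0.0, sigma, size=sentence_vec.shape)
    return sentence_vec + noise, sigma
```

A smaller epsilon (a tighter budget) yields a larger sigma, i.e., more noise power, which is consistent with the examiner's reading of the noise scale as a parameter controlling noise magnitude.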
Claim 13 is rejected under the same rationale as claim 1 above. In addition, Qian further discloses: A computing device (Qian [0002-0010]: a processor of an electronic device).

Claim 14 is rejected under the same rationale as claim 1 above. In addition, Qian further discloses: A non-transitory computer-readable storage medium (Qian [0002-0010]: a non-transitory, computer-readable medium contains instructions that, when executed by a processor, cause an apparatus).

Regarding Claim 2, Qian in view of Lai discloses: The method according to claim 1, wherein the obtaining a local target training statement comprises: performing sampling from a total local sample set based on a sampling probability p, to obtain a sample subset used for a current iteration round (Qian [0048, 0060]: according to certain embodiments, at block 307, device 301 generates a representation of the set of user data 305; depending on embodiments, generating a representation of the user data may comprise... performing t-distributed stochastic neighborhood embedding (t-SNE)); and reading the target training statement from the sample subset (Qian [0080]: while some service provider platforms generate service data based on aggregative analyses across a full set of user data, other service provider platforms generate service data based on AI/ML analyses, which perform targeted analyses to obtain a result from only a subset of the available data).

Regarding Claim 3, Qian in view of Lai discloses: The method according to claim 1, wherein the forming a sentence representation vector based on an encoding output of the encoding network comprises: obtaining a character representation vector obtained after the encoding network encodes each character in the target training statement (Qian [0052-0054]; [0071-0079]; [0081-0092]: according to various embodiments, at block 507, user data 505 is passed to TEE 511, wherein the feature names within the user data are encoded; in certain embodiments, the feature names are encoded as key-value pairs of an n-gram frequency vector, wherein n refers to the number of items within a key of a key-value pair); and forming the sentence representation vector based on... a representation vector (Qian [0048-0054]; [0071-0079]; [0081-0092]: teaches transforming a representation of user data to a "sentence" representation, in addition to teaching perturbation to protect the user's privacy).

While Qian discloses the representation vector and "normal word" size limits, Qian does not explicitly disclose: performing a clipping operation based on a preset clipping threshold on the character representation vector of each character; and a clipped character. However, in the same field of endeavor, Lai discloses: performing a clipping operation based on a preset clipping threshold on the character representation vector of each character (Lai [0015-0022]; [0043]; [0068-0070]; [0080-0086, Algorithm 1]; [0090-0097]: in one or more embodiments, a clipping model includes a computer-implemented model that clips (e.g., bounds) a value to satisfy a value limit; in particular, in some embodiments, a clipping model utilizes a value that exceeds a value limit to generate a new value within that value limit; for instance, in some implementations, a clipping model clips the one or more gradients determined for a user so that its l2-norm is bounded by a pre-defined gradient clipping bound β); and a clipped character (Lai [0074]: teaches the same clipping model, bounding a value's l2-norm by the pre-defined gradient clipping bound β).

Qian and Lai are analogous art because they are from the same field of endeavor, privacy-preserving natural language processing. Before the effective filing date of the claimed invention, it would have been obvious to one of ordinary skill in the art, having the teachings of Qian and Lai before him or her, to modify the method of Qian to include the privacy budget, noise scale, and clipped characters of Lai, because doing so would allow budgets to be considered when inserting noise to protect data and would increase the data that can be protected in differential privacy. The motivation for doing so would be that the "user-entity differential privacy system produces a natural language model that performs a natural language task with an accuracy that approaches comparability with a model that provides no data security" (Lai, paragraphs 0054 and 0097). Therefore, it would have been obvious to combine Qian and Lai to obtain the invention as specified in the instant claim.

Regarding Claim 4, Qian in view of Lai discloses: The method according to claim 3, wherein the clipping operation based on a preset clipping threshold comprises: Lai further discloses: upon determining that a current norm value of the character representation vector exceeds the clipping threshold, determining a ratio of the clipping threshold to the current norm value, and clipping the character representation vector based on the ratio (Lai [0015-0022]; [0043]; [0068-0070]; [0080-0086, Algorithm 1]; [0090-0097]: a clipping model clips (e.g., bounds) a value to satisfy a value limit; for instance, in some implementations, a clipping model clips the one or more gradients determined for a user so that its l2-norm is bounded by a pre-defined gradient clipping bound β). It would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to modify with Lai for similar reasons as cited in claim 1.

Regarding Claim 7, Qian in view of Lai discloses: The method according to claim 1, wherein the determining noise power for the target training statement based on a preset privacy budget comprises: determining, based on the clipping threshold, sensitivity corresponding to the target training statement (Qian [0052-0054]; [0071-0079]; [0081-0092]: teaches determining the sensitivity of the target data according to privacy policies which are based on performing transformations); and Lai further discloses: determining the noise power for the target training statement based on a preset single-sentence privacy budget and the sensitivity (Lai [0002-0022]; [0043-0063]; [0074]; [0080-0086, Algorithm 1]; [0090-0097]; [0123-0127]: teaches determining a noise scale based on a privacy budget, and a clipping model that clips the one or more gradients determined for a user so that its l2-norm is bounded by a pre-defined gradient clipping bound β). It would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to modify with Lai for similar reasons as cited in claim 1.

Regarding Claim 8, Qian in view of Lai discloses: The method according to claim 1, wherein the determining noise power for the target training statement based on a preset privacy budget comprises: Lai further discloses: determining the noise power for the target training statement based on the target budget information (Lai [0002-0022]; [0043-0063]; [0074]; [0080-0086, Algorithm 1]; [0090-0097]; [0123-0127]: teaches determining a noise scale based on a privacy budget, and a clipping model that clips the one or more gradients determined for a user so that its l2-norm is bounded by a pre-defined gradient clipping bound β; determining a target privacy budget and noise scale for the current iteration of the iterative process based on the pre-determined privacy budget); and determining target budget information for a current iteration round t based on a preset total privacy budget used for a total quantity T of iteration rounds (Lai [0052-0056]; [0080-0086, Algorithm 1]; [0090-0097]: teaches determining a target privacy budget and noise scale for the current iteration of the iterative process based on the pre-determined privacy budget). It would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to modify with Lai for similar reasons as cited in claim 1.
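The ratio-based clipping recited in claim 4 (and mapped to Lai's l2-norm gradient bound β) reduces to scaling the vector by threshold / norm whenever the norm exceeds the threshold. A minimal sketch, with the function name chosen for illustration:

```python
import numpy as np

def clip_by_norm(char_vec, threshold):
    """Claim-4 style clipping: if the vector's l2-norm exceeds the preset
    clipping threshold, scale the vector by the ratio threshold / norm so
    the result's norm equals the threshold; otherwise leave it unchanged
    (cf. Lai's pre-defined gradient clipping bound beta)."""
    norm = np.linalg.norm(char_vec)
    if norm > threshold:
        return char_vec * (threshold / norm)
    return char_vec
```

This is the same operation used per-example in differentially private SGD, where bounding each contribution's norm is what makes the sensitivity (and hence the required noise power) well defined.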
Regarding Claim 9, Qian in view of Lai discloses: The method according to claim 8, and Lai further discloses: wherein the target training statement is obtained through sequential reading from a sample subset used for the current iteration round t, and the sample subset is obtained through sampling from a total local sample set based on a sampling probability p (Lai [0052-0063]; [0080-0086, Algorithm 1]; [0090-0097]: teaches an iterative training process (current iteration round) where data (samples) is used in the current round and a probability including a probability relating to the sensitive user samples); the determining target budget information for the current iteration round t comprises: converting the total privacy budget into a total privacy parameter value in Gaussian differential privacy space (Lai [0002-0022]; [0052-0063]; [0080-0086, Algorithm 1]; [0090-0097]; [0123-0127]: teaches a noise scale derived from both user information and sensitive entity information to inject random Gaussian noise into the parameters of the natural language model; at each iteration t, the user-entity differential privacy system 106 randomly samples U^t users from U and E^t sensitive entities from E with sampling rates q_u and q_e, respectively (lines 7 and 9)); and determining a target privacy parameter value for the current iteration round t in the Gaussian differential privacy space based on the total privacy parameter value, the total quantity T of iteration rounds, and the sampling probability p (Lai [0002-0022]; [0052-0063]; [0080-0086, Algorithm 1]; [0090-0097]; [0123-0127]: teaches determining a target privacy budget and noise scale for the current iteration of the iterative process based on the pre-determined privacy budget); and the determining the noise power for the target training statement based on the target budget information comprises: determining the noise power based on the target privacy parameter value, the clipping threshold, and a quantity of characters in each training sentence in the sample subset (Lai [0002-0022]; [0043-0063]; [0074]; [0080-0086, Algorithm 1]; [0090-0097]; [0123-0127]: teaches determining a noise scale based on a privacy budget, and a clipping model that clips the one or more gradients determined for a user so that its l2-norm is bounded by a pre-defined gradient clipping bound β; at each iteration t, the user-entity differential privacy system 106 randomly samples U^t users from U and E^t sensitive entities from E with sampling rates q_u and q_e, respectively (lines 7 and 9)). It would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to modify with Lai for similar reasons as cited in claim 1.

Regarding Claim 10, Qian in view of Lai discloses: The method according to claim 9, wherein the determining a target privacy parameter value for the current iteration round t comprises: Lai further discloses: deriving the target privacy parameter value based on a first relational expression for calculating the total privacy parameter value in the Gaussian differential privacy space, wherein the first relational expression shows that the total privacy parameter value is directly proportional to the sampling probability p and a square root of the total quantity T of iteration rounds, and depends on a result of a power operation in which a natural exponent e is used as a base and the target privacy parameter value is used as an exponent (Lai [0002-0022]; [0043-0063]; [0074]; [0080-0086, Algorithm 1]; [0090-0097]; [0123-0127]: teaches determining a noise scale based on a privacy budget, a clipping model that clips the one or more gradients determined for a user so that its l2-norm is bounded by a pre-defined gradient clipping bound β, and that the user-entity differential privacy system generates the one or more parameters using the average gradient and the noise scale (e.g., the Gaussian noise generated from the noise scale)). It would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to modify with Lai for similar reasons as cited in claim 1.

Regarding Claim 11, Qian in view of Lai discloses: The method according to claim 1, wherein the encoding network is implemented by using one of the following neural networks: a long short-term memory network (LSTM), a bidirectional LSTM, and a transformer network (Qian [0059]; [0083]: as shown in the explanatory example of FIG. 4, the set of representation functionalities may also include a transformer functionality of the type often used in natural language processing (NLP)).

Claim 5 is rejected under 35 U.S.C. 103 as being unpatentable over Qian (U.S. 20220269816) in view of Lai (US 20230059367), and further in view of Sathaye (U.S. 20150205939).

Regarding Claim 5, Qian in view of Lai discloses: The method according to claim 3, wherein the forming the sentence representation vector based on a clipped character representation vector comprises: vectors of all the characters to form the sentence representation vector.
(Qian [0052-0054]; [0071-0079]; [0081-0092]: teaches transforming a representation of user data to a "sentence" representation, in addition to teaching perturbation to protect the user's privacy; Qian [Equation 1]; [0081-0092]: teaches adjusting the "noisiness of the LDP".) Lai further discloses: a clipped character representation (Lai [0074]: in one or more embodiments, a clipping model includes a computer-implemented model that clips (e.g., bounds) a value to satisfy a value limit; in particular, in some embodiments, a clipping model utilizes a value that exceeds a value limit to generate a new value within that value limit; for instance, in some implementations, a clipping model clips the one or more gradients determined for a user so that its l2-norm is bounded by a pre-defined gradient clipping bound β).

While Qian in view of Lai discloses character representations as part of sentence vectors, and Lai discloses clipping characters and injecting noise according to a budget and noise scale, Qian in view of Lai does not explicitly disclose: splicing. However, in the same field of endeavor, Sathaye discloses: splicing (Sathaye [Fig. 3-5]; [0055]: in an implementation, the shielded data 130 may be further shielded by splitting it into N segments of shielded data (432, 434)). It would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to modify with Lai for similar reasons as cited in claim 1.

Qian in view of Lai and Sathaye are analogous art because they are from the same field of endeavor, data protection. Before the effective filing date of the claimed invention, it would have been obvious to one of ordinary skill in the art, having the teachings of Qian in view of Lai and Sathaye before him or her, to modify the method of Qian in view of Lai to include the data splitting of Sathaye, because it will allow noise to be inserted between the splits to increase security in untrusted environments. The motivation for doing so would be that "[a] communications agent securely coupled to the trusted agent is operable to securely transfer one or more of the N segments of shielded data to one or more storage devices in untrusted computing environments" (Sathaye, Abstract). Therefore, it would have been obvious to combine Qian in view of Lai and Sathaye to obtain the invention as specified in the instant claim.

Conclusion

The prior art made of record and not relied upon is considered pertinent to applicant's disclosure.

GE 9/30/2020 (CN 112199717) teaches a privacy model training method based on a small amount of public data, and an electronic device, comprising: training N neural network teacher models; respectively inputting a small amount of public data xi into the N neural network teacher models; obtaining the statistical voting result of each public data xi for each tag k; adding noise to each statistical voting result; obtaining public data xi satisfying the differential privacy principle and corresponding labels; optimizing a generative adversarial network using a large number of random noise vectors and a pre-trained discriminator network, and generating a large amount of unlabeled data; and, using the public data xi and corresponding labels satisfying the differential privacy principle together with the unlabeled data for pre-training an autoencoder, jointly training a student model to obtain a privacy student model. The invention only needs a small amount of public data to train a private student model, realizing physical and network isolation of the sensitive data and addressing the problem of low precision in privacy student models.

Stephenson 4/11/2021 (US 20210314140) teaches a system for an artificial intelligence synchronized distributed ledger.
The system includes a computing device containing a receiving module, the receiving module designed and configured to receive an input from a remote device, parse the input to identify protected and non-protected data contained within the input, transform the protected data into a digitally signed assertion, and convert the non-protected data into an encrypted datastore. The computing device contains a processing module, the processing module designed and configured to receive the digitally signed assertion from the receiving module, insert the digitally signed assertion into an immutable sequential data structure, receive the encrypted datastore, retrieve at least an input, generate a record utilizing the at least a retrieved input, and perform a first machine-learning process utilizing the at least a retrieved input.

Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).

A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to THOMAS A CARNES whose telephone number is (571) 272-4378. The examiner can normally be reached Monday-Friday.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Shewaye Gelagay, can be reached at (571) 272-4219. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

THOMAS A. CARNES
Examiner, Art Unit 2436

/THOMAS A CARNES/
Examiner, Art Unit 2436

/SHEWAYE GELAGAY/
Supervisory Patent Examiner, Art Unit 2436

Prosecution Timeline

Jul 22, 2024
Application Filed
Aug 20, 2024
Response after Non-Final Action
Feb 28, 2025
Response after Non-Final Action
Oct 10, 2025
Non-Final Rejection — §101, §103, §112
Jan 12, 2026
Response Filed
Feb 27, 2026
Examiner Interview (Telephonic)
Mar 02, 2026
Final Rejection — §101, §103, §112 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12556566
SYSTEMS AND METHODS FOR DYNAMIC VULNERABILITY SCORING
2y 5m to grant Granted Feb 17, 2026
Patent 12538130
SYSTEMS AND METHODS FOR RUNNING MULTIPLE LOGICAL SECURE ELEMENTS ON THE SAME SECURE HARDWARE
2y 5m to grant Granted Jan 27, 2026
Patent 12524566
RESTRICTED FULLY PRIVATE CONJUNCTIVE DATABASE QUERY FOR PROTECTION OF USER PRIVACY AND IDENTITY
2y 5m to grant Granted Jan 13, 2026
Patent 12488141
SYSTEM AND METHOD FOR PRIVACY-PRESERVING DISTRIBUTED TRAINING OF NEURAL NETWORK MODELS ON DISTRIBUTED DATASETS
2y 5m to grant Granted Dec 02, 2025
Patent 12401525
VEHICLE TEMPORARY CERTIFICATE AUTHENTICATION
2y 5m to grant Granted Aug 26, 2025
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

3-4
Expected OA Rounds
67%
Grant Probability
99%
With Interview (+73.2%)
3y 2m
Median Time to Grant
Moderate
PTA Risk
Based on 70 resolved cases by this examiner. Grant probability derived from career allow rate.
