Prosecution Insights
Last updated: April 19, 2026
Application No. 18/252,559

DATA PROTECTION METHOD, APPARATUS, MEDIUM AND DEVICE

Non-Final OA (§101)
Filed: May 11, 2023
Examiner: TRAN, QUOC A
Art Unit: 2145
Tech Center: 2100 — Computer Architecture & Software
Assignee: Lemon Inc.
OA Round: 1 (Non-Final)

Grant Probability: 80% (Favorable)
Expected OA Rounds: 1-2
Median Time to Grant: 3y 4m
Grant Probability With Interview: 99%

Examiner Intelligence

Career Allow Rate: 80% (590 granted / 735 resolved; +25.3% vs Tech Center average)
Interview Lift: +29.4% higher allowance on resolved cases with an interview
Typical Timeline: 3y 4m average prosecution; 21 applications currently pending
Career History: 756 total applications across all art units
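The headline figures on this card can be reproduced from the raw counts shown above (a quick sanity-check sketch; the implied Tech Center baseline is back-computed from the stated +25.3% delta rather than taken from the source):

```python
# Reproduce the career allow rate from the raw grant counts above.
granted = 590
resolved = 735

allow_rate = granted / resolved
print(f"Career allow rate: {allow_rate:.1%}")  # -> 80.3%, shown as 80% on the card

# The card reports +25.3 percentage points over the Tech Center average,
# which implies a TC 2100 baseline of roughly:
tc_average = allow_rate - 0.253
print(f"Implied TC average: {tc_average:.1%}")  # -> 55.0%
```

The displayed 80% is simply the rounded grant fraction over all resolved cases; the Tech Center comparison then follows by subtracting the stated delta.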

Statute-Specific Performance

§101: 21.8% (-18.2% vs TC avg)
§103: 43.1% (+3.1% vs TC avg)
§102: 6.2% (-33.8% vs TC avg)
§112: 10.2% (-29.8% vs TC avg)
Black line = Tech Center average estimate • Based on career data from 735 resolved cases
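The per-statute Tech Center baseline behind the "black line" is not printed, but each value can be back-computed from the examiner's rate and its stated delta (simple arithmetic; the baselines below are inferred, not given in the source):

```python
# Back-compute the implied Tech Center baseline per statute:
# baseline = examiner_rate - delta (all values in percent).
stats = {
    "§101": (21.8, -18.2),
    "§103": (43.1, +3.1),
    "§102": (6.2, -33.8),
    "§112": (10.2, -29.8),
}

for statute, (rate, delta) in stats.items():
    baseline = round(rate - delta, 1)
    print(f"{statute}: examiner {rate}% vs implied TC avg {baseline}%")
```

Every implied baseline comes out to 40.0%, consistent with the chart drawing a single Tech Center average line across all four statutes.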

Office Action

§101
Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

DETAILED ACTION

This is a First Action on the Merits in response to the patent application filed 05/11/2023, in view of the "Preliminary Amendments" filed 05/11/2023. It is noted that the current application is a National Stage entry of PCT/SG2021/050681 (International Filing Date: 11/06/2021) and claims foreign priority to 202011271081.0, filed 11/13/2020. Claims 1-11, 13-14 and 17-23 are pending: claims 3 and 13-14 have been amended, claims 1-2 and 4-11 are original, claims 12 and 15-16 are cancelled, and claims 17-23 are new. The Examiner also acknowledges the "Preliminary Amendments to the Specification/Title" filed 05/11/2023.

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

Information Disclosure Statement

Signed and dated copies of applicant's IDS filings of 08/09/2023, 02/05/2024 and 02/13/2025 are attached to this Office Action.

Claim Rejections - 35 USC § 101

35 U.S.C. 101 reads as follows: "Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title."

Claims 1-11, 13-14 and 17-23 fail to recite statutory subject matter as defined in 35 U.S.C. 101 because the claimed invention is directed to a judicial exception (i.e., an abstract idea) without significantly more.

Step 1: YES (the claims recite a process, machine, manufacture, or composition of matter). The claims recite:
"acquiring gradient correlation information" respectively corresponding to reference samples of a target batch of an active participant of a joint training model; "determining a constraint condition for data noise" to be added according to proportions of a reference sample of a positive example and a reference sample of a negative example respectively in all the reference samples of the target batch; "determining information of the data noise to be added" according to the gradient correlation information corresponding to the reference samples and the constraint condition; "correcting an initial gradient transfer value" corresponding to each of the reference samples according to the information of the data noise to be added to obtain target gradient transfer information, wherein the target gradient transfer information is consistent for reference samples corresponding to different sample labels in the target batch; and "sending the target gradient transfer information to a passive participant of the joint training model", so that the passive participant adjusts a parameter of the joint training model according to the target gradient transfer information. The claims therefore fall into one of the four categories of patent-eligible subject matter.

Step 2A, Prong One (does the claim recite a judicial exception?): The same limitations, from "acquiring gradient correlation information" through "sending the target gradient transfer information to a passive participant of the joint training model", recite mental processes and mathematical concepts (mathematical calculations). The acquiring, determining, correcting and sending steps are high-level mathematical concepts [see PGPUB 20240005210 A1, Paras. 34 and 44-54, and Equations 1-6 for details]. Moreover, the claims recite only the idea of a solution or outcome ("apply it").

Step 2A, Prong Two, and Step 2B (do the claims recite additional elements that integrate the judicial exception into a practical application?): The claims recite additional limitations such as "sending the target gradient transfer information to a passive participant of the joint training model, so that the passive participant adjusts a parameter of the joint training model according to the target gradient transfer information." Sending the gradient information to a passive participant who adjusts a parameter amounts only to insignificant extra-solution activity of data gathering (MPEP 2106.05(g)).
Dependent claims 2-11 and 17-23 further recite additional limitations such as: a variance of the data noise; a sum of a product; a trace of a matrix of covariance information of the data noise; the positive/negative example proportion being less than or equal to a target value of a preset hyper-parameter; a parameter condition; an initial gradient transfer value; an error of label prediction; the preset hyper-parameter not meeting the parameter condition; an error threshold; a sample label/sample class; gradient correlation information; a mixed prediction error; maximizing a minimum value of the mixed prediction error; a prediction error rate; a weighted sum; a gradient of a preset loss function; a neuron in an output layer of a sub-model trained by the passive participant of the joint training model; a fixed proportion or a gradually decreased dynamic proportion; and calculating an L2-norm value, where the L2-norm value is greater than a preset threshold, and determining the prediction label corresponding to the reference sample as the negative example in a case where the L2-norm value is less than or equal to the preset threshold; etc. These limitations amount only to mere instructions to implement the abstract idea, do not include elements that amount to significantly more than the abstract idea, and are rejected under the same rationale. Accordingly, claims 1-11, 13-14 and 17-23 fail to recite statutory subject matter as defined in 35 U.S.C. 101.

Allowable Subject Matter

Claims 1-11, 13-14 and 17-23 would be allowable if rewritten and/or amended to overcome the § 101 rejection.
Reasons for Allowance

Under the broadest reasonable interpretation of the claimed limitations, consistent with the Applicant's specification, the prior art of record, taken individually or in combination, does not expressly teach or render obvious the limitations recited in claims 1, 13 and 14 when taken in the context of the claims as a whole, especially the concept of "...acquiring gradient correlation information respectively corresponding to reference samples of a target batch of an active participant of a joint training model; determining a constraint condition for data noise to be added according to proportions of a reference sample of a positive example and a reference sample of a negative example respectively in all the reference samples of the target batch; determining information of the data noise to be added according to the gradient correlation information corresponding to the reference samples and the constraint condition; correcting an initial gradient transfer value corresponding to each of the reference samples according to the information of the data noise to be added to obtain target gradient transfer information, wherein the target gradient transfer information is consistent for reference samples corresponding to different sample labels in the target batch; and sending the target gradient transfer information to a passive participant of the joint training model, so that the passive participant adjusts a parameter of the joint training model according to the target gradient transfer information..." as claimed and as further supported in the specification at Para. 22 and the Abstract. In addition, no reference was uncovered that would have provided a motivation, before the effective filing date of the claimed invention, to combine the teachings of the prior art of record to arrive at the present invention as recited in the context of independent claims 1, 13 and 14 as a whole. Thus, claims 1, 13 and 14 are allowed over the prior art of record. Dependent claims 2-11 and 17-23 are also allowable due to their dependency on independent claims 1, 13 and 14, if rewritten and/or amended to overcome the § 101 rejection.

Any comments considered necessary by applicant must be submitted no later than the payment of the issue fee and, to avoid processing delays, should preferably accompany the issue fee. Such submissions should be clearly labeled "Comments on Statement of Reasons for Allowance."

Conclusion

The prior art made of record and not relied upon is considered pertinent to applicant's disclosure.

Sun et al. (US 20220383054 A1, claiming benefit of 202010640985.X (CN), filed 07/06/2020) relates to data protection: acquiring gradient-associated information respectively corresponding to a target sample that belongs to a binary classification sample set with unbalanced distribution and a reference sample that belongs to the same batch as the target sample; generating information of data noise to be added; according to the information of said data noise, correcting an initial gradient transfer value corresponding to the target sample, such that corrected gradient transfer information corresponding to samples in the sample set that belong to different types is consistent; and sending the gradient transfer information to a passive party of a joint training model. By means of the embodiments, there is no significant difference between corrected gradient transfer information corresponding to positive and negative samples, thereby effectively protecting the security of data [Abstract and Paras. 5-17].
Yan et al. (NPL, "A Method of Information Protection for Collaborative Deep Learning under GAN Model Attack," IEEE/ACM, 2019, 11 pages) relates to deep learning, which is widely used in the medical field owing to its high accuracy in medical image classification and biological applications. Under collaborative deep learning, however, there is a serious risk of information leakage through attacks on privacy-protection methods based on deep convolutional generative networks, and such leakage is especially harmful in the medical field. The paper proposes a deep convolutional generative adversarial network (DCGAN) based privacy protection method to protect the information of collaborative deep learning training and enhance its stability. The proposed method adopts encrypted transmission in the process of deep network parameter transmission. By setting a buried point to detect a generative adversarial network (GAN) attack in the network and adjusting the training parameters, training based on the GAN model attack is forced to be invalid, and the information is effectively protected [Abstract].

Zhu et al. (NPL, "Deep Leakage from Gradients," Conference on Neural Information Processing Systems (2019), Vancouver, Canada, 11 pages) relates to exchanging gradients, a widely used method in modern multi-node machine learning systems (e.g., distributed training, collaborative learning). For a long time, people believed that gradients are safe to share, i.e., that training data would not be leaked by gradient exchange. However, the authors show that it is possible to obtain the private training data from the publicly shared gradients; they name this leakage "Deep Leakage from Gradients" and empirically validate its effectiveness on both computer vision and natural language processing tasks. Experimental results show that the attack is much stronger than previous approaches: the recovery is pixel-wise accurate for images and token-wise matching for texts, raising awareness of the need to rethink gradient safety. The authors also discuss several possible defense strategies; without changes to the training setting, the most effective defense is gradient pruning [Abstract].

Any inquiry concerning this communication or earlier communications from the examiner should be directed to QUOC A TRAN, whose telephone number is (571) 272-8664. The examiner can normally be reached Monday-Friday, 9am-5pm MT. Examiner interviews are available via telephone, in person, and by video conferencing using a USPTO-supplied web-based collaboration tool; to schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Cesar Paula, can be reached at 571-272-4128. The fax number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center; unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). For assistance from a USPTO Customer Service Representative, call 800-786-9199 (in USA or Canada) or 571-272-1000.

/QUOC A TRAN/
Primary Examiner, Art Unit 2145
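As a concrete aid to reading the claim language above, the claimed gradient-protection flow can be sketched roughly as follows. This is a minimal toy illustration only: the batch, labels, gradient values, and the Gaussian noise scale chosen from the class-mean gap are all invented here, and the application's actual noise constraint (involving gradient correlation information and covariance terms, per its Equations 1-6) is not reproduced.

```python
import random
import statistics

random.seed(0)

# Active participant's target batch: binary labels and per-sample
# initial gradient transfer values whose mean differs by class
# (this class-dependent gap is what leaks label information).
labels = [random.randint(0, 1) for _ in range(64)]
init_grads = [random.gauss(2.0 * y - 1.0, 0.1) for y in labels]

# Proportions of positive / negative examples in the batch, which the
# claims use to determine the constraint condition for the noise.
p_pos = sum(labels) / len(labels)
p_neg = 1.0 - p_pos

# Hypothetical stand-in constraint: pick a noise scale large enough to
# bridge the gap between the two class means, so corrected values for
# different sample labels look statistically consistent.
pos_mean = statistics.mean(g for g, y in zip(init_grads, labels) if y == 1)
neg_mean = statistics.mean(g for g, y in zip(init_grads, labels) if y == 0)
noise_std = abs(pos_mean - neg_mean)

# Correct each initial gradient transfer value with the noise to obtain
# the target gradient transfer information, which the active participant
# then sends to the passive participant for its parameter update.
target_grads = [g + random.gauss(0.0, noise_std) for g in init_grads]

print(f"batch proportions pos/neg: {p_pos:.2f}/{p_neg:.2f}")
print(f"noise scale (toy constraint): {noise_std:.2f}")
```

After correction, a simple per-sample threshold attack on the transferred values separates the classes far less cleanly than it would on the raw gradients, which is the intuition behind the claimed protection.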

Prosecution Timeline

May 11, 2023
Application Filed
Jan 26, 2026
Non-Final Rejection — §101 (current)

Precedent Cases

Applications granted by the same examiner in similar technology

Patent 12586003: Method and Apparatus for Generating Operator (granted Mar 24, 2026; 2y 5m to grant)
Patent 12585951: Method and Electronic Device for Generating Optimal Neural Network (NN) Model (granted Mar 24, 2026; 2y 5m to grant)
Patent 12572772: Scalable Digital Twin Service System and Method (granted Mar 10, 2026; 2y 5m to grant)
Patent 12561617: Information Processing Apparatus, Information Processing Method, and Storage Medium (granted Feb 24, 2026; 2y 5m to grant)
Patent 12561610: Method and Apparatus for Presenting Candidate Character String, and Method and Apparatus for Training Discriminative Model (granted Feb 24, 2026; 2y 5m to grant)
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 1-2
Grant Probability: 80%
With Interview: 99% (+29.4%)
Median Time to Grant: 3y 4m
PTA Risk: Low
Based on 735 resolved cases by this examiner. Grant probability is derived from the career allow rate.
