Prosecution Insights
Last updated: April 19, 2026
Application No. 18/170,476

OBFUSCATION OF ENCODED DATA WITH LIMITED SUPERVISION

Non-Final OA: §101, §103, §DP
Filed
Feb 16, 2023
Examiner
WERNER, MARSHALL L
Art Unit
2125
Tech Center
2100 — Computer Architecture & Software
Assignee
Protopia AI Inc.
OA Round
1 (Non-Final)
Grant Probability: 66% (Favorable)
Expected OA Rounds: 1-2
Time to Grant: 3y 11m
With Interview: 99%

Examiner Intelligence

Career Allow Rate: 66% — above average (133 granted / 200 resolved; +11.5% vs TC avg)
Interview Lift: +44.3% among resolved cases with an interview — a strong lift
Avg Prosecution: 3y 11m (typical timeline)
Career History: 260 total applications across all art units; 60 currently pending

Statute-Specific Performance

§101: 29.0% (-11.0% vs TC avg)
§103: 37.4% (-2.6% vs TC avg)
§102: 8.5% (-31.5% vs TC avg)
§112: 21.0% (-19.0% vs TC avg)
Tech Center averages are estimates. Based on career data from 200 resolved cases.

Office Action

§101 §103 §DP
DETAILED ACTION

This action is in response to the Applicant Response filed 16 February 2023 for application 18/170,476, filed 16 February 2023. Claim 1 is pending. Claim 1 is rejected.

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Claim Objections

Claim 1 is objected to because of the following informalities:
- Claim 1, line 3: a semicolon should be added at the end of the line.
- Claim 1, line 6: "deterministic layer" should read "a deterministic layer".
- Claim 1, line 7: "reconstruction loss" should read "a reconstruction loss".
- Claim 1, line 9: a semicolon should be added at the end of the line.
- Claim 1, line 10: "the stochastic noise layers" should read "the one or more stochastic noise layers".
- Claim 1, line 13: "the stochastic noise layers" should read "the one or more stochastic noise layers".

Appropriate correction is required.

Double Patenting

The nonstatutory double patenting rejection is based on a judicially created doctrine grounded in public policy (a policy reflected in the statute) so as to prevent the unjustified or improper timewise extension of the "right to exclude" granted by a patent and to prevent possible harassment by multiple assignees. A nonstatutory double patenting rejection is appropriate where the conflicting claims are not identical, but at least one examined application claim is not patentably distinct from the reference claim(s) because the examined application claim is either anticipated by, or would have been obvious over, the reference claim(s). See, e.g., In re Berg, 140 F.3d 1428, 46 USPQ2d 1226 (Fed. Cir. 1998); In re Goodman, 11 F.3d 1046, 29 USPQ2d 2010 (Fed. Cir. 1993); In re Longi, 759 F.2d 887, 225 USPQ 645 (Fed. Cir. 1985); In re Van Ornum, 686 F.2d 937, 214 USPQ 761 (CCPA 1982); In re Vogel, 422 F.2d 438, 164 USPQ 619 (CCPA 1970); In re Thorington, 418 F.2d 528, 163 USPQ 644 (CCPA 1969).
A timely filed terminal disclaimer in compliance with 37 CFR 1.321(c) or 1.321(d) may be used to overcome an actual or provisional rejection based on nonstatutory double patenting provided the reference application or patent either is shown to be commonly owned with the examined application, or claims an invention made as a result of activities undertaken within the scope of a joint research agreement. See MPEP § 717.02 for applications subject to examination under the first inventor to file provisions of the AIA as explained in MPEP § 2159. See MPEP § 2146 et seq. for applications not subject to examination under the first inventor to file provisions of the AIA. A terminal disclaimer must be signed in compliance with 37 CFR 1.321(b).

The USPTO Internet website contains terminal disclaimer forms which may be used. Please visit www.uspto.gov/patent/patents-forms. The filing date of the application in which the form is filed determines which form (e.g., PTO/SB/25, PTO/SB/26, PTO/AIA/25, or PTO/AIA/26) should be used. A web-based eTerminal Disclaimer may be filled out completely online using web screens. An eTerminal Disclaimer that meets all requirements is auto-processed and approved immediately upon submission. For more information about eTerminal Disclaimers, refer to www.uspto.gov/patents/process/file/efs/guidance/eTD-info-I.jsp.

Claim 1 is rejected on the ground of nonstatutory double patenting as being unpatentable over claim 1 of U.S. Patent No. 11,886,955 (Application No. 18/303,454). Although the claims at issue are not identical, they are not patentably distinct from each other because, as shown in the comparison below, claim 1 of the instant application has similar limitations as recited in claim 1 of U.S. Patent No. 11,886,955, except for additional limitations included in U.S. Patent No. 11,886,955.

Instant Application 18/170,476, claim 1, compared with U.S. Patent No. 11,886,955 (18/303,454), claim 1:

Instant: A tangible, non-transitory, machine-readable medium storing instructions that when executed by one or more processors effectuate operations comprising:
'955: A tangible, non-transitory, machine-readable medium storing instructions that when executed by one or more processors effectuate operations comprising:

Instant: obtaining, by a computer system, a dataset
'955: obtaining, by a computer system, a training dataset;

Instant: training, with the computer system, one or more machine learning models as an autoencoder to generate as output a reconstruction of the dataset based on an input of the dataset,
'955: training, with the computer system, one or more machine learning models as an autoencoder to generate as output a reconstruction of a record in the training dataset based on an input of the record in the training dataset,

Instant: wherein the autoencoder comprises deterministic layer and wherein training is based on minimization of reconstruction loss;
'955: wherein the autoencoder comprises at least one deterministic layer and wherein training is based on optimization of a value indicative of reconstruction loss …

Instant: adding one or more stochastic noise layers to the trained one or more machine learning models of the autoencoder
'955: adding, with the computer system, one or more stochastic noise layers to the trained one or more machine learning models of the autoencoder,

Instant: adjusting, with the computer system, parameters of the stochastic noise layers according to an objective function that is differentiable; and
'955: adjusting, with the computer system, parameters of the parametric noise distributions for the dimensions of the one or more stochastic noise layers according to an objective function that is differentiable,

Instant: storing, with the computer system, the one or more machine learning models of the autoencoder with the stochastic noise layers in memory.
'955: storing, with the computer system, the one or more machine learning models of the autoencoder with the stochastic noise layers in memory;

Claim 1 is provisionally rejected on the ground of nonstatutory double patenting as being unpatentable over claim 1 of copending Application No. 18/532,767 (reference application). Although the claims at issue are not identical, they are not patentably distinct from each other because, as shown in the comparison below, claim 1 of the instant application has similar limitations as recited in claim 1 of copending Application No. 18/532,767. This is a provisional nonstatutory double patenting rejection because the patentably indistinct claims have not in fact been patented.

Instant Application 18/170,476, claim 1, compared with Application No. 18/532,767, claim 1:

Instant: A tangible, non-transitory, machine-readable medium storing instructions that when executed by one or more processors effectuate operations comprising:
'767: A tangible, non-transitory, machine-readable medium storing instructions that when executed by one or more processors effectuate operations comprising:

Instant: obtaining, by a computer system, a dataset
'767: obtaining, by a computer system, a dataset;

Instant: training, with the computer system, one or more machine learning models as an autoencoder to generate as output a reconstruction of the dataset based on an input of the dataset,
'767: training, with the computer system, one or more machine learning models as an autoencoder to generate as output a reconstruction of a record in the dataset based on an input of the record in the dataset,

Instant: wherein the autoencoder comprises deterministic layer and wherein training is based on minimization of reconstruction loss;
'767: wherein the autoencoder comprises a deterministic layer and wherein training is based on optimization of a value indicative of reconstruction loss;

Instant: adding one or more stochastic noise layers to the trained one or more machine learning models of the autoencoder
'767: adding, with the computer system, one or more stochastic noise layers to the trained one or more machine learning models of the autoencoder;

Instant: adjusting, with the computer system, parameters of the stochastic noise layers according to an objective function that is differentiable; and
'767: adjusting, with the computer system, parameters of the stochastic noise layers according to an objective function that is differentiable;

Instant: storing, with the computer system, the one or more machine learning models of the autoencoder with the stochastic noise layers in memory.
'767: storing, with the computer system, the one or more machine learning models of the autoencoder with the stochastic noise layers in memory.

Claim Rejections - 35 USC § 101

35 U.S.C. 101 reads as follows:

Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.

Claim 1 is rejected under 35 U.S.C. 101 because the claim is directed to an abstract idea, and because the claim elements, whether considered individually or in combination, do not amount to significantly more than the abstract idea. See Alice Corporation Pty. Ltd. v. CLS Bank International, 573 U.S. 208 (2014).

Regarding claim 1, the claim is rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more.

Step 1 Analysis: Claim 1 is directed to a machine-readable medium, which is an article of manufacture, one of the statutory categories.

Step 2A Prong One Analysis: The claim recites a machine-readable medium. The limitation of adding one or more stochastic noise layers to the trained one or more machine learning models of the autoencoder, as drafted, is a process that, under its broadest reasonable interpretation, covers a mental process.
The limitation is directed to observation, evaluation, judgment, and opinion, and is a process capable of being performed by a human mentally or using pen and paper. The limitation of adjusting ... parameters of the stochastic noise layers according to an objective function that is differentiable, as drafted, is a process that, under its broadest reasonable interpretation, covers a mental process. The limitation is directed to observation, evaluation, judgment, and opinion, and is a process capable of being performed by a human mentally or using pen and paper. If a claim limitation, under its broadest reasonable interpretation, covers performance of the limitation in the mind, then it falls within the "Mental Processes" grouping. Accordingly, the claim recites an abstract idea.

Step 2A Prong Two Analysis: With respect to the abstract idea, the judicial exception is not integrated into a practical application. The claim recites additional elements: machine-readable medium, instructions, one or more processors, computer system, memory. These elements are recited at a high level of generality (i.e., as generic computer components performing generic computer functions of executing instructions) such that they amount to no more than mere instructions to apply the exception using generic computer components (MPEP 2106.05(b)). The claim recites further additional elements: one or more machine learning models, autoencoder, deterministic layer, reconstruction loss, one or more stochastic noise layers, objective function. These elements are recited at a high level of generality such that they amount to no more than indicating a field of use or technological environment in which to apply the judicial exception (MPEP 2106.05(h)). The claim recites obtaining ... a dataset; storing, with the computer system, the one or more machine learning models of the autoencoder with the stochastic noise layers in memory, which is simply acquiring and storing data recited at a high level of generality. This is nothing more than insignificant extra-solution activity (MPEP 2106.05(g)). The claim recites training ... one or more machine learning models as an autoencoder to generate as output a reconstruction of the dataset based on an input of the dataset, wherein the autoencoder comprises deterministic layer and wherein training is based on minimization of reconstruction loss, which is simply generic training to perform the abstract idea of model generation and amounts to mere instructions to apply the exception (MPEP 2106.05(f)). Accordingly, the additional elements do not integrate the abstract idea into a practical application because they do not impose any meaningful limits on practicing the abstract idea, and, therefore, the claim is directed to an abstract idea.

Step 2B Analysis: The claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception. As discussed above with respect to the integration of the abstract idea into a practical application:
- The machine-readable medium, instructions, one or more processors, computer system, and memory amount to no more than mere instructions to apply the exception using generic computer components (MPEP 2106.05(b)).
- The generic training to perform the abstract idea amounts to no more than mere instructions to apply the exception (MPEP 2106.05(f)).
- The acquiring and storing of data amounts to no more than insignificant extra-solution activity (MPEP 2106.05(g)), wherein the insignificant extra-solution activity is the well-understood, routine, and conventional activity of receiving or transmitting data over a network and/or storing and retrieving information in memory (MPEP 2106.05(d)).
- The one or more machine learning models, autoencoder, deterministic layer, reconstruction loss, one or more stochastic noise layers, and objective function amount to no more than indicating a field of use or technological environment in which to apply the judicial exception (MPEP 2106.05(h)).

The additional elements do not provide an inventive concept, and, therefore, the claim is not patent eligible.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.

Claim 1 is rejected under 35 U.S.C. 103 as being unpatentable over Malekzadeh et al. ("Replacement AutoEncoder: A Privacy-Preserving Algorithm for Sensory Data Analysis," hereinafter "Malekzadeh").

Regarding claim 1, Malekzadeh teaches a tangible, non-transitory, machine-readable medium storing instructions that when executed by one or more processors effectuate operations (Malekzadeh, section III – teaches the architecture used to run the replacement autoencoder application to protect sensitive data prior to sending it to a server) comprising:

obtaining, by a computer system, a dataset (Malekzadeh, section II – teaches obtaining sensitive user data; see also Malekzadeh, sections III, IV.A);

training, with the computer system, one or more machine learning models as an autoencoder to generate as output a reconstruction of the dataset based on an input of the dataset (Malekzadeh, section IV.A – teaches training an autoencoder to output reconstructed input data; see also Malekzadeh, section IV.B), wherein the autoencoder comprises deterministic layer (Malekzadeh, section IV.A – teaches training a deterministic autoencoder) and wherein training is based on minimization of reconstruction loss (Malekzadeh, section IV.A – teaches the deterministic autoencoder is trained by minimizing a reconstruction loss);

adding one or more stochastic noise layers to the trained one or more machine learning models of the autoencoder (Malekzadeh, section IV.B – teaches adding a replacement layer that stochastically replaces sensitive features with non-sensitive features; Malekzadeh, section VII – teaches perturbing the data using stochastic noise);

adjusting, with the computer system, parameters of the stochastic noise layers according to an objective function that is differentiable (Malekzadeh, section IV.B – teaches training the autoencoder with the replacement layer according to a loss function; Malekzadeh, section V.B – teaches a mean squared error loss function [differentiable] for training the autoencoder); and

storing, with the computer system, the one or more machine learning models of the autoencoder with the stochastic noise layers in memory (Malekzadeh, section III – teaches the architecture used to run the replacement autoencoder application to protect sensitive data prior to sending it to a server).

Conclusion

Any inquiry concerning this communication or earlier communications from the examiner should be directed to MARSHALL WERNER, whose telephone number is (469) 295-9143. The examiner can normally be reached Monday – Thursday, 7:30 AM – 4:30 PM ET. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Kamran Afshar, can be reached at (571) 272-7796. The fax number for the organization where this application or proceeding is assigned is (571) 273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free).
If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (in USA or Canada) or 571-272-1000.

/MARSHALL L WERNER/
Primary Examiner, Art Unit 2125
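The claim at the center of both the double patenting charts and the Malekzadeh mapping describes a two-stage pipeline: train a deterministic autoencoder by minimizing reconstruction loss, then add a stochastic noise layer whose parameters are adjusted through a differentiable objective. The sketch below is a rough illustration of that idea, not the applicant's actual implementation: a scalar linear "autoencoder" and a reparameterized Gaussian noise scale in pure Python, with every constant, objective, and modeling choice invented for illustration.

```python
import random

random.seed(0)
data = [random.uniform(-1.0, 1.0) for _ in range(256)]

# --- Stage 1: train a deterministic (scalar linear) autoencoder ---
# x_hat = w * x; minimize reconstruction loss mean((x_hat - x)^2),
# so w should converge to ~1 (perfect reconstruction).
w, lr = 0.2, 0.1
for _ in range(200):
    grad = sum(2 * (w * x - x) * x for x in data) / len(data)
    w -= lr * grad

# --- Stage 2: freeze w, add a stochastic noise layer, tune its scale ---
# Noise is injected before the frozen decoder: x_hat = w * (x + sigma * eps).
# The reparameterization eps ~ N(0, 1) keeps the objective differentiable
# in sigma even though the layer itself is stochastic.
# Objective: reconstruction penalty minus a reward (lam * log sigma)
# for injecting more noise.
sigma, lam = 0.5, 0.05
for _ in range(300):
    grad_s = 0.0
    for x in data:
        eps = random.gauss(0.0, 1.0)
        x_hat = w * (x + sigma * eps)
        grad_s += 2 * (x_hat - x) * w * eps    # d/dsigma of squared error
    grad_s = grad_s / len(data) - lam / sigma  # d/dsigma of -lam * log(sigma)
    sigma -= 0.05 * grad_s

print(f"decoder weight ~ {w:.3f}, learned noise scale sigma = {sigma:.3f}")
```

Because the noise is written as sigma * eps with eps sampled independently of sigma, gradients flow to sigma through ordinary calculus, which is what makes an "objective function that is differentiable" possible for a stochastic layer.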

Prosecution Timeline

Feb 16, 2023: Application Filed
Nov 15, 2025: Non-Final Rejection — §101, §103, §DP (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12585968
SYSTEM AND METHOD FOR TESTING MACHINE LEARNING
Granted Mar 24, 2026 (2y 5m to grant)
Patent 12579111
CROSS-DOMAIN STRUCTURAL MAPPING IN MACHINE LEARNING PROCESSING
Granted Mar 17, 2026 (2y 5m to grant)
Patent 12568890
Apparatus and Method for Controlling a Growth Environment of a Plant
Granted Mar 10, 2026 (2y 5m to grant)
Patent 12554967
USING NEGATIVE EVIDENCE TO PREDICT EVENT DATASETS
Granted Feb 17, 2026 (2y 5m to grant)
Patent 12547918
Stochastic Control with a Quantum Computer
Granted Feb 10, 2026 (2y 5m to grant)
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 1-2
Grant Probability: 66%
With Interview: 99% (+44.3%)
Median Time to Grant: 3y 11m
PTA Risk: Low
Based on 200 resolved cases by this examiner. Grant probability derived from career allow rate.
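The footnote states that grant probability is derived from the career allow rate (133 granted / 200 resolved, shown as 66%). The page does not publish the per-interview breakdown behind its 99% and +44.3% figures, but the arithmetic can be sketched with hypothetical counts chosen only to sum to the stated totals; every count below is invented for illustration.

```python
# Hypothetical per-interview outcome counts (invented; the page does not
# publish this breakdown). They are chosen to sum to the stated totals of
# 133 granted out of 200 resolved cases.
resolved_with, granted_with = 53, 52         # resolved cases that had an interview
resolved_without, granted_without = 147, 81  # resolved cases without one

career_rate = (granted_with + granted_without) / (resolved_with + resolved_without)
rate_with = granted_with / resolved_with
rate_without = granted_without / resolved_without
lift = rate_with - rate_without              # lift in percentage points

print(f"career allow rate: {career_rate:.1%}")  # the page rounds this to 66%
print(f"allow rate with interview:    {rate_with:.1%}")
print(f"allow rate without interview: {rate_without:.1%}")
print(f"interview lift: {lift:+.1%}")
```

With these invented counts the lift lands near, but not exactly at, the page's +44.3%; the point is simply that the headline numbers are ratios and differences over the examiner's resolved cases, not anything more exotic.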
