Prosecution Insights
Last updated: April 19, 2026
Application No. 18/532,767

SELF-SUPERVISED DATA OBFUSCATION IN FOUNDATION MODELS

Office Action: Non-Final, with rejections under §101, §102, §103, and nonstatutory double patenting
Filed: Dec 07, 2023
Examiner: ALABI, OLUWATOSIN O
Art Unit: 2129
Tech Center: 2100 — Computer Architecture & Software
Assignee: Protopia AI Inc.
OA Round: 1 (Non-Final)
Grant Probability: 58% (Moderate)
Expected OA Rounds: 1-2
Estimated Time to Grant: 3y 8m
Grant Probability with Interview: 85%

Examiner Intelligence

Career Allow Rate: 58% (grants 58% of resolved cases; 116 granted / 199 resolved; +3.3% vs TC average)
Interview Lift: +26.3% among resolved cases with an interview (a strong lift)
Typical Timeline: 3y 8m average prosecution; 45 applications currently pending
Career History: 244 total applications across all art units
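The headline figures above are simple ratios. As a hypothetical reconstruction (the per-case data and the dashboard's exact lift definition are not in this report), the arithmetic plausibly looks like this:

```python
# Hypothetical reconstruction of the headline examiner metrics.
# Only the totals (116 granted / 199 resolved) appear in the report;
# the lift definition and example inputs below are assumptions.

granted, resolved = 116, 199
career_allow_rate = granted / resolved  # ~0.583, shown as "58%"

def interview_lift(rate_with: float, rate_without: float) -> float:
    """Assumed definition: difference in allowance rates between
    resolved cases with and without an examiner interview."""
    return rate_with - rate_without

print(f"Career allow rate: {career_allow_rate:.1%}")          # 58.3%
# Example inputs chosen to reproduce the reported +26.3% lift.
print(f"Interview lift: {interview_lift(0.85, 0.587):+.1%}")  # +26.3%
```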

Statute-Specific Performance

§101: 21.9% (-18.1% vs TC avg)
§102: 9.5% (-30.5% vs TC avg)
§103: 40.0% (+0.0% vs TC avg)
§112: 23.2% (-16.8% vs TC avg)
Tech Center averages are estimates. Figures are based on career data from 199 resolved cases.

Office Action

Grounds of rejection: §101, §102, §103, and nonstatutory double patenting.
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Priority

Applicant claims the benefit, as a continuation, of U.S. Patent Application 18/303,454, filed 19 April 2023, which is a continuation-in-part of U.S. Patent Application 18/170,476, filed 16 February 2023, which claims the benefit of U.S. Provisional Patent Application 63/311,014, filed 16 February 2022, and claims the benefit of U.S. Provisional Patent Application 63/420,287, filed 28 October 2022. The benefit claims are acknowledged.

Drawings

The drawings were received on 12/07/2023. These drawings are acceptable.

Double Patenting

The nonstatutory double patenting rejection is based on a judicially created doctrine grounded in public policy (a policy reflected in the statute) so as to prevent the unjustified or improper timewise extension of the "right to exclude" granted by a patent and to prevent possible harassment by multiple assignees. A nonstatutory double patenting rejection is appropriate where the conflicting claims are not identical, but at least one examined application claim is not patentably distinct from the reference claim(s) because the examined application claim is either anticipated by, or would have been obvious over, the reference claim(s). See, e.g., In re Berg, 140 F.3d 1428, 46 USPQ2d 1226 (Fed. Cir. 1998); In re Goodman, 11 F.3d 1046, 29 USPQ2d 2010 (Fed. Cir. 1993); In re Longi, 759 F.2d 887, 225 USPQ 645 (Fed. Cir. 1985); In re Van Ornum, 686 F.2d 937, 214 USPQ 761 (CCPA 1982); In re Vogel, 422 F.2d 438, 164 USPQ 619 (CCPA 1970); In re Thorington, 418 F.2d 528, 163 USPQ 644 (CCPA 1969).

A timely filed terminal disclaimer in compliance with 37 CFR 1.321(c) or 1.321(d) may be used to overcome an actual or provisional rejection based on nonstatutory double patenting provided the reference application or patent either is shown to be commonly owned with the examined application, or claims an invention made as a result of activities undertaken within the scope of a joint research agreement. See MPEP § 717.02 for applications subject to examination under the first inventor to file provisions of the AIA as explained in MPEP § 2159. See MPEP § 2146 et seq. for applications not subject to examination under the first inventor to file provisions of the AIA. A terminal disclaimer must be signed in compliance with 37 CFR 1.321(b).

The filing of a terminal disclaimer by itself is not a complete reply to a nonstatutory double patenting (NSDP) rejection. A complete reply requires that the terminal disclaimer be accompanied by a reply requesting reconsideration of the prior Office action. Even where the NSDP rejection is provisional, the reply must be complete. See MPEP § 804, subsection I.B.1. For a reply to a non-final Office action, see 37 CFR 1.111(a). For a reply to a final Office action, see 37 CFR 1.113(c). A request for reconsideration, while not provided for in 37 CFR 1.113(c), may be filed after final for consideration. See MPEP §§ 706.07(e) and 714.13.

The USPTO Internet website contains terminal disclaimer forms which may be used. Please visit www.uspto.gov/patent/patents-forms. The actual filing date of the application in which the form is filed determines what form (e.g., PTO/SB/25, PTO/SB/26, PTO/AIA/25, or PTO/AIA/26) should be used. A web-based eTerminal Disclaimer may be filled out completely online using web-screens.
An eTerminal Disclaimer that meets all requirements is auto-processed and approved immediately upon submission. For more information about eTerminal Disclaimers, refer to www.uspto.gov/patents/apply/applying-online/eterminal-disclaimer.

Claims 1-6, 12, 14, and 20 are rejected on the ground of nonstatutory double patenting as being unpatentable over claims 1-6, 12, 14, and 20 of U.S. Patent No. 11,886,955. Although the claims at issue are not identical, they are not patentably distinct from each other because the claims in the patent anticipate the broader limitations in the instant application. See the claim-by-claim comparison below. U.S. Patent No. 11,886,955, hereinafter "RefDoc," teaches the limitations as highlighted in the comparison and analysis below.

Instant Claim 1 (U.S. Application No. 18/532,767): A tangible, non-transitory, machine-readable medium storing instructions that when executed by one or more processors effectuate operations comprising: obtaining, by a computer system, a dataset; training, with the computer system, one or more machine learning models as an autoencoder to generate as output a reconstruction of a record in the dataset based on an input of the record in the dataset, wherein the autoencoder comprises a deterministic layer and wherein training is based on optimization of a value indicative of reconstruction loss; adding, with the computer system, one or more stochastic noise layers to the trained one or more machine learning models of the autoencoder; adjusting, with the computer system, parameters of the stochastic noise layers according to an objective function that is differentiable; and storing, with the computer system, the one or more machine learning models of the autoencoder with the stochastic noise layers in memory.

Examiner notes: The RefDoc limitations anticipate the limitations in the instant case because the instant claim limitations are broader in scope than the RefDoc, as noted in the comparison. The claims in the instant case are anticipated by the narrower claim limitations in the RefDoc.
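For readers skimming the claim language, here is a minimal, hypothetical PyTorch sketch of the pipeline instant claim 1 recites: train a deterministic autoencoder on reconstruction loss, then insert a stochastic noise layer into the trained model and adjust only the noise parameters under a differentiable objective. All sizes, the Gaussian noise parameterization, and the post-hoc objective are illustrative assumptions, not the application's actual disclosure.

```python
import torch
import torch.nn as nn

class StochasticNoiseLayer(nn.Module):
    """Adds zero-mean Gaussian noise with a learnable per-dimension scale."""
    def __init__(self, dim: int):
        super().__init__()
        # Parameters of the parametric noise distribution (one scale per dimension).
        self.log_scale = nn.Parameter(torch.zeros(dim))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Reparameterized sampling keeps the objective differentiable in log_scale.
        return x + torch.exp(self.log_scale) * torch.randn_like(x)

# Step 1: train one or more models as a deterministic autoencoder,
# optimizing a value indicative of reconstruction loss.
enc = nn.Sequential(nn.Linear(64, 16), nn.ReLU())
dec = nn.Linear(16, 64)
data = torch.randn(256, 64)  # stand-in dataset
opt = torch.optim.Adam([*enc.parameters(), *dec.parameters()], lr=1e-3)
for _ in range(200):
    loss = nn.functional.mse_loss(dec(enc(data)), data)
    opt.zero_grad(); loss.backward(); opt.step()

# Step 2: add a stochastic noise layer to the trained autoencoder and
# adjust only the noise parameters under a differentiable objective.
for p in [*enc.parameters(), *dec.parameters()]:
    p.requires_grad_(False)
noise = StochasticNoiseLayer(16)
model = nn.Sequential(enc, noise, dec)
noise_opt = torch.optim.Adam(noise.parameters(), lr=1e-2)
for _ in range(200):
    out = model(data)
    # Illustrative trade-off: keep reconstructions usable while rewarding
    # larger injected noise (a crude stand-in for an obfuscation term).
    obj = nn.functional.mse_loss(out, data) - 0.1 * noise.log_scale.mean()
    noise_opt.zero_grad(); obj.backward(); noise_opt.step()

torch.save(model.state_dict(), "obfuscating_autoencoder.pt")  # "storing ... in memory"
```

RefDoc claim 1, quoted next, recites the same skeleton but pins down what this sketch leaves open: independent per-dimension parametric distributions, an adversarial-loss term in the objective, and transmission of the obfuscated records over an untrusted network.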
RefDoc Claim 1 (U.S. Patent No. 11,886,955): A tangible, non-transitory, machine-readable medium storing instructions that when executed by one or more processors effectuate operations comprising: obtaining, by a computer system, a training dataset; training, with the computer system, one or more machine learning models as an autoencoder to generate as output a reconstruction of a record in the training dataset based on an input of the record in the training dataset, wherein the autoencoder comprises at least one deterministic layer and wherein training is based on optimization of a value indicative of reconstruction loss and wherein the record is multi-dimensional; adding, with the computer system, one or more stochastic noise layers to the trained one or more machine learning models of the autoencoder, wherein the one or more stochastic noise layers are additional layers among the layers of the trained one or more machine learning models of the autoencoder, wherein each of the stochastic noise layers inject noise by, for each of at least a plurality of dimensions of the stochastic noise layer, sampling from a parametric noise distribution for respective dimensions, and adding for each stochastic noise layer, noise to an encoded representation of a record at the respective stochastic noise layer, and wherein the parametric noise distributions for each of the at least the plurality of dimensions are independent; adjusting, with the computer system, parameters of the parametric noise distributions for the dimensions of the one or more stochastic noise layers according to an objective function that is differentiable, wherein the objective function comprises a measure of adversarial loss; storing, with the computer system, the one or more machine learning models of the autoencoder with the stochastic noise layers in memory; obtaining, by the computer system, one or more records from a second dataset; generating, with the stored one or more machine learning models of the autoencoder with the stochastic noise layers, one or more obfuscated records for the one or more records from the second dataset; and transmitting, by the computer system, the one or more obfuscated records by an untrusted network.

Instant Claim 2: A tangible, non-transitory, machine-readable medium storing instructions that when executed by one or more processors effectuate operations comprising: obtaining, by a computer system, a machine learning model; obtaining, by the computer system, a training data set; training, by the computer system, an obfuscation transform based on the machine learning model and the training data set by self-supervision; and storing, with the computer system, the trained obfuscation transform in memory.

Examiner notes: The RefDoc limitations anticipate the limitations of the current claim for the same reasons noted above.
RefDoc Claim 2: A tangible, non-transitory, machine-readable medium storing instructions that when executed by one or more processors effectuate operations comprising: obtaining, by a computer system, a access to a machine learning model, wherein the machine learning model is a trained machine learning model, wherein the machine learning model is a foundation model, and wherein the machine learning model generates embeddings based on input records; obtaining, by the computer system, a training data set; determining, by the computer system, an obfuscation transform based on the machine learning model and the training data set by self-supervision, wherein determining the obfuscation transform comprises: obtaining training embeddings based on the training set from the machine learning model; generating the obfuscation transform based on dimensionality of the training embeddings, wherein the obfuscation transform injects noise by, for each of at least a plurality of dimensions of the training embedding, sampling from a parametric noise distribution and applying the sampled parametric noise to a given embedding; training the obfuscation transform by adjusting parameters of the parametric noise distribution according to an objective function that is differentiable, wherein the object function comprises a measure of mutual information and a measure of data loss; and storing, with the computer system, the trained obfuscation transform in memory.

Instant Claim 3: The medium of claim 2, wherein the machine learning model is a generative artificial intelligence (AI) model trained with self-supervision, and the trained obfuscation transform is configured to transform records into obfuscated records that are correctly processed by the machine learning model despite the obfuscation.

Examiner notes: The RefDoc limitations anticipate the limitations of the current claim for the same reasons noted above.

RefDoc Claim 3: The medium of claim 2, wherein the machine learning model is a generative artificial intelligence (AI) model trained with self-supervision, and the trained obfuscation transform is configured to transform records into obfuscated embeddings that are correctly processed by the machine learning model despite the obfuscation transform.

Instant Claim 4: The medium of claim 2, wherein the machine learning model is a foundation model, where the foundation model is operative to perform a plurality of tasks at inference time with capabilities that emerged during training and were not explicitly measured by an objective function used to train the foundation model.

Examiner notes: The RefDoc limitations anticipate the limitations of the current claim for the same reasons noted above.

RefDoc Claim 4: The medium of claim 2, wherein the foundation model is operative to perform a plurality of tasks at inference time with capabilities that emerged during training and were not explicitly measured by an objective function used to train the foundation model.

Instant Claim 5: The medium of claim 2, wherein training the obfuscation transform comprises: adding an obfuscation transform to at least one of the training data set and the machine learning model; and adjusting parameters of the obfuscation transform according to an objective function that is differentiable.

Examiner notes: The RefDoc limitations anticipate the limitations of the current claim for the same reasons noted above.
RefDoc Claim 5: The medium of claim 2, the operations further comprising: applying the trained obfuscation transform to at least one of a production data set and within the machine learning model, wherein the obfuscation transform generates an obfuscated embedding based on an input record, and wherein adjusting parameters of the objective function comprises optimizing the objecting function by minimizing mutual information between records of the training data set and obfuscated embeddings corresponding to the training embeddings and minimizing data loss between the records of the training data set and the obfuscated embeddings corresponding to the training embeddings.

Instant Claim 6: The medium of claim 2, wherein the obfuscation transform comprises a stochastic noise layer and wherein training the obfuscation transform comprises determining parameters of distribution of stochastic noise of the stochastic noise layer.

Examiner notes: The RefDoc limitations anticipate the limitations of the current claim for the same reasons noted above.

RefDoc Claim 6: The medium of claim 2, wherein the obfuscation transform comprises a stochastic noise layer of substantially the same dimensionality as the training embeddings, wherein the stochastic noise layer injects noise independently for each of the at least the plurality of dimensions of the given embedding, and wherein training the obfuscation transform comprises determining parameters of distribution of stochastic noise for each of the dimensions of the stochastic noise layer.

Instant Claim 12: The medium of claim 2, further comprising tuning the machine learning model based on the training data set.

Examiner notes: The RefDoc limitations anticipate the limitations of the current claim for the same reasons noted above.

RefDoc Claim 12: The medium of claim 2, further comprising tuning the machine learning model to perform an inference task based on the training data set and a second objective function that is differentiable.

Instant Claim 13: The medium of claim 12, further comprising deploying the tuned machine learning model.

RefDoc Claim 13: The medium of claim 12, further comprising deploying the tuned machine learning model, wherein the tuned machine learning model is a distributed model and wherein a portion of the tuned machine learning model is deployed to a trusted network and wherein a portion of the tuned machine learning model is distributed to an untrusted network.

Instant Claim 14: The medium of claim 2, further comprising applying the stored obfuscation transform to a set of production data.

Examiner notes: The RefDoc limitations anticipate the limitations of the current claim for the same reasons noted above.

RefDoc Claim 14: The medium of claim 2, further comprising: obtaining one or more records from a production data set; generating, with the stored obfuscation transform, obfuscated embeddings based on the production data set; transmitting the obfuscated embeddings corresponding to the production data set to the machine learning model by a network, and performing an inference on the obfuscated embeddings corresponding to the production data set by the machine learning model, wherein the production data set comprises sensitive data, and wherein the obfuscated embeddings corresponding to the production data set do not comprise sensitive data.
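The RefDoc claims quoted above (claims 2, 5, 6, and 14) describe a second variant: per-dimension parametric noise applied to embeddings from a frozen foundation model, trained against a differentiable objective combining a measure of mutual information with a measure of data loss. The sketch below is a hedged illustration only; the stand-in linear "foundation model," the Gaussian-channel surrogate for mutual information, and the loss weighting are all assumptions, not the patent's disclosure.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)
foundation = nn.Linear(32, 8).requires_grad_(False)  # stand-in embedding model
records = torch.randn(512, 32)                       # stand-in training data set
emb = foundation(records)                            # training embeddings

# Obfuscation transform sized to the embedding dimensionality: one
# independent Gaussian noise distribution per dimension.
log_sigma = nn.Parameter(torch.zeros(emb.shape[-1]))
opt = torch.optim.Adam([log_sigma], lr=1e-2)

signal_var = emb.var(dim=0).detach()                 # per-dimension signal power
for _ in range(200):
    sigma = torch.exp(log_sigma)
    obf = emb + sigma * torch.randn_like(emb)        # sample and apply noise
    # Gaussian-channel surrogate for the mutual information between an
    # embedding and its obfuscated version: 0.5 * log(1 + SNR) per dimension.
    mi_proxy = 0.5 * torch.log1p(signal_var / sigma**2).mean()
    data_loss = nn.functional.mse_loss(obf, emb)     # utility-preservation term
    obj = mi_proxy + 1.0 * data_loss                 # differentiable objective
    opt.zero_grad(); obj.backward(); opt.step()

torch.save({"log_sigma": log_sigma.detach()}, "obfuscation_transform.pt")
```

At inference (the scenario of RefDoc claim 14), production records would pass through the frozen model and this transform, so that only obfuscated embeddings leave the trusted environment.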
Instant Claim 20: A method comprising: obtaining, with a computer system, a machine learning model; obtaining, with the computer system, a training data set; training, with the computer system, an obfuscation transform based on the machine learning model and the training data set; and storing, with the computer system, the obfuscation transform in memory.

RefDoc Claim 20: A method comprising: obtaining, with a computer system, a access to machine learning model, wherein the machine learning model is a trained machine learning model, wherein the machine learning model is a foundation model, and wherein the machine learning model generates embeddings based on input records; obtaining, with the computer system, a training data set, the training set containing sensitive data; determining, with the computer system, an obfuscation transform based on the machine learning model and the training data set, wherein determining the obfuscation transform comprises: obtaining training embeddings based on the training set from the machine learning model; generating the obfuscation transform based on dimensionality of the training embeddings, wherein the obfuscation transform injects noise by, for each of at least a plurality of dimensions of the training embedding, sampling from a parametric noise distribution and applying the sampled parametric noise to a given embedding; training the obfuscation transform by adjusting parameters of the parametric noise distribution according to an objective function that is differentiable, wherein the object function comprises a measure of mutual information and a measure of data loss; and storing, with the computer system, the obfuscation transform in memory.

Claim 1 is rejected on the ground of nonstatutory double patenting as being unpatentable over claim 1 of U.S. Non-provisional Application No. 18/170,476. Although the claims at issue are not identical, they are not patentably distinct from each other because the claims in the reference application anticipate the broader limitations in the instant application. See the comparison below. U.S. Non-provisional Application No. 18/170,476, hereinafter "RefDoc2," teaches the limitations as highlighted in the comparison and analysis below.

Instant Claim 1 (U.S. Application No. 18/532,767): A tangible, non-transitory, machine-readable medium storing instructions that when executed by one or more processors effectuate operations comprising: obtaining, by a computer system, a dataset; training, with the computer system, one or more machine learning models as an autoencoder to generate as output a reconstruction of a record in the dataset based on an input of the record in the dataset, wherein the autoencoder comprises a deterministic layer and wherein training is based on optimization of a value indicative of reconstruction loss; adding, with the computer system, one or more stochastic noise layers to the trained one or more machine learning models of the autoencoder; adjusting, with the computer system, parameters of the stochastic noise layers according to an objective function that is differentiable; and storing, with the computer system, the one or more machine learning models of the autoencoder with the stochastic noise layers in memory.

Examiner notes: The RefDoc2 limitations anticipate the limitations in the instant case because the instant claim limitations are broader in scope than the RefDoc2, as noted in the comparison.
The claims in the instant case are anticipated by the narrower claim limitations in the RefDoc2.

RefDoc2 Claim 1 (U.S. Application No. 18/170,476): A tangible, non-transitory, machine-readable medium storing instructions that when executed by one or more processors effectuate operations comprising: obtaining, by a computer system, a dataset; training, with the computer system, one or more machine learning models as an autoencoder to generate as output a reconstruction of the dataset based on an input of the dataset, wherein the autoencoder comprises deterministic layer and wherein training is based on minimization of reconstruction loss; adding one or more stochastic noise layers to the trained one or more machine learning models of the autoencoder; adjusting, with the computer system, parameters of the stochastic noise layers according to an objective function that is differentiable; and storing, with the computer system, the one or more machine learning models of the autoencoder with the stochastic noise layers in memory.

Instant Claim 2: A tangible, non-transitory, machine-readable medium storing instructions that when executed by one or more processors effectuate operations comprising: obtaining, by a computer system, a machine learning model; obtaining, by the computer system, a training data set; training, by the computer system, an obfuscation transform based on the machine learning model and the training data set by self-supervision; and storing, with the computer system, the trained obfuscation transform in memory.

Examiner notes: The RefDoc2 limitations anticipate the limitations in the instant case because the instant claim limitations are broader in scope than the RefDoc2, as noted in the comparison. The claims in the instant case are anticipated by the narrower claim limitations in the RefDoc2. The training disclosed in the RefDoc2 is considered a self-supervised learning/training process that uses the obtained learning model and training set, which anticipates the broader recitation in the instant case.

RefDoc2 Claim 1 (cited against instant claim 2): A tangible, non-transitory, machine-readable medium storing instructions that when executed by one or more processors effectuate operations comprising: obtaining, by a computer system, a dataset; training, with the computer system, one or more machine learning models as an autoencoder to generate as output a reconstruction of the dataset based on an input of the dataset, wherein the autoencoder comprises deterministic layer and wherein training is based on minimization of reconstruction loss; adding one or more stochastic noise layers to the trained one or more machine learning models of the autoencoder; adjusting, with the computer system, parameters of the stochastic noise layers according to an objective function that is differentiable; and storing, with the computer system, the one or more machine learning models of the autoencoder with the stochastic noise layers in memory.

Claim Rejections - 35 USC § 101

35 U.S.C. 101 reads as follows: Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.

Claims 2-20 are rejected under 35 U.S.C. 101 because the claimed invention is directed to a judicial exception (i.e., an abstract idea) without significantly more.

Claim 2:

Does the claim fall within a statutory category? Yes.

Step 2A, Prong 1: Evaluate whether the claim recites a judicial exception.
training, by the computer system, an obfuscation transform … (Abstract idea: elements considered directed to organizing information and manipulating information through mathematical correlations; mathematical relationships; see MPEP § 2106.04(a)(2), subsection I.)

Step 2A, Prong 2: Evaluate whether the claim as a whole integrates the recited judicial exception into a practical application of the exception.

The preamble is deemed insufficient to transform the judicial exception into a patent-eligible invention because the preamble generally links the use of a judicial exception to a particular technological environment or field of use; see MPEP § 2106.05(h).

wherein the machine learning model is a generative artificial intelligence (AI) model trained with self-supervision, (Deemed insufficient to transform the judicial exception into a patent-eligible invention because the recitation is directed to generally linking the use of a judicial exception to a particular technological environment or field of use. See MPEP § 2106.05(h).)

and the trained obfuscation transform is configured to transform records (Deemed insufficient to transform the judicial exception into a patent-eligible invention because the recitation merely includes instructions to implement an abstract idea on a computer, or merely uses a computer as a tool to perform an abstract idea; thus the claim limitations amount to mere instructions to apply the judicial exception using a computer/computing environment as a tool, as discussed in MPEP § 2106.05(f).)

obtaining, by a computer system, a machine learning model; obtaining, by the computer system, a training data set; (Deemed insufficient to transform the judicial exception into a patent-eligible invention because the recitation is directed to insignificant extra-solution activity, e.g., receiving or transmitting data over a network.)

training, by the computer system, an obfuscation transform based on the machine learning model and the training data set by self-supervision; (Deemed insufficient to transform the judicial exception into a patent-eligible invention because the recitation merely includes instructions to implement an abstract idea on a computer, or merely uses a computer as a tool to perform an abstract idea; thus the claim limitations amount to mere instructions to apply the judicial exception using a computer/computing environment as a tool, as discussed in MPEP § 2106.05(f).)

and storing, with the computer system, the trained obfuscation transform in memory. (Deemed insufficient to transform the judicial exception into a patent-eligible invention because the recitation is directed to insignificant extra-solution activity, e.g., storing and retrieving information in memory.)

The additional elements do not appear to be sufficient to transform the judicial exception into a practical application at Step 2A, as analyzed above.

Step 2B: Evaluate whether the claim as a whole, with the additional elements considered individually and in combination, amounts to significantly more than the recited judicial exception.

The claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception, and the additional elements fail to integrate the abstract idea into a practical application. First, the additional limitations are directed to elements that generally link the use of a judicial exception to a particular technological environment or field of use and merely invoke the use of computer technology as a tool for applying the judicial exception. Second, the limitations noted above are insufficient to transform the judicial exception into a patent-eligible invention because they are directed to insignificant extra-solution activity. The courts have deemed these types of activity well-understood, routine, and conventional; see the evidence noted below:

Receiving or transmitting data over a network, e.g., using the Internet to gather data: Symantec, 838 F.3d at 1321, 120 USPQ2d at 1362 (utilizing an intermediary computer to forward information); TLI Communications LLC v. AV Auto. LLC, 823 F.3d 607, 610, 118 USPQ2d 1744, 1745 (Fed. Cir. 2016) (using a telephone for image transmission); OIP Techs., Inc. v. Amazon.com, Inc., 788 F.3d 1359, 1363, 115 USPQ2d 1090, 1093 (Fed. Cir. 2015) (sending messages over a network); buySAFE, Inc. v. Google, Inc., 765 F.3d 1350, 1355, 112 USPQ2d 1093, 1096 (Fed. Cir. 2014) (computer receives and sends information over a network); but see DDR Holdings, LLC v. Hotels.com, L.P., 773 F.3d 1245, 1258, 113 USPQ2d 1097, 1106 (Fed. Cir. 2014) ("Unlike the claims in Ultramercial, the claims at issue here specify how interactions with the Internet are manipulated to yield a desired result--a result that overrides the routine and conventional sequence of events ordinarily triggered by the click of a hyperlink." (emphasis added)).

Storing and retrieving information in memory: Versata Dev. Group, Inc. v. SAP Am., Inc., 793 F.3d 1306, 1334, 115 USPQ2d 1681, 1701 (Fed. Cir. 2015); OIP Techs., 788 F.3d at 1363, 115 USPQ2d at 1092-93.

These types of claimed elements cannot transform the judicial exception into a practical application at Step 2A or provide an inventive concept at Step 2B. Thus, considering the additional elements individually and in combination and the claim as a whole, the additional elements do not provide significantly more than the abstract idea. This claim is not patent eligible.

Claim 3:

Does the claim fall within a statutory category? Yes.

Step 2A, Prong 1: Evaluate whether the claim recites a judicial exception.

(Considered directed to a mental process: making evaluations and judgments of observations, as claimed; see MPEP § 2106.04(a)(2), subsection III. Also considered directed to mathematical concepts: mathematical relationships, mathematical formulas or equations, mathematical calculations; see MPEP § 2106.04(a)(2), subsection I.)

Step 2A, Prong 2: Evaluate whether the claim as a whole integrates the recited judicial exception into a practical application of the exception.

The preamble is deemed insufficient to transform the judicial exception into a patent-eligible invention because the preamble generally links the use of a judicial exception to a particular technological environment or field of use; see MPEP § 2106.05(h).

wherein the machine learning model is a generative artificial intelligence (AI) model trained with self-supervision, (Deemed insufficient to transform the judicial exception into a patent-eligible invention because the recitation is directed to generally linking the use of a judicial exception to a particular technological environment or field of use. See MPEP § 2106.05(h).)
and the trained obfuscation transform is configured to transform records (Deemed insufficient to transform the judicial exception into a patent-eligible invention because the recitation merely includes instructions to implement an abstract idea on a computer, or merely uses a computer as a tool to perform an abstract idea; thus the claim limitations amount to mere instructions to apply the judicial exception using a computer/computing environment as a tool, as discussed in MPEP § 2106.05(f).)

From claim 2: obtaining, by a computer system, a machine learning model; obtaining, by the computer system, a training data set; (Deemed insufficient to transform the judicial exception into a patent-eligible invention because the recitation is directed to insignificant extra-solution activity, e.g., receiving or transmitting data over a network.)

From claim 2: training, by the computer system, an obfuscation transform based on the machine learning model and the training data set by self-supervision; (Deemed insufficient to transform the judicial exception into a patent-eligible invention because the recitation merely includes instructions to implement an abstract idea on a computer, or merely uses a computer as a tool to perform an abstract idea; thus the claim limitations amount to mere instructions to apply the judicial exception using a computer/computing environment as a tool, as discussed in MPEP § 2106.05(f).)

From claim 2: and storing, with the computer system, the trained obfuscation transform in memory. (Deemed insufficient to transform the judicial exception into a patent-eligible invention because the recitation is directed to insignificant extra-solution activity, e.g., storing and retrieving information in memory.)

The additional elements do not appear to be sufficient to transform the judicial exception into a practical application at Step 2A, as analyzed above.

Step 2B: Evaluate whether the claim as a whole, with the additional elements considered individually and in combination, amounts to significantly more than the recited judicial exception.

The claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception, and the additional elements fail to integrate the abstract idea into a practical application. First, the additional limitations are directed to elements that generally link the use of a judicial exception to a particular technological environment or field of use and merely invoke the use of computer technology as a tool for applying the judicial exception. Second, the limitations noted above are insufficient to transform the judicial exception into a patent-eligible invention because they are directed to insignificant extra-solution activity. The courts have deemed these types of activity well-understood, routine, and conventional; see the evidence noted below:

Receiving or transmitting data over a network, e.g., using the Internet to gather data: Symantec, 838 F.3d at 1321, 120 USPQ2d at 1362 (utilizing an intermediary computer to forward information); TLI Communications LLC v. AV Auto. LLC, 823 F.3d 607, 610, 118 USPQ2d 1744, 1745 (Fed. Cir. 2016) (using a telephone for image transmission); OIP Techs., Inc. v. Amazon.com, Inc., 788 F.3d 1359, 1363, 115 USPQ2d 1090, 1093 (Fed. Cir. 2015) (sending messages over a network); buySAFE, Inc. v. Google, Inc., 765 F.3d 1350, 1355, 112 USPQ2d 1093, 1096 (Fed. Cir. 2014) (computer receives and sends information over a network); but see DDR Holdings, LLC v. Hotels.com, L.P., 773 F.3d 1245, 1258, 113 USPQ2d 1097, 1106 (Fed. Cir. 2014) ("Unlike the claims in Ultramercial, the claims at issue here specify how interactions with the Internet are manipulated to yield a desired result--a result that overrides the routine and conventional sequence of events ordinarily triggered by the click of a hyperlink." (emphasis added)).

Storing and retrieving information in memory: Versata Dev. Group, Inc. v. SAP Am., Inc., 793 F.3d 1306, 1334, 115 USPQ2d 1681, 1701 (Fed. Cir. 2015); OIP Techs., 788 F.3d at 1363, 115 USPQ2d at 1092-93.

These types of claimed elements cannot transform the judicial exception into a practical application at Step 2A or provide an inventive concept at Step 2B. Thus, considering the additional elements individually and in combination and the claim as a whole, the additional elements do not provide significantly more than the abstract idea. This claim is not patent eligible.

Claim 4:

Does the claim fall within a statutory category? Yes.

Step 2A, Prong 1: Evaluate whether the claim recites a judicial exception.

(Considered directed to a mental process: making evaluations and judgments of observations, as claimed; see MPEP § 2106.04(a)(2), subsection III.)

Step 2A, Prong 2: Evaluate whether the claim as a whole integrates the recited judicial exception into a practical application of the exception.

The preamble is deemed insufficient to transform the judicial exception into a patent-eligible invention because the preamble generally links the use of a judicial exception to a particular technological environment or field of use; see MPEP § 2106.05(h).

(Deemed insufficient to transform the judicial exception into a patent-eligible invention because the recitation is directed to generally linking the use of a judicial exception to a particular technological environment or field of use. See MPEP § 2106.05(h).)

From claim 2: obtaining, by a computer system, a machine learning model; obtaining, by the computer system, a training data set; (Deemed insufficient to transform the judicial exception into a patent-eligible invention because the recitation is directed to insignificant extra-solution activity, e.g., receiving or transmitting data over a network.)

From claim 2: training, by the computer system, an obfuscation transform based on the machine learning model and the training data set by self-supervision; (Deemed insufficient to transform the judicial exception into a patent-eligible invention because the recitation merely includes instructions to implement an abstract idea on a computer, or merely uses a computer as a tool to perform an abstract idea; thus the claim limitations amount to mere instructions to apply the judicial exception using a computer/computing environment as a tool, as discussed in MPEP § 2106.05(f).)

From claim 2: and storing, with the computer system, the trained obfuscation transform in memory. (Deemed insufficient to transform the judicial exception into a patent-eligible invention because the recitation is directed to insignificant extra-solution activity, e.g., storing and retrieving information in memory.)

The additional elements do not appear to be sufficient to transform the judicial exception into a practical application at Step 2A, as analyzed above.

Step 2B: Evaluate whether the claim as a whole, with the additional elements considered individually and in combination, amounts to significantly more than the recited judicial exception.

The claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception, and the additional elements fail to integrate the abstract idea into a practical application.
First, the additional limitations are directed to elements that generally link the use of a judicial exception to a particular technological environment or field of use and merely invoke the use of computer technology as a tool for applying the judicial exception. Second, the limitations noted above are insufficient to transform the judicial exception into a patent-eligible invention because they are directed to insignificant extra-solution activity. The courts have deemed these types of activity well-understood, routine, and conventional; see the evidence noted below:

Receiving or transmitting data over a network, e.g., using the Internet to gather data: Symantec, 838 F.3d at 1321, 120 USPQ2d at 1362 (utilizing an intermediary computer to forward information); TLI Communications LLC v. AV Auto. LLC, 823 F.3d 607, 610, 118 USPQ2d 1744, 1745 (Fed. Cir. 2016) (using a telephone for image transmission); OIP Techs., Inc. v. Amazon.com, Inc., 788 F.3d 1359, 1363, 115 USPQ2d 1090, 1093 (Fed. Cir. 2015) (sending messages over a network); buySAFE, Inc. v. Google, Inc., 765 F.3d 1350, 1355, 112 USPQ2d 1093, 1096 (Fed. Cir. 2014) (computer receives and sends information over a network); but see DDR Holdings, LLC v. Hotels.com, L.P., 773 F.3d 1245, 1258, 113 USPQ2d 1097, 1106 (Fed. Cir. 2014) ("Unlike the claims in Ultramercial, the claims at issue here specify how interactions with the Internet are manipulated to yield a desired result--a result that overrides the routine and conventional sequence of events ordinarily triggered by the click of a hyperlink." (emphasis added)).

Storing and retrieving information in memory: Versata Dev. Group, Inc. v. SAP Am., Inc., 793 F.3d 1306, 1334, 115 USPQ2d 1681, 1701 (Fed. Cir. 2015); OIP Techs., 788 F.3d at 1363, 115 USPQ2d at 1092-93.

These types of claimed elements cannot transform the judicial exception into a practical application at Step 2A or provide an inventive concept at Step 2B. Thus, considering the additional elements individually and in combination and the claim as a whole, the additional elements do not provide significantly more than the abstract idea. This claim is not patent eligible.

Claim 5:

Does the claim fall within a statutory category? Yes.

Step 2A, Prong 1: Evaluate whether the claim recites a judicial exception.

wherein training the obfuscation transform comprises: adding an obfuscation transform to at least one of the training data set and the machine learning model; and adjusting parameters of the obfuscation transform according to an objective function that is differentiable. (Considered directed to mathematical concepts: mathematical relationships, mathematical formulas or equations, mathematical calculations; see MPEP § 2106.04(a)(2), subsection I.)

Step 2A, Prong 2: Evaluate whether the claim as a whole integrates the recited judicial exception into a practical application of the exception.

The preamble is deemed insufficient to transform the judicial exception into a patent-eligible invention because the preamble generally links the use of a judicial exception to a particular technological environment or field of use; see MPEP § 2106.05(h).

From claim 2: obtaining, by a computer system, a machine learning model; obtaining, by the computer system, a training data set; (Deemed insufficient to transform the judicial exception into a patent-eligible invention because the recitation is directed to insignificant extra-solution activity, e.g., receiving or transmitting data over a network.)

From claim 2: training, by the computer system, an obfuscation transform based on the machine learning model and the training data set by self-supervision; (Deemed insufficient to transform the judicial exception into a patent-eligible invention because the recitation merely includes instructions to implement an abstract idea on a computer, or merely uses a computer as a tool to perform an abstract idea; thus the claim limitations amount to mere instructions to apply the judicial exception using a computer/computing environment as a tool, as discussed in MPEP § 2106.05(f).)

From claim 2: and storing, with the computer system, the trained obfuscation transform in memory. (Deemed insufficient to transform the judicial exception into a patent-eligible invention because the recitation is directed to insignificant extra-solution activity, e.g., storing and retrieving information in memory.)

The additional elements do not appear to be sufficient to transform the judicial exception into a practical application at Step 2A, as analyzed above.

Step 2B: Evaluate whether the claim as a whole, with the additional elements considered individually and in combination, amounts to significantly more than the recited judicial exception.

The claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception, and the additional elements fail to integrate the abstract idea into a practical application. First, the additional limitations are directed to elements that generally link the use of a judicial exception to a particular technological environment or field of use and merely invoke the use of computer technology as a tool for applying the judicial exception. Second, the limitations noted above are insufficient to transform the judicial exception into a patent-eligible invention because they are directed to insignificant extra-solution activity. The courts have deemed these types of activity well-understood, routine, and conventional; see the evidence noted below:

Receiving or transmitting data over a network, e.g., using the Internet to gather data: Symantec, 838 F.3d at 1321, 120 USPQ2d at 1362 (utilizing an intermediary computer to forward information); TLI Communications LLC v. AV Auto. LLC, 823 F.3d 607, 610, 118 USPQ2d 1744, 1745 (Fed. Cir. 2016) (using a telephone for image transmission); OIP Techs., Inc. v. Amazon.com, Inc., 788 F.3d 1359, 1363, 115 USPQ2d 1090, 1093 (Fed. Cir. 2015) (sending messages over a network); buySAFE, Inc. v. Google, Inc., 765 F.3d 1350, 1355, 112 USPQ2d 1093, 1096 (Fed. Cir. 2014) (computer receives and sends information over a network); but see DDR Holdings, LLC v. Hotels.com, L.P., 773 F.3d 1245, 1258, 113 USPQ2d 1097, 1106 (Fed. Cir. 2014) ("Unlike the claims in Ultramercial, the claims at issue here specify how interactions with the Internet are manipulated to yield a desired result--a result that overrides the routine and conventional sequence of events ordinarily triggered by the click of a hyperlink." (emphasis added)).

Storing and retrieving information in memory: Versata Dev. Group, Inc. v. SAP Am., Inc., 793 F.3d 1306, 1334, 115 USPQ2d 1681, 1701 (Fed. Cir. 2015); OIP Techs., 788 F.3d at 1363, 115 USPQ2d at 1092-93.

These types of claimed elements cannot transform the judicial exception into a practical application at Step 2A or provide an inventive concept at Step 2B. Thus, considering the additional elements individually and in combination and the claim as a whole, the additional elements do not provide significantly more than the abstract idea.
This claim is not patent eligible.

Claim 6:

Does the claim fall within a statutory category? Yes.

Step 2A, Prong 1: Evaluate whether the claim recites a judicial exception.

(Considered directed to mathematical concepts: mathematical relationships, mathematical formulas or equations, mathematical calculations; see MPEP § 2106.04(a)(2), subsection I.)

Step 2A, Prong 2: Evaluate whether the claim as a whole integrates the recited judicial exception into a practical application of the exception.

The preamble is deemed insufficient to transform the judicial exception into a patent-eligible invention because the preamble generally links the use of a judicial exception to a particular technological environment or field of use; see MPEP § 2106.05(h).

wherein the obfuscation transform comprises a stochastic noise layer (Deemed insufficient to transform the judicial exception into a patent-eligible invention because the recitation is directed to generally linking the use of a judicial exception to a particular technological environment or field of use. See MPEP § 2106.05(h).)

From claim 2: obtaining, by a computer system, a machine learning model; obtaining, by the computer system, a training data set; (Deemed insufficient to transform the judicial exception into a patent-eligible invention because the recitation is directed to insignificant extra-solution activity, e.g., receiving or transmitting data over a network.)

From claim 2: training, by the computer system, an obfuscation transform based on the machine learning model and the training data set by self-supervision; (Deemed insufficient to transform the judicial exception into a patent-eligible invention because the recitation merely includes instructions to implement an abstract idea on a computer, or merely uses a computer as a tool to perform an abstract idea; thus the claim limitations amount to mere instructions to apply the judicial exception using a computer/computing environment as a tool, as discussed in MPEP § 2106.05(f).)

From claim 2: and storing, with the computer system, the trained obfuscation transform in memory. (Deemed insufficient to transform the judicial exception into a patent-eligible invention because the recitation is directed to insignificant extra-solution activity, e.g., storing and retrieving information in memory.)

The additional elements do not appear to be sufficient to transform the judicial exception into a practical application at Step 2A, as analyzed above.

Step 2B: Evaluate whether the claim as a whole, with the additional elements considered individually and in combination, amounts to significantly more than the recited judicial exception.

The claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception, and the additional elements fail to integrate the abstract idea into a practical application. First, the additional limitations are directed to elements that generally link the use of a judicial exception to a particular technological environment or field of use and merely invoke the use of computer technology as a tool for applying the judicial exception. Second, the limitations noted above are insufficient to transform the judicial exception into a patent-eligible invention because they are directed to insignificant extra-solution activity. The courts have deemed these types of activity well-understood, routine, and conventional; see the evidence noted below:

Receiving or transmitting data over a network, e.g., using the Internet to gather data: Symantec, 838 F.3d at 1321, 120 USPQ2d at 1362 (utilizing an intermediary computer to forward information); TLI Communications LLC v. AV Auto. LLC, 823 F.3d 607, 610, 118 USPQ2d 1744, 1745 (Fed. Cir. 2016) (using a telephone for image transmission); OIP Techs., Inc. v. Amazon.com, Inc., 788 F.3d 1359, 1363, 115 USPQ2d 1090, 1093 (Fed. Cir. 2015) (sending messages over a network); buySAFE, Inc. v. Google, Inc., 765 F.3d 1350, 1355, 112 USPQ2d 1093, 1096 (Fed. Cir. 2014) (computer receives and sends information over a network); but see DDR Holdings, LLC v. Hotels.com, L.P., 773 F.3d 1245, 1258, 113 USPQ2d 1097, 1106 (Fed. Cir. 2014) ("Unlike the claims in Ultramercial, the claims at issue here specify how interactions with the Internet are manipulated to yield a desired result--a result that overrides the routine and conventional sequence of events ordinarily triggered by the click of a hyperlink." (emphasis added)).

Storing and retrieving information in memory: Versata Dev. Group, Inc. v. SAP Am., Inc., 793 F.3d 1306, 1334, 115 USPQ2d 1681, 1701 (Fed. Cir. 2015); OIP Techs., 788 F.3d at 1363, 115 USPQ2d at 1092-93.

These types of claimed elements cannot transform the judicial exception into a practical application at Step 2A or provide an inventive concept at Step 2B. Thus, considering the additional elements individually and in combination and the claim as a whole, the additional elements do not provide significantly more than the abstract idea. This claim is not patent eligible.

Claim 7:

Does the claim fall within a statutory category? Yes.

Step 2A, Prong 1: Evaluate whether the claim recites a judicial exception.

Abstract idea from claim 6.

Step 2A, Prong 2: Evaluate whether the claim as a whole integrates the recited judicial exception into a practical application of the exception.

The preamble is deemed insufficient to transform the judicial exception into a patent-eligible invention because the preamble generally links the use of a judicial exception to a particular technological environment or field of use; see MPEP § 2106.05(h).

wherein the stochastic noise layer is applied to input into the machine learning model. (Deemed insufficient to transform the judicial exception into a patent-eligible invention because the recitation merely includes instructions to implement an abstract idea on a computer, or merely uses a computer as a tool to perform an abstract idea; thus the claim limitations amount to mere instructions to apply the judicial exception using a computer/computing environment as a tool, as discussed in MPEP § 2106.05(f).)

From claim 2: obtaining, by a computer system, a machine learning model; obtaining, by the computer system, a training data set; (Deemed insufficient to transform the judicial exception into a patent-eligible invention because the recitation is directed to insignificant extra-solution activity, e.g., receiving or transmitting data over a network.)

From claim 2: training, by the computer system, an obfuscation transform based on the machine learning model and the training data set by self-supervision; (Deemed insufficient to transform the judicial exception into a patent-eligible invention because the recitation merely includes instructions to implement an abstract idea on a computer, or merely uses a computer as a tool to perform an abstract idea; thus the claim limitations amount to mere instructions to apply the judicial exception using a computer/computing environment as a tool, as discussed in MPEP § 2106.05(f).)

From claim 2: and storing, with the computer system, the trained obfuscation transform in memory.
(Deemed insufficient to transform the judicial exception into a patent-eligible invention because the recitation is directed to insignificant extra-solution activity, e.g., storing and retrieving information in memory.) The additional elements do not appear to be sufficient to integrate the judicial exception into a practical application at Step 2A, as analyzed above.

Step 2B: Evaluate whether the additional elements, considered individually and in combination, amount to significantly more than the judicial exception. The claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception, and the additional elements fail to integrate the abstract idea into a practical application. First, the additional limitations generally link the use of the judicial exception to a particular technological environment or field of use and merely invoke computer technology as a tool for applying the judicial exception. Second, the remaining limitations are insufficient to transform the judicial exception into a patent-eligible invention because they are directed to insignificant extra-solution activity, as noted above. The courts have deemed these types of activity well-understood, routine, and conventional, as evidenced below: receiving or transmitting data over a network, e.g., using the Internet to gather data, Symantec, 838 F.3d at 1321, 120 USPQ2d at 1362 (utilizing an intermediary computer to forward information); TLI Communications LLC v. AV Auto. LLC, 823 F.3d 607, 610, 118 USPQ2d 1744, 1745 (Fed. Cir. 2016) (using a telephone for image transmission); OIP Techs., Inc. v. Amazon.com, Inc., 788 F.3d 1359, 1363, 115 USPQ2d 1090, 1093 (Fed. Cir. 2015) (sending messages over a network); buySAFE, Inc. v. Google, Inc., 765 F.3d 1350, 1355, 112 USPQ2d 1093, 1096 (Fed. Cir. 2014) (computer receives and sends information over a network); but see DDR Holdings, LLC v. Hotels.com, L.P., 773 F.3d 1245, 1258, 113 USPQ2d 1097, 1106 (Fed. Cir. 2014) ("Unlike the claims in Ultramercial, the claims at issue here specify how interactions with the Internet are manipulated to yield a desired result--a result that overrides the routine and conventional sequence of events ordinarily triggered by the click of a hyperlink." (emphasis added)); and storing and retrieving information in memory, Versata Dev. Group, Inc. v. SAP Am., Inc., 793 F.3d 1306, 1334, 115 USPQ2d 1681, 1701 (Fed. Cir. 2015); OIP Techs., 788 F.3d at 1363, 115 USPQ2d at 1092-93. These types of claimed elements cannot integrate the judicial exception into a practical application at Step 2A or provide an inventive concept at Step 2B. Thus, considering the additional elements individually and in combination, and the claim as a whole, the additional elements do not provide significantly more than the abstract idea. This claim is not patent eligible.

Claim 8: Does the claim fall within a statutory category? Yes. Step 2A Prong 1: Evaluate whether the claim recites a judicial exception. The claim recites the abstract idea identified for claim 6. Step 2A Prong 2: Evaluate whether the claim as a whole integrates the recited judicial exception into a practical application of the exception. The preamble is deemed insufficient to transform the judicial exception into a patent-eligible invention because it generally links the use of the judicial exception to a particular technological environment or field of use; see MPEP § 2106.05(h). wherein the stochastic noise layer is applied to input into a layer of the machine learning model. (Deemed insufficient because the recitation merely includes instructions to implement an abstract idea on a computer, or merely uses a computer as a tool to perform the abstract idea; the limitation thus amounts to mere instructions to apply the judicial exception using a computer/computing environment as a tool, as discussed in MPEP § 2106.05(f).) From claim 2: obtaining, by a computer system, a machine learning model; obtaining, by the computer system, a training data set; (Deemed insufficient because the recitation is directed to insignificant extra-solution activity, e.g., receiving or transmitting data over a network.) From claim 2: training, by the computer system, an obfuscation transform based on the machine learning model and the training data set by self-supervision; (Deemed insufficient because the recitation amounts to mere instructions to apply the judicial exception using a computer as a tool; see MPEP § 2106.05(f).) From claim 2: and storing, with the computer system, the trained obfuscation transform in memory. (Deemed insufficient because the recitation is directed to insignificant extra-solution activity, e.g., storing and retrieving information in memory.) The additional elements do not appear to be sufficient to integrate the judicial exception into a practical application at Step 2A, as analyzed above. Step 2B: For the reasons stated above and on the authority cited there (Symantec; TLI Communications; OIP Techs.; buySAFE; cf. DDR Holdings; Versata), the additional elements, considered individually and in combination, are well-understood, routine, and conventional and do not provide significantly more than the abstract idea. This claim is not patent eligible.

Claim 9: Does the claim fall within a statutory category? Yes. Step 2A Prong 1: Evaluate whether the claim recites a judicial exception. The claim recites the abstract idea identified for claim 8. Step 2A Prong 2: Evaluate whether the claim as a whole integrates the recited judicial exception into a practical application of the exception. The preamble is deemed insufficient for the reasons given above; see MPEP § 2106.05(h). wherein the stochastic noise layer is applied to embedded values within the machine learning model. (Deemed insufficient because the recitation amounts to mere instructions to apply the judicial exception using a computer as a tool; see MPEP § 2106.05(f).) From claim 2: the obtaining steps (insignificant extra-solution activity, e.g., receiving or transmitting data over a network), the training step (mere instructions to apply the exception using a computer as a tool, MPEP § 2106.05(f)), and the storing step (insignificant extra-solution activity, e.g., storing and retrieving information in memory) are analyzed as for claim 8 above. The additional elements do not appear to be sufficient to integrate the judicial exception into a practical application at Step 2A. Step 2B: For the reasons stated above and on the authority cited there, the additional elements, considered individually and in combination, do not provide significantly more than the abstract idea. This claim is not patent eligible.
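For technical context on claims 8 and 9: a stochastic noise layer of the kind recited can sit in front of any layer of the model (claim 8) or act on embedded values inside the model (claim 9). The sketch below is a minimal, hypothetical reading in PyTorch; the class name, the learnable log-sigma parameterization, and all shapes are illustrative assumptions, not the applicant's disclosed implementation.

import torch
import torch.nn as nn

class StochasticNoiseLayer(nn.Module):
    """Adds zero-mean Gaussian noise with a learnable per-feature scale."""
    def __init__(self, dim: int):
        super().__init__()
        self.log_sigma = nn.Parameter(torch.zeros(dim))  # learned noise scale

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return x + torch.randn_like(x) * self.log_sigma.exp()

noise = StochasticNoiseLayer(dim=128)

# Claim 8 reading: noise applied to input into a layer of the model.
layer = nn.Linear(128, 64)
activations = torch.randn(4, 128)            # stand-in layer input
out = layer(noise(activations))

# Claim 9 reading: the same module applied to embedded values within the model.
embed = nn.Embedding(1000, 128)
tokens = torch.randint(0, 1000, (4, 16))
noised_embeddings = noise(embed(tokens))

Both readings reuse the same module; the claims differ only in where the noise is injected in the forward pass.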
Claim 10: Does the claim fall within a statutory category? Yes. Step 2A Prong 1: Evaluate whether the claim recites a judicial exception. Step 2A Prong 2: Evaluate whether the claim as a whole integrates the recited judicial exception into a practical application of the exception. The preamble is deemed insufficient to transform the judicial exception into a patent-eligible invention because it generally links the use of the judicial exception to a particular technological environment or field of use; see MPEP § 2106.05(h). wherein the trained obfuscation transform is configured to obfuscate data... (Deemed insufficient because the recitation merely includes instructions to implement an abstract idea on a computer, or merely uses a computer as a tool to perform the abstract idea; see MPEP § 2106.05(f).) From claim 2: the obtaining, training, and storing steps are analyzed as for claim 8 above. The additional elements do not appear to be sufficient to integrate the judicial exception into a practical application at Step 2A. Step 2B: For the reasons stated above and on the authority cited there, the additional elements, considered individually and in combination, do not provide significantly more than the abstract idea. This claim is not patent eligible.

Claim 11: Does the claim fall within a statutory category? Yes. Step 2A Prong 1: Evaluate whether the claim recites a judicial exception. (Considered directed to mathematical concepts: mathematical relationships, mathematical formulas or equations, or mathematical calculations; see MPEP § 2106.04(a)(2), subsection I.) Step 2A Prong 2: Evaluate whether the claim as a whole integrates the recited judicial exception into a practical application of the exception. The preamble is deemed insufficient to transform the judicial exception into a patent-eligible invention because it generally links the use of the judicial exception to a particular technological environment or field of use; see MPEP § 2106.05(h). wherein the machine learning model is an ensemble model; the machine learning model comprises an image-based model, language-based model, or tabular-data-based model; the machine learning model is at least one of an inference model, a classification model, a prediction model, or a transformer; (Deemed insufficient because these elements generally link the use of the judicial exception to a particular technological environment or field of use; see MPEP § 2106.05(h).) the obfuscation transform is applied to at least a portion of the ensemble model; and the obfuscation transform is trained by optimization of an objective function, (Deemed insufficient because the recitation amounts to mere instructions to apply the judicial exception using a computer as a tool; see MPEP § 2106.05(f).) From claim 2: the obtaining, training, and storing steps are analyzed as for claim 8 above. The additional elements do not appear to be sufficient to integrate the judicial exception into a practical application at Step 2A. Step 2B: For the reasons stated above and on the authority cited there, the additional elements, considered individually and in combination, do not provide significantly more than the abstract idea. This claim is not patent eligible.
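Claim 11's recitation that the obfuscation transform is trained by optimization of an objective function, combined with claim 2's self-supervised training and storing steps, suggests a gradient-based loop. The following sketch is one plausible reading under stated assumptions (PyTorch; the fidelity-plus-noise objective is invented here for illustration and is not the claimed objective function):

import torch
import torch.nn as nn
import torch.nn.functional as F

# The "obtained" machine learning model stays frozen during transform training.
model = nn.Sequential(nn.Linear(128, 64), nn.ReLU(), nn.Linear(64, 10))
for p in model.parameters():
    p.requires_grad_(False)

# Parameters of the obfuscation transform (additive Gaussian noise scale).
log_sigma = torch.zeros(128, requires_grad=True)

def obfuscate(x: torch.Tensor) -> torch.Tensor:
    return x + torch.randn_like(x) * log_sigma.exp()

opt = torch.optim.Adam([log_sigma], lr=1e-3)
for step in range(200):
    x = torch.randn(32, 128)        # stand-in training data set
    with torch.no_grad():
        target = model(x)           # self-supervision: the model's own outputs
    # Differentiable objective: preserve the model's outputs on obfuscated
    # records while encouraging a larger noise scale.
    loss = F.mse_loss(model(obfuscate(x)), target) - 1e-2 * log_sigma.mean()
    opt.zero_grad()
    loss.backward()
    opt.step()

torch.save(log_sigma, "obfuscation_transform.pt")  # storing the trained transform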
Claim 14: Does the claim fall within a statutory category? Yes. Step 2A Prong 1: Evaluate whether the claim recites a judicial exception. further comprising applying the stored obfuscation transform to a set of production data. (Considered directed to mathematical concepts: mathematical relationships, mathematical formulas or equations, or mathematical calculations (see MPEP § 2106.04(a)(2), subsection I); this limitation requires specific mathematical calculations by referring to the calculations by name, i.e., the obfuscation transform, and therefore recites a judicial exception, namely an abstract idea; see the August 2025 USPTO Memorandum.) Step 2A Prong 2: Evaluate whether the claim as a whole integrates the recited judicial exception into a practical application of the exception. The preamble is deemed insufficient to transform the judicial exception into a patent-eligible invention because it generally links the use of the judicial exception to a particular technological environment or field of use; see MPEP § 2106.05(h). From claim 2: the obtaining, training, and storing steps are analyzed as for claim 8 above. The additional elements do not appear to be sufficient to integrate the judicial exception into a practical application at Step 2A. Step 2B: For the reasons stated above and on the authority cited there, the additional elements, considered individually and in combination, do not provide significantly more than the abstract idea. This claim is not patent eligible.

Claim 15: Does the claim fall within a statutory category? Yes. Step 2A Prong 1: Evaluate whether the claim recites a judicial exception. wherein the stored obfuscation transform is applied to the set of production data to generate obfuscated data (Considered directed to mathematical concepts, as for claim 14; see MPEP § 2106.04(a)(2), subsection I, and the August 2025 USPTO Memorandum.) Step 2A Prong 2: Evaluate whether the claim as a whole integrates the recited judicial exception into a practical application of the exception. The preamble is deemed insufficient to transform the judicial exception into a patent-eligible invention because it generally links the use of the judicial exception to a particular technological environment or field of use; see MPEP § 2106.05(h). and wherein the obfuscated data is input into the machine learning model. (Deemed insufficient because the recitation amounts to mere instructions to apply the judicial exception using a computer as a tool, see MPEP § 2106.05(f); alternatively, because the recitation is directed to insignificant extra-solution activity, e.g., receiving or transmitting data over a network.) From claim 2: the obtaining, training, and storing steps are analyzed as for claim 8 above. The additional elements do not appear to be sufficient to integrate the judicial exception into a practical application at Step 2A. Step 2B: For the reasons stated above and on the authority cited there, the additional elements, considered individually and in combination, do not provide significantly more than the abstract idea. This claim is not patent eligible.
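Claims 14 through 16 cover the deployment side: the stored transform is loaded, applied to production data, and (per claim 16) applied before the data is transmitted to the model. A hypothetical sketch, continuing the additive-noise reading above; the file name, model, and shapes are assumptions:

import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(128, 64), nn.ReLU(), nn.Linear(64, 10))  # stand-in
log_sigma = torch.load("obfuscation_transform.pt")   # the stored transform

def obfuscate(x: torch.Tensor) -> torch.Tensor:
    return x + torch.randn_like(x) * log_sigma.exp()

production = torch.randn(8, 128)     # stand-in set of production data (claim 14)
obfuscated = obfuscate(production)   # generate obfuscated data (claim 15)
preds = model(obfuscated)            # obfuscated data input into the model (claim 15)
# Claim 16 ordering: obfuscate() runs before anything is transmitted, so only
# obfuscated records ever leave the data owner's side; claim 17 reverses this.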
Claim 16: Does the claim fall within a statutory category? Yes. Step 2A Prong 1: Evaluate whether the claim recites a judicial exception. wherein the stored obfuscation transform is applied to the set of production data before the set of production data is transmitted to the machine learning model. (Considered directed to mathematical concepts, as for claim 14; see MPEP § 2106.04(a)(2), subsection I, and the August 2025 USPTO Memorandum.) Step 2A Prong 2: Evaluate whether the claim as a whole integrates the recited judicial exception into a practical application of the exception. The preamble is deemed insufficient to transform the judicial exception into a patent-eligible invention because it generally links the use of the judicial exception to a particular technological environment or field of use; see MPEP § 2106.05(h). applied to the set of production data before the set of production data is transmitted to the machine learning model. (Deemed insufficient because the recitation amounts to mere instructions to apply the judicial exception using a computer as a tool; see MPEP § 2106.05(f).) the set of production data is transmitted to the machine learning model (Deemed insufficient because the recitation is directed to insignificant extra-solution activity, e.g., receiving or transmitting data over a network.) From claim 2: the obtaining, training, and storing steps are analyzed as for claim 8 above. The additional elements do not appear to be sufficient to integrate the judicial exception into a practical application at Step 2A. Step 2B: For the reasons stated above and on the authority cited there, the additional elements, considered individually and in combination, do not provide significantly more than the abstract idea. This claim is not patent eligible.

Claim 17: Does the claim fall within a statutory category? Yes. Step 2A Prong 1: Evaluate whether the claim recites a judicial exception. wherein the stored obfuscation transform is applied to the set of production data after the production data is transmitted to the machine learning model. (Considered directed to mathematical concepts, as for claim 14; see MPEP § 2106.04(a)(2), subsection I, and the August 2025 USPTO Memorandum.) Step 2A Prong 2: Evaluate whether the claim as a whole integrates the recited judicial exception into a practical application of the exception. The preamble is deemed insufficient to transform the judicial exception into a patent-eligible invention because it generally links the use of the judicial exception to a particular technological environment or field of use; see MPEP § 2106.05(h). applied to the set of production data after the production data is transmitted to the machine learning model. (Deemed insufficient because the recitation amounts to mere instructions to apply the judicial exception using a computer as a tool; see MPEP § 2106.05(f).) the production data is transmitted to the machine learning model (Deemed insufficient because the recitation is directed to insignificant extra-solution activity, e.g., receiving or transmitting data over a network.) From claim 2: the obtaining, training, and storing steps are analyzed as for claim 8 above. The additional elements do not appear to be sufficient to integrate the judicial exception into a practical application at Step 2A. Step 2B: For the reasons stated above and on the authority cited there, the additional elements, considered individually and in combination, do not provide significantly more than the abstract idea. This claim is not patent eligible.
Claim 18: Does the claim fall within a statutory category? Yes. Step 2A Prong 1: Evaluate whether the claim recites a judicial exception. further comprising steps for deploying the obfuscation transform to a production dataset. (Considered directed to mathematical concepts, as for claim 14; see MPEP § 2106.04(a)(2), subsection I, and the August 2025 USPTO Memorandum.) Step 2A Prong 2: Evaluate whether the claim as a whole integrates the recited judicial exception into a practical application of the exception. The preamble is deemed insufficient to transform the judicial exception into a patent-eligible invention because it generally links the use of the judicial exception to a particular technological environment or field of use; see MPEP § 2106.05(h). From claim 2: the obtaining, training, and storing steps are analyzed as for claim 8 above. The additional elements do not appear to be sufficient to integrate the judicial exception into a practical application at Step 2A. Step 2B: For the reasons stated above and on the authority cited there, the additional elements, considered individually and in combination, do not provide significantly more than the abstract idea. This claim is not patent eligible.

Claim 19: Does the claim fall within a statutory category? Yes. Step 2A Prong 1: Evaluate whether the claim recites a judicial exception. further comprising steps for obfuscating a data set based on the obfuscation transform. (Considered directed to mathematical concepts, as for claim 14; see MPEP § 2106.04(a)(2), subsection I, and the August 2025 USPTO Memorandum.) Step 2A Prong 2: Evaluate whether the claim as a whole integrates the recited judicial exception into a practical application of the exception. The preamble is deemed insufficient to transform the judicial exception into a patent-eligible invention because it generally links the use of the judicial exception to a particular technological environment or field of use; see MPEP § 2106.05(h). From claim 2: the obtaining, training, and storing steps are analyzed as for claim 8 above. The additional elements do not appear to be sufficient to integrate the judicial exception into a practical application at Step 2A. Step 2B: For the reasons stated above and on the authority cited there, the additional elements, considered individually and in combination, do not provide significantly more than the abstract idea. This claim is not patent eligible.

Therefore, claims 1-20 are rejected under 35 U.S.C. 101 because the claimed invention is directed to a judicial exception and does not recite, when the claim elements are examined individually and as a whole, elements that the courts have identified as "significantly more."

Claim Rejections - 35 USC § 102

The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action: A person shall be entitled to a patent unless – (a)(2) the claimed invention was described in a patent issued under section 151, or in an application for patent published or deemed published under section 122(b), in which the patent or application, as the case may be, names another inventor and was effectively filed before the effective filing date of the claimed invention.

Claims 2-5 and 12-20 are rejected under 35 U.S.C. 102(a)(2) as being anticipated by Weggenmann et al. (US Pub. No. 2023/0185962, hereinafter "Weg").

Regarding the limitations of independent claim 2, Weg teaches: a tangible, non-transitory, machine-readable medium storing instructions that when executed by one or more processors effectuate operations comprising: (in 0064-0069: … A computer program can be deployed to be executed on one computer or on multiple computers at one site or distributed across multiple sites and interconnected by a communication network… The instructions 524 may also reside, completely or at least partially, within the main memory 504 and/or within the processor 502 during execution thereof by the computer system 500, the main memory 504 and the processor 502 also constituting machine-readable media….
The term "machine-readable medium" shall also be taken to include any tangible medium that is capable of storing, encoding or carrying instructions for execution by the machine and that cause the machine to perform any one or more of the meth­odologies of the present embodiments [claimed a tangible, non-transitory, machine-readable medium storing instructions that when executed by one or more processors effectuate operations comprising], or that is capable of storing, encoding or carrying data structures utilized by or associated with such instructions….) obtaining, by a computer system, a machine learning model; obtaining, by the computer system, a training data set;; (as depicted in Fig. 2 and in 0018-0019: FIG. 2 is a block diagram illustrating an example differential privacy system 200… In training the variational autoencoder [claimed obtaining, by a computer system, a machine learning model], the encoder 212 may encode input data 205 as a probability distribution in a latent space representation 214 of the input data 205 [claimed obtaining, by a computer system, a dataset], ) training, by the computer system, an obfuscation transform based on the machine learning model and the training data set by self-supervision; (in 0019: In some example embodiments, the differential privacy system 200 comprises a variational autoencoder 210. The variational autoencoder 210 may comprise an encoder 212 and a decoder 216. In training the variational autoencoder [claimed training, by the computer system, an obfuscation transform based on the machine learning model], the encoder 212 may encode input data 205 as a probability distribution in a latent space representation 214 of the input data 205, data from the probability distribution of the latent space representation 214 may be sampled, the sampled data may be decoded by the decoder 216, a recon­struction error may be computed, and the reconstruction error may be backpropagated through the variational enco­der 210. The encoder 212 and decoder 216 may be trained jointly such that output data 225 [claimed training, by the computer system, an obfuscation transform based on the machine learning model and the training data set by self-supervision] of the variational autoen­coder 210 minimizes a reconstruction error (e.g., minimiz­ing the differences between the original input values and the reconstructed output values) [claimed the training data set by self-supervision]; And the tried model is considered training obfuscate transform as the latent space trained with the model training, in 0011: … some techni­cal effects of the system and method of the present disclo­sure are to provide a computer system that is specially-con­figured to implement a differentially private variational autoencoder for data obfuscation [claimed training, by the computer system, an obfuscation transform based on the machine learning model and the training data set by self-supervision]. The computer system may encode input data into a latent space representation of the input data. The latent space representation may comprise a mean and a standard deviation, and the encoding of the input data may comprise bounding the mean within a finite space and using a global value for the standard deviation, where the global value is independent of the input data. Next, the computer system may obfuscate the latent space representation by applying a noise scaling parameter to the standard deviation of the latent space representation. 
Rather than being the same fixed value for every use case, the noise scaling parameter may be adapted according to the particu­lar privacy requirements of the given situation. The compu­ter system may then sample data from a probability distribu­tion that is based on the mean and the standard deviation of the obfuscated latent space representation. Finally, the com­puter system may decode the sampled data into output data.; And self-supervision as training with unlabeled data (e.g.. unsupervised learning), in 0020: In some example embodiments, the differential privacy system 200 uses unsupervised training [claimed the training data set by self-supervision] to train the variational autoencoder 210, such that the encoder 212 maps the input data 205, x, from a feature space F into some latent space L, and the decoder 216 approximately inverts the encoder 212, thereby transforming latent space representa­tions 214 of the input data 205 back to feature space data. The output of the encoder 212 may be probabilistically given by the mean µ(x) and the variance σ(x), and the input of the decoder 216 may be sampled from the isotropic multivariate Gaussian distribution N(µ(x),σ2(x)).) storing, with the computer system, the trained obfuscation transform in memory. (in in 0023: After having trained the model of the variational autoencoder 210, the differential privacy system 200 may use it to obfuscate data with the desired privacy guarantees as derived above by running it in inference mode [claimed storing, with the computer system, the trained obfuscation transform in memory] with a noise scaling parameter 215, K 2: 1, resulting in an effective standard deviation K· cr that is used for the latent distribution....; Examiner notes running trained machine learning model with noise scaling parameter as claimed stored element (e.g. the trained obfuscation transform) used by the computer memory for executing them (i.e. running the trained model as disclosed)) Regarding claim 3, the rejection of claim 2 is incorporated and Weg further teaches the medium of claim 2, wherein the machine learning model is a generative artificial intelligence (Al) model trained with self-supervision, and the trained obfuscation transform is configured to transform records into obfuscated records that are correctly processed by the machine learning model despite the obfuscation. (in in 0019: In some example embodiments, the differential privacy system 200 comprises a variational autoencoder 210. The variational autoencoder 210 may comprise an encoder 212 and a decoder 216. In training the variational autoencoder [claimed wherein the machine learning model is a generative artificial intelligence (Al) model trained with self-supervision], the encoder 212 may encode input data 205 as a probability distribution in a latent space representation 214 of the input data 205, data from the probability distribution of the latent space representation 214 may be sampled, the sampled data may be decoded by the decoder 216, a recon­struction error may be computed, and the reconstruction error may be backpropagated through the variational enco­der 210. 
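For orientation on the anticipation mapping: the mechanism the examiner attributes to Weg (0011, 0019-0023) is a variational autoencoder whose latent mean is bounded within a finite space, whose standard deviation is a global, input-independent value, and whose inference-time noise is amplified by a scaling parameter κ ≥ 1. The sketch below merely illustrates that description and is not Weg's actual code; the layer sizes, the tanh bound, and the MSE reconstruction loss are assumptions:

import torch
import torch.nn as nn
import torch.nn.functional as F

class DPVAE(nn.Module):
    """Encoder/decoder with a bounded mean and a global, input-independent std."""
    def __init__(self, d_in: int = 128, d_latent: int = 16, sigma: float = 0.5):
        super().__init__()
        self.enc = nn.Linear(d_in, d_latent)
        self.dec = nn.Linear(d_latent, d_in)
        self.sigma = sigma                       # global sigma, independent of x

    def forward(self, x: torch.Tensor, kappa: float = 1.0) -> torch.Tensor:
        mu = torch.tanh(self.enc(x))             # bound the mean in a finite space
        std = kappa * self.sigma                 # effective std kappa * sigma
        z = mu + std * torch.randn_like(mu)      # sample from N(mu, std^2)
        return self.dec(z)

vae = DPVAE()
opt = torch.optim.Adam(vae.parameters(), lr=1e-3)
for step in range(200):                          # unsupervised training
    x = torch.randn(32, 128)                     # stand-in unlabeled input data
    loss = F.mse_loss(vae(x), x)                 # reconstruction error ...
    opt.zero_grad()
    loss.backward()                              # ... backpropagated through the VAE
    opt.step()

obfuscated = vae(torch.randn(8, 128), kappa=4.0) # inference mode with kappa >= 1

A higher kappa adds more latent noise, so the decoded output is less similar to the corresponding input, matching the κ ≥ 1 behavior described in the quoted passages.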
The encoder 212 and decoder 216 may be trained jointly such that output data 225 [and the trained obfuscation transform is configured to transform records into obfuscated records that are correctly processed by the machine learning model despite the obfuscation] of the variational autoen­coder 210 minimizes a reconstruction error (e.g., minimiz­ing the differences between the original input values and the reconstructed output values); And the tried model is considered training obfuscate transform as the latent space trained with the model training, in 0011: … some techni­cal effects of the system and method of the present disclo­sure are to provide a computer system that is specially-con­figured to implement a differentially private variational autoencoder for data obfuscation [claimed wherein the machine learning model is a generative artificial intelligence (Al) model trained with self-supervision, and the trained obfuscation transform is configured to transform records into obfuscated records that are correctly processed by the machine learning model despite the obfuscation]. The computer system may encode input data into a latent space representation of the input data. The latent space representation may comprise a mean and a standard deviation, and the encoding of the input data may comprise bounding the mean within a finite space and using a global value for the standard deviation, where the global value is independent of the input data. Next, the computer system may obfuscate the latent space representation by applying a noise scaling parameter to the standard deviation of the latent space representation. Rather than being the same fixed value for every use case, the noise scaling parameter may be adapted according to the particu­lar privacy requirements of the given situation. The compu­ter system may then sample data from a probability distribu­tion that is based on the mean and the standard deviation of the obfuscated latent space representation. Finally, the com­puter system may decode the sampled data into output data.) Regarding claim 4, the rejection of claim 2 is incorporated and Weg further teaches the medium of claim 2, wherein the machine learning model is a foundation model, where the foundation model is operative to perform a plurality of tasks at inference time with capabilities that emerged during training and were not explicitly measured by an objective function used to train the foundation model. (in 0018-0020: FIG. 2 is a block diagram illustrating an example differential privacy system 200. In some example embodi­ments, the differential privacy system 200 is configured to implement a differential privacy algorithm that shares infor­mation about a dataset by describing the patterns of groups within the dataset while withholding information about enti­ties (e.g., people, organizations) in the dataset [claimed wherein the machine learning model is a foundation model, where the foundation model is operative to perform a plurality of tasks at inference time with capabilities that emerged during training and were not explicitly measured by an objective function used to train the foundation model]. 
For example, the differential privacy algorithm may analyze a dataset and compute statistics about the dataset, such as the data's mean and variance…. In some example embodiments, the differential privacy system 200 comprises a variational autoencoder 210 [claimed wherein the machine learning model is a foundation model, where the foundation model is operative to perform a plurality of tasks at inference time with capabilities that emerged during training and were not explicitly measured by an objective function used to train the foundation model]. The variational autoencoder 210 may comprise an encoder 212 and a decoder 216… In some example embodiments, the differential privacy system 200 uses unsupervised training to train the variational autoencoder 210, such that the encoder 212 maps the input data 205, x, from a feature space F into some latent space L, and the decoder 216 approximately inverts the encoder 212, thereby transforming latent space representations 214 of the input data 205 back to feature space data [claimed where the foundation model is operative to perform a plurality of tasks at inference time with capabilities that emerged during training and were not explicitly measured by an objective function used to train the foundation model]. The output of the encoder 212 may be probabilistically given by the mean µ(x) and the variance σ(x), and the input of the decoder 216 may be sampled from the isotropic multivariate Gaussian distribution N(µ(x), σ²(x)).)
Regarding claim 5, the rejection of claim 2 is incorporated and Weg further teaches the medium of claim 2, wherein training the obfuscation transform comprises: adding an obfuscation transform to at least one of the training data set and the machine learning model; and (in 0040: … For example, the noise may comprise a value that is randomly selected from a Gaussian distribution [claimed wherein training the obfuscation transform comprises: adding an obfuscation transform to at least one of the training data set and the machine learning model; Examiner notes the layer performing the addition of random noise as the claimed obfuscation]. Other types of noise are also within the scope of the present disclosure. In some example embodiments, the encoding of the input data 205 comprises inferring latent space parameters of a latent space distribution based on the input data 205, where the latent space parameters comprise a mean and a standard deviation…; And in 0021: In order to get an obfuscation mechanism [claimed wherein training the obfuscation transform comprises: adding an obfuscation transform to at least one of the training data set and the machine learning model] from the approach discussed above, the differential privacy system 200 may amplify the sampling of Gaussian noise with standard deviation σ by multiplying it with a noise scaling parameter 215, κ ≥ 1, to get an effective variance of the sampling of κ·σ, thereby adding some extra amount of uncertainty (noise) to its output in a controlled fashion such that a higher value of κ leads to data that is less similar to the corresponding input data 205…) adjusting parameters of the obfuscation transform according to an objective function that is differentiable. (in 0040: … The differential privacy system 200 may obfuscate the latent space representation 214 by applying noise to the latent space representation 214. In some example embodiments, the noise comprises Gaussian noise.
For example, the noise may comprise a value that is randomly selected from a Gaussian distribution. Other types of noise are also within the scope of the present disclosure. In some example embodiments, the encoding of the input data 205 comprises inferring latent space parameters of a latent space distribution based on the input data 205, where the latent space parameters comprise a mean and a standard deviation. The inferring of the latent space parameters may comprise bounding the mean within a finite space and using a global value for the standard deviation [claimed adjusting parameters of the obfuscation transform according to an objective function that is differentiable], where the global value is independent of the input data 205…; And in 0019: … The variational autoencoder 210 may comprise an encoder 212 and a decoder 216. In training the variational autoencoder, the encoder 212 may encode input data 205 as a probability distribution in a latent space representation 214 of the input data 205, data from the probability distribution of the latent space representation 214 may be sampled, the sampled data may be decoded by the decoder 216, a reconstruction error may be computed, and the reconstruction error may be backpropagated [claimed according to an objective function that is differentiable] through the variational encoder 210…; And the use of a differentiable function in 0026-0028: In some example embodiments, the bounding of the mean µ = µ(x) and the global standard deviation σ are achieved by modifications to the encoder 212 [claimed adjusting parameters of the obfuscation transform according to an objective function that is differentiable]… The differential privacy system 200 may translate this value to the actual Rényi or differential privacy guarantees as expressed by the parameters (α,ε) or (ε,δ), respectively…. Since (ε,δ) differential privacy guarantees may be derived from Rényi differential privacy (RDP), the differential privacy system 200 may use RDP to optimize the privacy parameters, such as by using the following steps:…)
Regarding claim 12, the rejection of claim 2 is incorporated and Weg further teaches the medium of claim 2, further comprising tuning the machine learning model based on the training data set. (training as the claimed tuning process, in 0019: In some example embodiments, the differential privacy system 200 comprises a variational autoencoder 210. The variational autoencoder 210 may comprise an encoder 212 and a decoder 216. In training the variational autoencoder, the encoder 212 may encode input data 205 [claimed further comprising tuning the machine learning model based on the training data set] as a probability distribution in a latent space representation 214 of the input data 205, data from the probability distribution of the latent space representation 214 may be sampled, the sampled data may be decoded by the decoder 216, a reconstruction error may be computed, and the reconstruction error may be backpropagated through the variational encoder 210. The encoder 212 and decoder 216 may be trained jointly such that output data 225 [claimed further comprising tuning the machine learning model based on the training data set] of the variational autoencoder 210 minimizes a reconstruction error (e.g., minimizing the differences between the original input values and the reconstructed output values).)
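For context on the mechanism the examiner repeatedly cites across claims 2-12 (Weg 0019-0023): the quoted paragraphs describe a variational autoencoder whose latent Gaussian noise is amplified at inference by a scaling parameter κ ≥ 1. A minimal sketch follows, assuming PyTorch; the class name, dimensions, and loss are illustrative, not the applicant's or Weg's actual implementation:

    import torch
    import torch.nn as nn

    class ObfuscatingVAE(nn.Module):
        """Illustrative only: the encoder emits mu and log-sigma; the decoder
        approximately inverts the encoder (cf. Weg 0019-0020)."""
        def __init__(self, dim_in: int, dim_latent: int):
            super().__init__()
            self.enc = nn.Linear(dim_in, 2 * dim_latent)
            self.dec = nn.Linear(dim_latent, dim_in)

        def forward(self, x: torch.Tensor, kappa: float = 1.0) -> torch.Tensor:
            mu, log_sigma = self.enc(x).chunk(2, dim=-1)
            sigma = log_sigma.exp()
            # Reparameterized sample from N(mu, (kappa*sigma)^2); kappa = 1 during
            # training, kappa >= 1 at inference adds controlled extra noise (Weg 0021).
            z = mu + kappa * sigma * torch.randn_like(mu)
            return self.dec(z)

    model = ObfuscatingVAE(dim_in=32, dim_latent=8)
    x = torch.randn(16, 32)
    loss = nn.functional.mse_loss(model(x), x)  # reconstruction error (Weg 0019)
    loss.backward()                             # backpropagated through encoder and decoder
    with torch.no_grad():
        obfuscated = model(x, kappa=4.0)        # inference mode with amplified noise (Weg 0023)

On the (α,ε)/(ε,δ) translation mentioned in the claim 5 citation: per the Mironov paper that Weg itself cites, a mechanism satisfying (α, ε)-Rényi differential privacy also satisfies (ε + log(1/δ)/(α−1), δ)-differential privacy, which is the conversion paragraphs 0026-0028 allude to.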
Regarding claim 13, the rejection of claim 2 is incorporated and Weg further teaches the medium of claim 2, further comprising deploying the tuned machine learning model. (in 0023: After having trained the model of the variational autoencoder 210, the differential privacy system 200 may use it to obfuscate data with the desired privacy guarantees as derived above by running [claimed further comprising deploying the tuned machine learning model] it in inference mode with a noise scaling parameter 215, κ ≥ 1, resulting in an effective standard deviation κ·σ that is used for the latent distribution…)
Regarding claim 14, the rejection of claim 2 is incorporated and Weg further teaches the medium of claim 2, further comprising applying the stored obfuscation transform to a set of production data. (in 0023: After having trained the model of the variational autoencoder 210, the differential privacy system 200 may use it to obfuscate data with the desired privacy guarantees as derived above by running [claimed further comprising applying the stored obfuscation transform to a set of production data] it in inference mode with a noise scaling parameter 215, κ ≥ 1, resulting in an effective standard deviation κ·σ that is used for the latent distribution… Feeding original data as input data 205 into the trained variational autoencoder 210, the trained variational autoencoder 210 may translate each input data 205 [claimed further comprising applying the stored obfuscation transform to a set of production data] into an obfuscated output data 225 that may subsequently be used as a surrogate for the original data…; Examiner notes that the stored model, when run, requires data from which to make inferences, e.g., the sensor data in 0023.)
Regarding claim 15, the rejection of claim 14 is incorporated and Weg further teaches the medium of claim 14, wherein the stored obfuscation transform is applied to the set of production data to generate obfuscated data and wherein the obfuscated data is input into the machine learning model. (in 0023: After having trained the model of the variational autoencoder 210, the differential privacy system 200 may use it to obfuscate data with the desired privacy guarantees as derived above by running [claimed further comprising applying the stored obfuscation transform to a set of production data] it in inference mode with a noise scaling parameter 215, κ ≥ 1, resulting in an effective standard deviation κ·σ that is used for the latent distribution… Feeding original data as input data 205 into the trained variational autoencoder 210, the trained variational autoencoder 210 may translate each input data 205 [claimed wherein the stored obfuscation transform is applied to the set of production data to generate obfuscated data] into an obfuscated output data 225 that may subsequently be used as a surrogate for the original data [claimed and wherein the obfuscated data is input into the machine learning model].)
Regarding claim 16, the rejection of claim 15 is incorporated and Weg further teaches the medium of claim 15, wherein the stored obfuscation transform is applied to the set of production data before the set of production data is transmitted to the machine learning model. (input to a model is considered the claimed transmission for the stored transform, captured using the noise distribution, in 0011: … The computer system may encode input data into a latent space representation of the input data.
The latent space representation may comprise a mean and a standard deviation, and the encoding of the input data may comprise bounding the mean within a finite space and using a global value for the standard deviation, where the global value is independent of the input data. Next, the computer system may obfuscate the latent space representation [claimed wherein the stored obfuscation transform is applied to the set of production data before the set of production data is transmitted to the machine learning model] by applying a noise scaling parameter to the standard deviation of the latent space representation. Rather than being the same fixed value for every use case, the noise scaling parameter may be adapted according to the particular privacy requirements of the given situation. The computer system may then sample data from a probability distribution that is based on the mean and the standard deviation of the obfuscated latent space representation [claimed wherein the stored obfuscation transform is applied to the set of production data before the set of production data is transmitted to the machine learning model]. Finally, the computer system may decode the sampled data into output data…; Examiner notes the claimed stored transform is applied before the sampled input data (e.g., the claimed production data) is applied to the decoder.)
Regarding claim 17, the rejection of claim 15 is incorporated and Weg further teaches the medium of claim 15, wherein the stored obfuscation transform is applied to the set of production data after the production data is transmitted to the machine learning model. (in 0023: After having trained the model of the variational autoencoder 210, the differential privacy system 200 may use it to obfuscate data with the desired privacy guarantees as derived above by running [claimed wherein the stored obfuscation transform is applied to the set of production data after the production data is transmitted to the machine learning model] it in inference mode with a noise scaling parameter 215, κ ≥ 1, resulting in an effective standard deviation κ·σ that is used for the latent distribution… Feeding original data as input data 205 into the trained variational autoencoder 210, the trained variational autoencoder 210 [claimed wherein the stored obfuscation transform is applied to the set of production data after] may translate each input data 205 [claimed wherein the stored obfuscation transform is applied to the set of production data after the production data is transmitted to the machine learning model, after the training is completed] into an obfuscated output data 225 that may subsequently be used as a surrogate for the original data [claimed … the production data is transmitted to the machine learning model].)
Regarding claim 18, the rejection of claim 2 is incorporated and Weg further teaches the medium of claim 2, further comprising steps for deploying the obfuscation transform to a production dataset. (input to a model is considered the claimed transmission for the stored transform, captured using the noise distribution, in 0011: … The computer system may encode input data into a latent space representation of the input data. The latent space representation may comprise a mean and a standard deviation, and the encoding of the input data may comprise bounding the mean within a finite space and using a global value for the standard deviation, where the global value is independent of the input data.
Next, the computer system may obfuscate the latent space representation [claimed steps for deploying the obfuscation transform to a production dataset] by applying a noise scaling parameter to the standard deviation of the latent space representation. Rather than being the same fixed value for every use case, the noise scaling parameter may be adapted according to the particular privacy requirements of the given situation. The computer system may then sample data from a probability distribution that is based on the mean and the standard deviation of the obfuscated latent space representation [claimed steps for deploying the obfuscation transform to a production dataset]. Finally, the computer system may decode the sampled data into output data…; Examiner notes the claimed stored transform is applied before the sampled input data (e.g., the claimed production data) is applied to the decoder, as the claimed steps for deploying.)
Regarding claim 19, the rejection of claim 2 is incorporated and Weg further teaches the medium of claim 2, further comprising steps for obfuscating a data set based on the obfuscation transform. (input to a model is considered the claimed transmission for the stored transform, captured using the noise distribution, in 0011: … The computer system may encode input data into a latent space representation of the input data. The latent space representation may comprise a mean and a standard deviation, and the encoding of the input data may comprise bounding the mean within a finite space and using a global value for the standard deviation, where the global value is independent of the input data. Next, the computer system may obfuscate the latent space representation [claimed further comprising steps for obfuscating a data set based on the obfuscation transform] by applying a noise scaling parameter to the standard deviation of the latent space representation. Rather than being the same fixed value for every use case, the noise scaling parameter may be adapted according to the particular privacy requirements of the given situation. The computer system may then sample data from a probability distribution that is based on the mean and the standard deviation of the obfuscated latent space representation [claimed further comprising steps for obfuscating a data set based on the obfuscation transform]. Finally, the computer system may decode the sampled data into output data…; And in 0023: After having trained the model of the variational autoencoder 210, the differential privacy system 200 may use it to obfuscate data with the desired privacy guarantees as derived above by running [claimed wherein the stored obfuscation transform is applied to the set of production data after the production data is transmitted to the machine learning model] it in inference mode with a noise scaling parameter 215, κ ≥ 1, resulting in an effective standard deviation κ·σ that is used for the latent distribution… Feeding original data as input data 205 into the trained variational autoencoder 210, the trained variational autoencoder 210 [claimed comprising steps for obfuscating a data set based on the obfuscation transform] may translate each input data 205 into an obfuscated output data 225 that may subsequently be used as a surrogate for the original data.)
Regarding independent claim 20 limitations, Weg teaches: a method comprising: obtaining, with a computer system, a machine learning model; obtaining, with the computer system, a training data set; (as depicted in Fig. 2 and in 0018-0019: FIG.
2 is a block diagram illustrating an example differential privacy system 200… In training the variational autoencoder [claimed obtaining, with a computer system, a machine learning model], the encoder 212 may encode input data 205 as a probability distribution in a latent space representation 214 of the input data 205 [claimed obtaining, with the computer system, a training data set]) training, with the computer system, an obfuscation transform based on the machine learning model and the training data set; and (in 0019: In some example embodiments, the differential privacy system 200 comprises a variational autoencoder 210. The variational autoencoder 210 may comprise an encoder 212 and a decoder 216. In training the variational autoencoder [claimed training, with the computer system, an obfuscation transform based on the machine learning model and the training data set], the encoder 212 may encode input data 205 as a probability distribution in a latent space representation 214 of the input data 205, data from the probability distribution of the latent space representation 214 may be sampled, the sampled data may be decoded by the decoder 216, a reconstruction error may be computed, and the reconstruction error may be backpropagated through the variational encoder 210. The encoder 212 and decoder 216 may be trained jointly such that output data 225 [claimed training, with the computer system, an obfuscation transform based on the machine learning model and the training data set] of the variational autoencoder 210 minimizes a reconstruction error (e.g., minimizing the differences between the original input values and the reconstructed output values)) storing, with the computer system, the trained obfuscation transform in memory. (in 0023: After having trained the model of the variational autoencoder 210, the differential privacy system 200 may use it to obfuscate data with the desired privacy guarantees as derived above by running it in inference mode [claimed storing, with the computer system, the trained obfuscation transform in memory] with a noise scaling parameter 215, κ ≥ 1, resulting in an effective standard deviation κ·σ that is used for the latent distribution…; Examiner notes that running the trained machine learning model with the noise scaling parameter reads on the claimed stored element (e.g., the trained obfuscation transform) held in computer memory for execution (i.e., running the trained model as disclosed).) Examiner notes the claimed computer system for performing the recited method is disclosed by Weg in 0064-0069: … A computer program can be deployed to be executed on one computer or on multiple computers [claimed computer system] at one site or distributed across multiple sites and interconnected by a communication network… The instructions 524 may also reside, completely or at least partially, within the main memory 504 and/or within the processor 502 during execution thereof by the computer system 500, the main memory 504 and the processor 502 also constituting machine-readable media…. The term "machine-readable medium" shall also be taken to include any tangible medium that is capable of storing, encoding or carrying instructions for execution by the machine and that cause the machine to perform any one or more of the methodologies of the present embodiments [claimed computer system], or that is capable of storing, encoding or carrying data structures utilized by or associated with such instructions….)
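The claim 20 mapping follows the same obtain/train/store sequence. Here is a hedged sketch of that flow, reusing the ObfuscatingVAE class from the sketch above; the file name and loop details are hypothetical, and persisting to disk stands in for the claimed storing "in memory":

    import torch

    model = ObfuscatingVAE(dim_in=32, dim_latent=8)                 # obtain a machine learning model
    dataset = torch.utils.data.TensorDataset(torch.randn(256, 32))  # obtain a training data set
    opt = torch.optim.Adam(model.parameters())
    for (batch,) in torch.utils.data.DataLoader(dataset, batch_size=16):
        opt.zero_grad()
        torch.nn.functional.mse_loss(model(batch), batch).backward()  # train the obfuscation transform
        opt.step()
    torch.save(model.state_dict(), "obfuscation_transform.pt")      # store the trained transform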
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claim 1 is rejected under 35 U.S.C. 103 as being unpatentable over Weggenmann et al. (US 20230185962, hereinafter ‘Weg’) in view of Chaudhury et al. (US 20190318040, hereinafter ‘Chau’).
Regarding independent claim 1, Weg teaches a tangible, non-transitory, machine-readable medium storing instructions that when executed by one or more processors effectuate operations comprising: (in 0064-0069: … A computer program can be deployed to be executed on one computer or on multiple computers at one site or distributed across multiple sites and interconnected by a communication network… The instructions 524 may also reside, completely or at least partially, within the main memory 504 and/or within the processor 502 during execution thereof by the computer system 500, the main memory 504 and the processor 502 also constituting machine-readable media…. The term "machine-readable medium" shall also be taken to include any tangible medium that is capable of storing, encoding or carrying instructions for execution by the machine and that cause the machine to perform any one or more of the methodologies of the present embodiments [claimed a tangible, non-transitory, machine-readable medium storing instructions that when executed by one or more processors effectuate operations comprising], or that is capable of storing, encoding or carrying data structures utilized by or associated with such instructions….) obtaining, by a computer system, a dataset; training, with the computer system, one or more machine learning models as an autoencoder to generate as output a reconstruction of a record in the dataset based on an input of the record in the dataset, wherein the autoencoder comprises a deterministic layer and wherein training is based on optimization of a value indicative of reconstruction loss; ([0019] In some example embodiments, the differential privacy system 200 comprises a variational autoencoder 210. The variational autoencoder 210 may comprise an encoder 212 and a decoder 216. In training the variational autoencoder [training, with the computer system, one or more machine learning models as an autoencoder], the encoder 212 may encode input data 205 [a dataset] as a probability distribution in a latent space representation 214 of the input data 205, data from the probability distribution of the latent space representation 214 may be sampled, the sampled data may be decoded by the decoder 216, a reconstruction error may be computed [wherein the autoencoder comprises a deterministic layer and wherein training is based on optimization of a value indicative of reconstruction loss], and the reconstruction error may be backpropagated [based on optimization of a value indicative of reconstruction loss] through the variational encoder 210.
The encoder 212 and decoder 216 may be trained jointly such that output data 225 of the variational autoencoder 210 minimizes a reconstruction error (e.g., minimizing the differences between the original input values and the reconstructed output values) [training, with the computer system, one or more machine learning models as an autoencoder to generate as output a reconstruction of a record in the dataset based on an input of the record in the dataset, wherein the autoencoder comprises a deterministic layer and wherein training is based on optimization of a value indicative of reconstruction loss].) adding, with the computer system, one or more stochastic noise layers to the trained one or more machine learning models of the autoencoder; (in [0021] In order to get an obfuscation mechanism from the approach discussed above, the differential privacy system 200 may amplify the sampling of Gaussian noise with standard deviation σ by multiplying it with a noise scaling parameter 215 [adding, with the computer system, one or more stochastic noise layers to the trained one or more machine learning models of the autoencoder], κ ≥ 1, to get an effective variance of the sampling of κ·σ, thereby adding some extra amount of uncertainty (noise) to its output in a controlled fashion such that a higher value of κ leads to data that is less similar to the corresponding input data 205.) adjusting, with the computer system, parameters of the stochastic noise layers according to an objective function that is differentiable; (in [0021] In order to get an obfuscation mechanism from the approach discussed above, the differential privacy system 200 may amplify the sampling of Gaussian noise with standard deviation σ by multiplying it with a noise scaling parameter 215, κ ≥ 1, [adjusting, with the computer system, parameters of the stochastic noise layers according to an objective function that is differentiable] to get an effective variance of the sampling of κ·σ, thereby adding some extra amount of uncertainty (noise) to its output in a controlled fashion such that a higher value of κ leads to data that is less similar to the corresponding input data 205 [adjusting, with the computer system, parameters of the stochastic noise layers according to an objective function that is differentiable]. In some example embodiments, the differential privacy system 200 may analyze the privacy guarantees in terms of Rényi differential privacy (MIRONOV, Ilya, Rényi Differential Privacy; 2017 IEEE 30th Computer Security Foundations Symposium (CSF). IEEE, 2017. S. 263-275)… and storing, with the computer system, the one or more machine learning models of the autoencoder with the stochastic noise layers in memory. (in [0023] After having trained the model of the variational autoencoder 210, the differential privacy system 200 may use it to obfuscate data with the desired privacy guarantees as derived above by running it in inference mode [and storing, with the computer system, the one or more machine learning models of the autoencoder with the stochastic noise layers in memory, stored to make them available in inference mode] with a noise scaling parameter 215, κ ≥ 1, resulting in an effective standard deviation κ·σ that is used for the latent distribution. Feeding original data as input data 205 into the trained variational autoencoder 210, the trained variational autoencoder 210 may translate each input data 205 into an obfuscated output data 225 that may subsequently be used as a surrogate for the original data.)
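The combination the examiner proposes here (Weg's noise scaling plus Chau's noise layer after a pretrained encoder) can be pictured as a stochastic noise module inserted between frozen pretrained components. A hedged sketch under the same PyTorch assumption as above; the module, shapes, and names are illustrative stand-ins, not Chau's architecture 700:

    import torch
    import torch.nn as nn

    class GaussianNoiseLayer(nn.Module):
        """Adds elementwise Gaussian noise, loosely mirroring Chau's added
        noise l_y ~ N(mu_l_y, sigma_user) in para. 0068 (illustrative only)."""
        def __init__(self, sigma: float):
            super().__init__()
            self.sigma = sigma

        def forward(self, x: torch.Tensor) -> torch.Tensor:
            return x + self.sigma * torch.randn_like(x)

    pretrained_encoder = nn.Linear(32, 8)   # stand-in for a pretrained component
    for p in pretrained_encoder.parameters():
        p.requires_grad = False             # pretrained weights stay frozen
    obfuscator = nn.Sequential(
        pretrained_encoder,
        GaussianNoiseLayer(sigma=0.5),      # stochastic noise layer added post hoc
        nn.Linear(8, 32),                   # decoder back to feature space
    )
    out = obfuscator(torch.randn(4, 32))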
While Weg teaches the process for modifying the parameters of a noise layer as Gaussian noise applied to a trained autoencoder model, Chau additionally teaches adding the Gaussian noise layers to pretrained components of the autoencoder, in [0068]: The architecture 700 further includes modality 2 data y^(i) 731, a pre-trained audio or text encoder r_y 732, encoded data y_z^(i) 733, the model mapping embedding to the mean of the Gaussian latent distribution f_y^α 740, the mean of the Gaussian latent distribution μ_l_y 751, added Gaussian noise l_y ~ N(μ_l_y, σ_user) 752 [adding, with the computer system, one or more stochastic noise layers to the trained one or more machine learning models of the autoencoder], the shared latent space l_y 753, the decoder back to the modality 2 embedding f_y′^β 760, the reconstructed audio or text embedding ỹ_z^(i) 771, a pre-trained audio or text decoder g_y 772, and reconstructed modality 2 data Ỹ^(i) 773.
Weg and Chau are analogous art because both involve developing information processing techniques using machine learning systems and algorithms. It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of the prior art for processing information using noise layers with variational mapping between embedding spaces as disclosed by Chau with the method of developing information processing with random noise parameters with a neural network learning model as disclosed by Weg. One of ordinary skill in the art would have been motivated to combine the methods disclosed by Chau and Weg above. Doing so allows cross-modal data generation from separate single data modalities (Chau, [0085]).
Claim 1 is rejected under 35 U.S.C. 103 as being unpatentable over LaTerza et al. (US 12105837, hereinafter ‘LaTe’) in view of Chaudhury et al. (US 20190318040, hereinafter ‘Chau’).
Regarding independent claim 1, LaTe teaches a tangible, non-transitory, machine-readable medium storing instructions that when executed by one or more processors effectuate operations comprising: (in 1:40-49: In one general aspect, the instant disclosure describes a data processing system having a processor and a memory in communication with the processor wherein the memory stores executable instructions that, when executed by the processor, cause the data processing system to perform multiple functions.; And in 15:61-16: The memory/storage 630 may include a main memory 632, a static memory 634, or other memory, and a storage unit 636, both accessible to the processors 610 such as via the bus 602. The storage unit 636 and memory 632, 634 store instructions 616 embodying any one or more of the functions described herein… The term “machine-readable medium,” as used herein, does not encompass transitory electrical or electromagnetic signals per se (such as on a carrier wave propagating through a medium); the term “machine-readable medium” may therefore be considered tangible and non-transitory…) obtaining, by a computer system, a dataset; training, with the computer system, one or more machine learning models as an autoencoder to generate as output a reconstruction of a record in the dataset based on an input of the record in the dataset, wherein the autoencoder comprises a deterministic layer and wherein training is based on optimization of a value indicative of reconstruction loss; (in 7:17-29: FIG.
2 is a diagram depicting elements involved in training a privacy preserving data generation model [obtaining, by a computer system, a dataset; training, with the computer system, one or more machine learning models as an autoencoder to generate as output a reconstruction of a record in the dataset based on an input of the record in the dataset]. In some implementations, the training mechanism 114 used for training the privacy preserving data generation model makes use of a pretrained language model 240. The pretrained language model 240 may be a language model that generates text. For example, the pretrained language model 240 may be a generative model. In an example, the pretrained language model 240 is a GPT-2 model. However, many other types of generative language models may be used. In other implementations, the training mechanism 114 may train the privacy preserving data generation model without the use of a pretrained model...; And in 5:4-11: The privacy preserving data generation model 112 may be an ML model trained [training, with the computer system, one or more machine learning models as an autoencoder to generate as output a reconstruction of a record in the dataset based on an input of the record in the dataset, wherein the autoencoder comprises a deterministic layer and wherein training is based on optimization of a value indicative of reconstruction loss] for generating private synthetic training data from non-private true training data. The privacy preserving data generation model 112 may be trained by the training mechanism 114 [obtaining, by a computer system, a dataset]. The training mechanism 114 may use true training data sets stored in a data store 132 of the storage server 130 to provide initial and/or ongoing training [obtaining, by a computer system, a dataset] for each of the models…) adding, with the computer system, one or more stochastic noise layers to the trained one or more machine learning models of the autoencoder; (in 7:30-39: The pretrained language model 240 may be a model that can incorporate privacy parameters. For example, the pretrained language model 240 may be a model that can receive differential privacy parameters such as privacy parameters 220 as some of its input and generate an output that is likely to ensure privacy of the output data. As is known in the art, differential privacy provides a mathematical assurance of privacy protection by introducing a level of noise in the input data [adding, with the computer system, one or more stochastic noise layers to the trained one or more machine learning models of the autoencoder], such that private input data will not be traceable to its origins… And in 10:40-63: The initial prompt generation unit 350 may be configured to create the initial prompt by utilizing one or more histograms. For example, to generate prompts of K number of tokens (e.g., number of sample words) using true data with a trained privacy preserving data generation model that is (ϵ_s)-Differentially private, a ϵ_s-Differentially private histogram of starting tokens of length K of true data may be created. This means a histogram may be created based on K number of starting words of each data entry, where the histogram ensures that the words that are selected are ϵ_s-Differentially private. To achieve this, an initial histogram that is not differentially private may first be created, before noise is added to the histogram to make it privacy preserving. 
In an example, the noisy histogram is created by adding Laplacian (1/ϵ_s) noise [adding, with the computer system, one or more stochastic noise layers to the trained one or more machine learning models of the autoencoder] to the histogram. The K-length prompts may then be sampled from the histogram to generate the initial prompt…) adjusting, with the computer system, parameters of the stochastic noise layers according to an objective function that is differentiable; (in 10:40-63: The initial prompt generation unit 350 may be configured to create the initial prompt by utilizing one or more histograms. For example, to generate prompts of K number of tokens (e.g., number of sample words) using true data with a trained privacy preserving data generation model that is (ϵ_s)-Differentially private [adjusting, with the computer system, parameters of the stochastic noise layers according to an objective function that is differentiable], a ϵ_s-Differentially private histogram of starting tokens of length K of true data may be created. This means a histogram may be created based on K number of starting words of each data entry, where the histogram ensures that the words that are selected are ϵ_s-Differentially private [adjusting, with the computer system, parameters of the stochastic noise layers according to an objective function that is differentiable]. To achieve this, an initial histogram that is not differentially private may first be created, before noise is added to the histogram to make it privacy preserving. In an example, the noisy histogram is created by adding Laplacian (1/ϵ_s) noise to the histogram [adjusting, with the computer system, parameters of the stochastic noise layers according to an objective function that is differentiable]. The K-length [adjusting, with the computer system, parameters…] prompts may then be sampled from the histogram to generate the initial prompt…) and storing, with the computer system, the one or more machine learning models of the autoencoder with the stochastic noise layers in memory. (in 10:53-58: In an example, the noisy histogram is created by adding Laplacian (1/ϵ_s) noise to the histogram. The K-length prompts may then be sampled from the histogram to generate the initial prompt. In this manner, data is sampled from the true training data 310 [storing, with the computer system, the one or more machine learning models of the autoencoder with the stochastic noise layers in memory] in a privacy preserving manner… And in 11:18-31: Once all of the input parameters are provided to the privacy preserving data generation model 112, the privacy preserving data generation model 112 processes and analyzes the inputs [storing, with the computer system, the one or more machine learning models of the autoencoder with the stochastic noise layers in memory, stored and used to generate output] to generate the synthetic training data 360 as an output. The synthetic training data 360 may include labeled training data that includes a domain type as a parameter associated with the training data. The synthetic training data 360 may include one or more sets of training data that are privacy compliant, resemble the true training data 310 such that it can be used for training a classifier model and can include more data than the original true training data 310. As such, not only does the synthetic training data ensure privacy, it can also increase efficiency and reduce costs associated with training a classifier model.
To ensure that the trained privacy preserving data generation model 112 provides the level of privacy required, a leakage analysis unit 370 may be utilized [storing, with the computer system, the one or more machine learning models of the autoencoder with the stochastic noise layers in memory, stored and used to perform a leakage analysis] to check the level of private data included in the synthetic training data 360. This may involve analyzing the synthetic training data to determine the amount or percentage of private data still present in the synthetic training data. To achieve this, the leakage analysis unit 370 may receive the true training data 310, the synthetic training data 360 and the leakage threshold discussed above, as input…)
Additionally, Chau teaches adding the Gaussian noise layers to pretrained components of the autoencoder, in [0068]: The architecture 700 further includes modality 2 data y^(i) 731, a pre-trained audio or text encoder r_y 732, encoded data y_z^(i) 733, the model mapping embedding to the mean of the Gaussian latent distribution f_y^α 740, the mean of the Gaussian latent distribution μ_l_y 751, added Gaussian noise l_y ~ N(μ_l_y, σ_user) 752 [adding, with the computer system, one or more stochastic noise layers to the trained one or more machine learning models of the autoencoder], the shared latent space l_y 753, the decoder back to the modality 2 embedding f_y′^β 760, the reconstructed audio or text embedding ỹ_z^(i) 771, a pre-trained audio or text decoder g_y 772, and reconstructed modality 2 data Ỹ^(i) 773.
LaTe and Chau are analogous art because both involve developing information processing techniques using machine learning systems and algorithms. It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of the prior art for processing information using noise layers with variational mapping between embedding spaces as disclosed by Chau with the method of developing information processing based on noise parameters with a neural network learning model as disclosed by LaTe. One of ordinary skill in the art would have been motivated to combine the methods disclosed by Chau and LaTe above. Doing so allows cross-modal data generation from separate single data modalities (Chau, [0085]).
Claims 6-10 are rejected under 35 U.S.C. 103 as being unpatentable over Weggenmann et al. (US Pub. No. 2023/0185962, hereinafter ‘Weg’) in view of Lecuyer et al. (NPL: “Certified Robustness to Adversarial Examples with Differential Privacy”, hereinafter ‘Le’).
Regarding claim 6, the rejection of claim 2 is incorporated and Weg further teaches the medium of claim 2, wherein the obfuscation transform comprises a stochastic noise layer and wherein training the obfuscation transform comprises determining parameters of distribution of stochastic noise of the stochastic noise layer. (in 0040: … The differential privacy system 200 may obfuscate the latent space representation 214 by applying noise to the latent space representation 214. In some example embodiments, the noise comprises Gaussian noise.
For example, the noise may comprise a value that is randomly selected from a Gaussian distribution [claimed wherein the obfuscation transform comprises a stochastic noise layer and wherein training the obfuscation transform comprises determining parameters of distribution of stochastic noise of the stochastic noise layer; Examiner notes the process for generating/selecting random noise as the claimed stochastic noise layer]. Other types of noise are also within the scope of the present disclosure. In some example embodiments, the encoding of the input data 205 comprises inferring latent space parameters of a latent space distribution based on the input data 205, where the latent space parameters comprise a mean and a standard deviation. The inferring of the latent space parameters may comprise bounding the mean within a finite space and using a global value for the standard deviation [claimed wherein the obfuscation transform comprises a stochastic noise layer and wherein training the obfuscation transform comprises determining parameters of distribution of stochastic noise of the stochastic noise layer], where the global value is independent of the input data 205…)
While Weg teaches the use of a noise layer having parameters of a Gaussian distribution that can be sampled randomly to add random noise to the neural network machine learning computations, Weg does not expressly teach the use of the noise distribution parameters as part of the noise layer associated with a neural network machine learning model. Le teaches the use of the noise distribution parameters as part of the noise layer associated with a neural network machine learning model, as depicted in Fig. 1 [figure of Le reproduced in the original action] and in pg. 657: … A PixelDP DNN includes in its architecture a DP noise layer that randomizes the network’s computation, to enforce DP bounds on how much the distribution over its predictions can change with small, norm-bounded changes in the input…
Weg and Le are analogous art because both involve developing information processing techniques using machine learning systems and algorithms. It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of the prior art for processing information using a noise layer with distributed noise parameters in a deep neural network as disclosed by Le with the method of developing information processing with random noise parameters with a neural network learning model as disclosed by Weg. One of ordinary skill in the art would have been motivated to combine the methods disclosed by Le and Weg to utilize a noise layer in a deep neural network machine learning model to randomize the network’s computation (Le, pg. 657, left col.); Doing so allows the learning model to enforce bounds on how much the distribution over its predictions can change with small, norm-bounded changes in the input (Le, pg. 657, left col.).
Regarding claim 7, the rejection of claim 6 is incorporated and Weg in combination with Le further teaches the medium of claim 6, wherein the stochastic noise layer is applied to input into the machine learning model. (in 0040: … The differential privacy system 200 may obfuscate the latent space representation 214 by applying noise to the latent space representation 214. In some example embodiments, the noise comprises Gaussian noise.
For example, the noise may comprise a value that is randomly selected from a Gaussian distribution [claimed wherein the stochastic noise layer is applied to input into the machine learning model; Examiner notes the process for generating/selecting random noise as the claimed stochastic noise layer]. Other types of noise are also within the scope of the present disclosure. In some example embodiments, the encoding of the input data 205 comprises inferring latent space parameters of a latent space distribution based on the input data 205 [claimed wherein the stochastic noise layer is applied to input into the machine learning model], where the latent space parameters comprise a mean and a standard deviation. The inferring of the latent space parameters may comprise bounding the mean within a finite space and using a global value for the standard deviation, where the global value is independent of the input data 205…)
Additionally, Le teaches applying the noise layer to the input into the machine learning model, as depicted in Fig. 1 [figure of Le reproduced in the original action] and in pg. 657: … A PixelDP DNN includes in its architecture a DP noise layer that randomizes the network’s computation, to enforce DP bounds on how much the distribution over its predictions can change with small, norm-bounded changes in the input… It would have been obvious to one of ordinary skill in the art before the effective filing date of the present application to combine the teachings of Weg and Le for the same reasons disclosed above.
Regarding claim 8, the rejection of claim 6 is incorporated and Weg in combination with Le further teaches the medium of claim 6, wherein the stochastic noise layer is applied to input into a layer of the machine learning model. (in 0019: … The variational autoencoder 210 may comprise an encoder 212 and a decoder 216. In training the variational autoencoder, the encoder 212 may encode input data 205 as a probability distribution in a latent space representation 214 of the input data 205, data from the probability distribution of the latent space representation 214 may be sampled, the sampled data may be decoded [claimed wherein the stochastic noise layer is applied to input into a layer of the machine learning model] by the decoder 216, a reconstruction error may be computed, and the reconstruction error may be backpropagated through the variational encoder 210…; And in 0026-0028: In some example embodiments, the bounding of the mean µ = µ(x) and the global standard deviation σ are achieved by modifications to the encoder 212 [claimed wherein the stochastic noise layer is applied to input into a layer of the machine learning model, as the decoder input layer]… The differential privacy system 200 may translate this value to the actual Rényi or differential privacy guarantees as expressed by the parameters (α,ε) or (ε,δ), respectively…. Since (ε,δ) differential privacy guarantees may be derived from Rényi differential privacy (RDP), the differential privacy system 200 may use RDP to optimize the privacy parameters, such as by using the following steps:)
Additionally, Le teaches applying the noise layer to input into a layer of the machine learning model, as depicted in Fig. 1 [figure of Le reproduced in the original action] and in pg.
657: … A PixelDP DNN includes in its architecture a DP noise layer that randomizes the network’s computation, to enforce DP bounds on how much the distribution over its predictions can change with small, norm-bounded changes in the input… It would have been obvious to one of ordinary skill in the art before the effective filing date of the present application to combine the teachings of Weg and Le for the same reasons disclosed above.
Regarding claim 9, the rejection of claim 8 is incorporated and Weg in combination with Le further teaches the medium of claim 8, wherein the stochastic noise layer is applied to embedded values within the machine learning model. (in 0019: … The variational autoencoder 210 may comprise an encoder 212 and a decoder 216. In training the variational autoencoder, the encoder 212 may encode input data 205 as a probability distribution in a latent space representation 214 of the input data 205, data from the probability distribution of the latent space representation 214 [claimed wherein the stochastic noise layer is applied to embedded values within the machine learning model] may be sampled, the sampled data may be decoded by the decoder 216, a reconstruction error may be computed, and the reconstruction error may be backpropagated through the variational encoder 210…; And in 0026-0028: In some example embodiments, the bounding of the mean µ = µ(x) and the global standard deviation σ are achieved by modifications to the encoder 212 [claimed wherein the stochastic noise layer is applied to embedded values within the machine learning model]… The differential privacy system 200 may translate this value to the actual Rényi or differential privacy guarantees as expressed by the parameters (α,ε) or (ε,δ), respectively…. Since (ε,δ) differential privacy guarantees may be derived from Rényi differential privacy (RDP), the differential privacy system 200 may use RDP to optimize the privacy parameters, such as by using the following steps:)
Additionally, Le teaches applying the noise layer to input into a layer of the machine learning model, as depicted in Fig. 1 [figure of Le reproduced in the original action] and in pg. 657: … A PixelDP DNN includes in its architecture a DP noise layer that randomizes the network’s computation, to enforce DP bounds on how much the distribution over its predictions can change with small, norm-bounded changes in the input [as the embedded input data from layer 1 of the input bounded by the noise layer, as claimed wherein the stochastic noise layer is applied to embedded values within the machine learning model]… It would have been obvious to one of ordinary skill in the art before the effective filing date of the present application to combine the teachings of Weg and Le for the same reasons disclosed above.
Regarding claim 10, the rejection of claim 6 is incorporated and Weg in combination with Le further teaches the medium of claim 6, wherein the trained obfuscation transform is configured to obfuscate data designated as being sensitive. (data related to privacy as the claimed sensitive data, in 0009-0011: … However, this processing of data involves risks to data protection.
While sensor data itself does not contain immediate personal identifiers (e.g., usernames, IDs, email addresses, or phone numbers) that are subject to strict data protection regulations, it still is highly privacy-sensitive [claimed wherein the trained obfuscation transform is configured to obfuscate data designated as being sensitive]. Characteristic patterns, such as movements in motion sensor data, may reveal the identity of individuals… Raw data usually contains information that is not necessary for the task at hand, but that may be (mis)used to gather additional knowledge. Current data obfuscation techniques to reduce the amount of sensitive information [claimed wherein the trained obfuscation transform is configured to obfuscate data designated as being sensitive] that may be leaked when sharing sequential data in order to protect the privacy of individuals or entities to which the data belongs either sacrifice privacy protection for the data in order to maximize the utility of the data or sacrifice the utility of the data in order to maximize privacy protection for the data… By applying one or more of the solutions disclosed herein, some technical effects of the system and method of the present disclosure are to provide a computer system that is specially-configured to implement a differentially private variational autoencoder for data obfuscation [claimed wherein the trained obfuscation transform is configured to obfuscate data designated as being sensitive]…)
Claim 11 is rejected under 35 U.S.C. 103 as being unpatentable over Weggenmann et al. (US Pub. No. 2023/0185962, hereinafter ‘Weg’) in view of Li et al. (NPL: “A Framework for Enhancing Deep Neural Networks Against Adversarial Malware”, hereinafter ‘Li’) in further view of Singh et al. (NPL: “Disguise Resilient Face Verification”, hereinafter ‘Sin’).
Regarding claim 11, the rejection of claim 2 is incorporated and Weg further teaches the medium of claim 2, wherein the machine learning model is an … model; (in 0035: … For example, the encoder 212 and the decoder 216 of the variational autoencoder 210 [e.g., wherein the machine learning model is an … model] may each comprise their own corresponding plurality of long short-term memory cells, or the encoder 212 and the decoder 216 of the variational autoencoder 210 may each comprise their own corresponding plurality of gated recurrent units.) the machine learning model comprises an image-based model, language-based model, or tabular-data-based model; (as processing data in 0009-0010: There are many uses and benefits of collecting and sharing data, such as for applications ranging from predictive maintenance over supply planning, personal health monitoring, and disease diagnosis and prevention. However, this processing of data involves risks to data protection. While sensor data [claimed the machine learning model comprises an image-based model, language-based model] itself does not contain immediate personal identifiers (e.g., usernames, IDs, email addresses, or phone numbers) that are subject to strict data protection regulations, it still is highly privacy-sensitive. Characteristic patterns, such as movements in motion sensor data [claimed the machine learning model comprises an image-based model], may reveal the identity of individuals… Raw data [claimed the machine learning model comprises an image-based model, language-based model] usually contains information that is not necessary for the task at hand, but that may be (mis)used to gather additional knowledge.
Current data obfuscation techniques to reduce the amount of sensitive information that may be leaked when sharing sequential data in order to protect the privacy of individuals or entities to which the data belongs either sacrifice privacy protection for the data in order to maximize the utility of the data…) the machine learning model is at least one of an inference model, a classification model, a prediction model, or a transformer; (in 0011: … By applying one or more of the solutions disclosed herein, some technical effects of the system and method of the present disclosure are to provide a computer system that is specially-configured to implement a differentially private variational autoencoder for data obfuscation [claimed the machine learning model is at least one of an inference model, a classification model, a prediction model, or a transformer; Examiner notes the transformer as a type of predictive or classification model or inference model for modeling input data as a predicted/inferred/classified transformed data output (e.g., obfuscated data)]... The computer system may then sample data from a probability distribution that is based on the mean and the standard deviation of the obfuscated latent space representation. Finally, the computer system may decode the sampled data into output data.) the obfuscation transform is applied to at least a portion of the … model; and (in 0011: … By applying one or more of the solutions disclosed herein, some technical effects of the system and method of the present disclosure are to provide a computer system that is specially-configured to implement a differentially private variational autoencoder for data obfuscation… The computer system may then sample data from a probability distribution that is based on the mean and the standard deviation of the obfuscated latent space representation [claimed the obfuscation transform is applied to at least a portion of the … model]. Finally, the computer system may decode the sampled data into output data...) the obfuscation transform is trained by optimization of an objective function, the objective function minimizing … information and minimizing data loss. (in 0019: … In training the variational autoencoder, the encoder 212 may encode input data 205 as a probability distribution in a latent space representation 214 of the input data 205, data from the probability distribution of the latent space representation 214 may be sampled, the sampled data may be decoded by the decoder 216, a reconstruction error may be computed, and the reconstruction error may be backpropagated through the variational encoder 210. The encoder 212 and decoder 216 may be trained jointly such that output data 225 of the variational autoencoder 210 minimizes a reconstruction error (e.g., minimizing the differences between the original input values and the reconstructed output values) [claimed the obfuscation transform is trained by optimization of an objective function, the objective function minimizing … information and minimizing data loss].)
While Weg teaches data processing using machine learning techniques as disclosed above, Weg does not expressly teach the following limitations: wherein the machine learning model is an ensemble model; … transform is applied to at least a portion of the ensemble model; and … transform is trained by optimization of an objective function, the objective function minimizing mutual information and minimizing data loss.
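Before turning to Li's disclosure, the random-subspace ensemble idea the examiner draws from it can be made concrete. A hedged, numpy-only sketch; the nearest-centroid members, sizes, and names are illustrative stand-ins, not Li's hardened classifiers:

    import numpy as np

    rng = np.random.default_rng(0)
    X = (rng.random((200, 50)) > 0.5).astype(float)  # input transformation via binarization
    y = rng.integers(0, 2, size=200)

    # Each ensemble member f_i sees a random subspace of the feature space.
    members = []
    for _ in range(5):
        idx = rng.choice(50, size=25, replace=False)
        Xs = X[:, idx]
        members.append((idx, Xs[y == 0].mean(axis=0), Xs[y == 1].mean(axis=0)))

    def predict(x: np.ndarray) -> int:
        """Ensemble f_en: majority vote of nearest-centroid members."""
        votes = [int(np.linalg.norm(x[idx] - c1) < np.linalg.norm(x[idx] - c0))
                 for idx, c0, c1 in members]
        return int(np.mean(votes) >= 0.5)

    print(predict(X[0]))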
Li teaches the claim limitations: wherein the machine learning model is an ensemble model; (as depicted in Fig. 1 [figure image in original] and in Pg. 741, Sec. B, Turning Principle into a framework: … We propose using an ensemble [claimed wherein the machine learning model is an ensemble model] fen of classifiers … (according to Principle 3), which are trained from random subspace of the original feature space. Each classifier fi is hardened by three countermeasures: input transformation via binarization…)

obfuscation transform is applied to at least a portion of the ensemble model; and (in Pg. 741, Sec. B, Turning Principle into a framework: … We propose using an ensemble [claimed … obfuscation transform is applied to at least a portion of the ensemble model] fen of classifiers … (according to Principle 3), which are trained from random subspace of the original feature space. Each classifier fi is hardened by three countermeasures: input transformation via binarization… Algorithm 2 integrates all pieces for training individual classifiers. The training procedure consists of the following steps. (i) Given a training set (X, Y), we randomly select a ratio L of sub-features to the feature set, and then transform X into X̄ [claimed … obfuscation transform is applied to at least a portion of the ensemble model] via the binarization discussed above… And claimed obfuscation transform in Pg. 744, Sec. C, Right Col.: … Moreover, user-specified strings can be obfuscated using encryption and the cipher-text will be decrypted at running time. Further, the attacker can hide public and static system APIs using Java reflection and encryption together. This is shown by the example in List 1. All of the modifications mentioned above only obfuscate an APK without changing its functionalities…)

obfuscation transform is trained by optimization of an objective function, the objective function minimizing mutual information and minimizing data loss. (as depicted in Fig. 1 and in Pg. 741, Left Col.: … A DAE … unifies two components: an encoder … that maps an input M(x) to a latent representation … and a decoder … that reconstructs x from r, where H is the latent representation space and M refers to some operations applied to x (e.g., adding Gaussian noises to x). Vincent et al. [32] showed that the lower bound of the mutual information between x and r is maximized when the reconstruction error is minimized [claimed obfuscation transform is trained by optimization of an objective function, the objective function minimizing mutual information and minimizing data loss]. And claimed obfuscation transform in Pg. 744, Sec. C, Right Col.: … Moreover, user-specified strings can be obfuscated using encryption and the cipher-text will be decrypted at running time. Further, the attacker can hide public and static system APIs using Java reflection and encryption together. This is shown by the example in List 1. All of the modifications mentioned above only obfuscate an APK without changing its functionalities…)
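For orientation only, a minimal sketch of Li's ensemble construction (members trained on random subspaces of the original feature space, with input transformation via binarization and a majority vote over the members) might look as follows. The nearest-centroid member classifier is a deliberately simplified stand-in for the hardened deep classifiers in the paper, and all names and data here are hypothetical:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy stand-in for binarizable malware features, with 0/1 labels.
X = rng.random((200, 50))
y = (X[:, :5].sum(axis=1) > 2.5).astype(int)

def binarize(X, threshold=0.5):
    """Input transformation via binarization (one of the countermeasures)."""
    return (X > threshold).astype(float)

def train_member(X, y, ratio=0.5):
    """Train one member on a random subspace of the original feature space."""
    idx = rng.choice(X.shape[1], size=int(ratio * X.shape[1]), replace=False)
    Xb = binarize(X[:, idx])
    # Nearest-centroid member: a simple stand-in for the hardened f_i.
    return idx, Xb[y == 0].mean(axis=0), Xb[y == 1].mean(axis=0)

def predict_member(member, X):
    """Classify by whichever class centroid is nearer in the subspace."""
    idx, c0, c1 = member
    Xb = binarize(X[:, idx])
    return (np.linalg.norm(Xb - c1, axis=1) < np.linalg.norm(Xb - c0, axis=1)).astype(int)

ensemble = [train_member(X, y) for _ in range(7)]           # the ensemble f_en
votes = np.stack([predict_member(m, X) for m in ensemble])
y_hat = (votes.mean(axis=0) > 0.5).astype(int)              # majority vote
print("training accuracy:", (y_hat == y).mean())
```

The vote over independently hardened members is the point of the construction: any single member sees only a random slice of the binarized feature space, so an adversarial perturbation crafted against one member does not automatically transfer to the rest.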
Additionally, Li teaches the limitations: the machine learning model comprises an image-based model, language-based model, or tabular-data-based model; (language-based model for processing application data in Pg. 742, Sec. IV(A), Data pre-processing: Dataset. The Drebin dataset [56] contains 5615 malicious Android packages (APKs) [claimed the machine learning model comprises …, language-based model, …], and also provides features of 123453 benign examples, together with their SHA256 values but not the examples themselves… Feature Extraction. APK is an archive file containing AndroidManifest.xml [claimed the machine learning model comprises … tabular-data-based model], classes.dex, and others (e.g., res, assets). The manifest file describes an APK's information, such as the name, version, announcement, library files used by the application. The source code is compiled to build the .dex file which is understandable by the Java Virtual Machine…)

the machine learning model is at least one of an inference model, a classification model, a prediction model, or a transformer; (in Pg. 741, Sec. B, Turning Principle into a framework: … We propose using an ensemble [claimed the machine learning model is at least one of an inference model, a classification model, a prediction model, or a transformer; Examiner notes that classifiers are considered a type of prediction/inference model for producing an output from transformed (encoded) input, see Fig. 1] fen of classifiers … (according to Principle 3), which are trained from random subspace of the original feature space. Each classifier fi is hardened by three countermeasures: input transformation via binarization… Algorithm 2 integrates all pieces for training individual classifiers. The training procedure consists of the following steps. (i) Given a training set (X, Y), we randomly select a ratio L of sub-features to the feature set, and then transform X into X̄ [claimed the machine learning model is at least one of an inference model, a classification model, a prediction model, or a transformer] via the binarization discussed above…)

Additionally, Sin teaches the transforming of input data for processing image data using neural network models, as claimed: the machine learning model comprises an image-based model, language-based model, or tabular-data-based model; … the obfuscation transform is trained by optimization of an objective function, the objective function minimizing mutual information and minimizing data loss. (in Pg. 3896-3897, Sec. III: Disguised face verification refers to the task of matching a given pair of facial images [claimed the machine learning model comprises an image-based model …] and classifying them as genuine (same class) or imposter (different class). Here, at least one of the images is disguised in nature, i.e., it contains variations due to disguise accessories. Owing to the presence of disguise accessories, often parts of the facial region are obfuscated or have a different appearance than their original self (e.g., due to make-up or plastic surgery)… In order to handle the above highlighted challenges, we propose an Encoder-Decoder model, termed as the Disguise Encoder-Decoder network (DED-Net). An Encoder-Decoder model consists of an encoder network which learns a representative feature for the given input, followed by a decoder network used to reconstruct the input image from the learned feature. Traditionally, in an encoder-decoder formulation, the optimization function minimizes the error between the input and the reconstructed output using the Euclidean distance… The proposed model is designed such that it is able to learn representations while encoding the (i) “direction” variations between the image vectors, i.e.
the locally altered features due to make-up or illumination, (ii) “distribution” of pixel values, i.e., learn features resilient to noise and disturbance due to obfuscation or additional artifacts, while incorporating … Applying Mutual Information between the learned representations ensures similar features for same-class samples; a distinctive property useful for face verification models. Fig. 3 presents a diagrammatic representation of the DED-Net model for an input pair of images. Thus, for a pair of images (X1, X2) and label y, the proposed DED-Net model is mathematically expressed as: [equation image in original] And minimization as depicted in Fig. 3 [figure image in original]: Fig. 3. Overview of the proposed Disguise Resilient Framework on full face images. Pair of images are provided to the DED-Net model for feature extraction, followed by concatenation and input to the classifier. The classifier outputs a score denoting whether the two images belong to the same subject or not. The DED-Net is a convolutional encoder-decoder model, containing convolution (conv.) and deconvolution (deconv.) filters at different layers (represented by squares). It is optimized via the Cosine and Mahalanobis distance based minimization between the input and the reconstruction [claimed … the obfuscation transform is trained by optimization of an objective function, the objective function minimizing mutual information and minimizing data loss], along with the Mutual Information based loss between the features [claimed … the obfuscation transform is trained by optimization of an objective function, the objective function minimizing mutual information and minimizing data loss]. The classifier is a classical neural network containing neurons in each layer (represented as circles). The different colors in the extracted feature signify different values at each position.)
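For orientation only, a minimal sketch of a DED-Net-style objective (a reconstruction distance per image of the pair, combined with a mutual-information term between the learned pair features) might look as follows. The Gaussian-correlation mutual-information proxy, the loss name dedn_style_loss, and the weight lam are simplifying assumptions for illustration, not the estimator or formulation used in the reference:

```python
import numpy as np

def cosine_distance(a, b):
    """Reconstruction term: 1 - cosine similarity of input vs. reconstruction."""
    return 1.0 - (a @ b) / (np.linalg.norm(a) * np.linalg.norm(b))

def gaussian_mi(f1, f2):
    """Toy mutual-information proxy: treat paired feature entries as draws
    from a bivariate Gaussian, for which MI = -0.5 * log(1 - rho^2)."""
    rho = np.corrcoef(f1, f2)[0, 1]
    return -0.5 * np.log(max(1.0 - rho**2, 1e-9))

def dedn_style_loss(x1, x2, r1, r2, f1, f2, lam=0.1):
    """Minimize per-image reconstruction error while rewarding shared
    information between the learned features of a genuine pair."""
    recon = cosine_distance(x1, r1) + cosine_distance(x2, r2)
    return recon - lam * gaussian_mi(f1, f2)

rng = np.random.default_rng(2)
x1, x2 = rng.normal(size=64), rng.normal(size=64)               # toy "images"
r1 = x1 + 0.1 * rng.normal(size=64)                             # reconstructions
r2 = x2 + 0.1 * rng.normal(size=64)
f1 = rng.normal(size=16)
f2 = 0.8 * f1 + 0.2 * rng.normal(size=16)                       # same-class features
print("pair loss:", dedn_style_loss(x1, x2, r1, r2, f1, f2))
```

Note the sign convention: the mutual-information term is subtracted, so minimizing the loss pushes the paired features toward shared information, consistent with Sin's statement that mutual information between the learned representations ensures similar features for same-class samples.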
Weg, Li, and Sin are analogous art because all three involve developing information processing techniques using machine learning systems and algorithms. It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of the prior art for processing information using an ensemble (a collection) of neural networks, as disclosed by Li, with the method of processing information with random noise parameters in a neural network learning model, as disclosed by Weg. One of ordinary skill in the art would have been motivated to combine the methods disclosed by Sin, Li, and Weg to develop deep neural networks that are robust against grey-box attacks (Li, Abstract) and to develop neural network learning systems for learning variation in the feature space generated by disguised (obfuscated) and non-disguised image data using mutual information (Sin, Abstract). Doing so helps to improve model accuracy for performing inference tasks (Li and Sin, Abstract).

Conclusion

The prior art made of record and not relied upon is considered pertinent to applicant's disclosure.

Zadeh et al. (US 20200184278): teaches that Gaussian noise is a stochastic noise, in [1780]: In one embodiment, noise is incorporated into the rendering in order to make the network more resilient to noise. In one embodiment, a stochastic noise (e.g., Gaussian) is applied to the rendering, e.g., in illumination, intensity, texture, color, contrast, saturation, edges, scale, angles, perspective, projection, skew, rotation, or twist, across or for portion(s) of the image.

Xiong et al. (US 12001577, hereinafter Xi): teaches adding noise to obfuscate the plaintext output. As described above, the process can be performed by a device running a secure service or any suitable device. The inferencing output is the resultant output of a machine learning model, which may comprise a neural network. The inferencing output data comprises plaintext and ciphertext which can be structured.

Chakraborty et al. (US 20190156061): teaches a noise propagation module that adds noise to the data, where the noise generation is performed using an anonymizer placed between an encoder and a decoder. The anonymizer ensures that appropriate noise is added to codes such that they are clustered or grouped and each cluster has at least k members. The anonymizer thus generates noisy codes. The noisy codes are then fed to the decoder, which then reconstructs the data. The reconstructed data is then passed according to a policy (e.g., through a nearest-neighbor decoder).

Chhabra et al. (US 20190354806): teaches the addition of one or more noise layers, as depicted in Fig. 2 [figure image in original], to encode input data features.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to OLUWATOSIN ALABI whose telephone number is (571) 272-0516. The examiner can normally be reached Monday-Friday, 8:00am-5:00pm EST. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Michael Huntley, can be reached at (303) 297-4307. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300. Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/OLUWATOSIN ALABI/
Primary Examiner, Art Unit 2129

Prosecution Timeline

Dec 07, 2023
Application Filed
Jan 08, 2026
Non-Final Rejection — §101, §102, §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12579409
IDENTIFYING SENSOR DRIFTS AND DIVERSE VARYING OPERATIONAL CONDITIONS USING VARIATIONAL AUTOENCODERS FOR CONTINUAL TRAINING
2y 5m to grant · Granted Mar 17, 2026
Patent 12572814
ARTIFICIAL NEURAL NETWORK BASED SEARCH ENGINE CIRCUITRY
2y 5m to grant · Granted Mar 10, 2026
Patent 12561570
METHODS AND ARRANGEMENTS TO IDENTIFY FEATURE CONTRIBUTIONS TO ERRONEOUS PREDICTIONS
2y 5m to grant · Granted Feb 24, 2026
Patent 12547890
AUTOREGRESSIVELY GENERATING SEQUENCES OF DATA ELEMENTS DEFINING ACTIONS TO BE PERFORMED BY AN AGENT
2y 5m to grant · Granted Feb 10, 2026
Patent 12536478
TRAINING DISTILLED MACHINE LEARNING MODELS
2y 5m to grant · Granted Jan 27, 2026
Based on this examiner's 5 most recent grants.

Prosecution Projections

1-2
Expected OA Rounds
58%
Grant Probability
85%
With Interview (+26.3%)
3y 8m
Median Time to Grant
Low
PTA Risk
Based on 199 resolved cases by this examiner. Grant probability derived from career allow rate.
