DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Response to Amendments
This Office action responds to the amendments filed on February 23, 2026, for application 18/529,923. Claims 2, 18, and 22-24 are amended, claims 20 and 21 are cancelled, and claims 3-9, 11-17, 19, and 22-32 are withdrawn. Claims 2-19 and 22-37 remain pending in the application.
Response to Arguments
The Examiner has fully considered the Applicant’s arguments filed on February 23, 2026, and the Examiner responds as provided below.
Regarding the Applicant’s response at page 9 of the Remarks that concerns the objection to claim 18, the amendment to claim 18 resolves the issue and the objection is withdrawn.
Regarding the Applicant’s response at page 9 of the Remarks that concerns the double patenting rejection of claims 2, 10, 18, 21, and 33-37, the amendment to claim 2 resolves the issue and the double patenting rejection is withdrawn.
Regarding the Applicant’s response at page 9 of the Remarks that concerns the § 112(b) rejection of claim 21, the cancellation of the claim renders the rejection moot, and the § 112(b) rejection is withdrawn.
Regarding the Applicant’s response at pages 9-12 of the Remarks that concerns the § 101 rejection, the Examiner respectfully finds that the Applicant’s arguments are not persuasive. The Applicant states, “Amended claim 2 also supplies an inventive concept under Step 2B through the specific and unconventional integration of NLP-produced privacy models with automated determination and enforcement in data-processing workflows.” However, when claim 2 is reviewed, nothing within claim 2 suggests the enforcement of privacy requirements within data-processing workflows. The presence of a limitation associated with an enforcement action could potentially provide a practical application under Step 2A, Prong II. As drafted, however, the absence of an enforcement action from the claim bolsters the Examiner’s finding that claim 2 fails to satisfy Steps 2A and 2B.
Regarding the Applicant’s response at pages 12 and 13 of the Remarks that concerns the § 103 rejection, the Applicant states, “Rudden is not ‘applying natural language processing to the data privacy requirements’ …but rather is using natural language processing to ‘detect’ private content.” (emphasis retained). Remarks at 13. However, the Examiner respectfully maintains that in order to “detect private content” via NLP, NLP must be applied to “data privacy requirements.” In other words, for NLP to detect private content, data privacy requirements must exist for NLP to ultimately distinguish between private and public data.
The Applicant further argues that “Rudden does not disclose ‘generating the data privacy requirements as the machine-readable representation.’” In essence, the issue here is that the Applicant seemingly fails to appreciate the breadth of the limitation “generating the data privacy requirements as the machine-readable representation.” Under the broadest reasonable interpretation of the claim, a “machine-readable representation” comprises any computer instruction that is executed by a processor. Given this interpretation, any “data privacy requirement” ever “generated” and subsequently executed by a processor as a “machine-readable representation” reads on this limitation. Accordingly, Rudden’s teaching of an “NLP-based detection mechanism to trigger rules in the system” suggests a “machine-readable representation” that is inherent in the operation of a computer, and the “NLP-based detection mechanism” is associated with the “data privacy requirements.”
Claim Rejections - 35 USC § 101
35 U.S.C. 101 reads as follows:
Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.
Independent Claim 2
Independent claim 2 is rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more.
The claim recites a method that encompasses a mental process. Under Step 2A, Prong I, “the ‘mental processes’ abstract idea grouping is defined as concepts performed in the human mind, and examples of mental processes include observations, evaluations, judgments, and opinions.” See MPEP § 2106.04(a)(2)(III). Additionally, “a mental process (thinking) that ‘can be performed in the human mind, or by a human using a pen and paper’” is an abstract idea. See id.
In claim 2, two steps of the claimed method can be conducted in the human mind or with pen and paper. First, “generating a machine-readable representation of processing operations” can be conducted with pen and paper, such as by writing down computer programming statements that can ultimately be input into a computer and read by a machine. Second, “determining one or more privacy-preserving techniques to be applied to the data based on data privacy requirements” can be conducted as a mental process, because the determination amounts to an evaluation or judgment that can be performed in the mind.
This judicial exception is not integrated into a practical application. Step 2A, Prong II of the eligibility analysis asks, “does the claim recite additional elements that integrate the judicial exception into a practical application?” See MPEP § 2106.04(II)(A)(2). Under Step 2A, Prong II, a practical application emerges when the claimed subject matter, considered as a whole, “meaningfully limits the claim by going beyond generally linking the use of the judicial exception to a particular technological environment, and thus transforms a claim into patent-eligible subject matter.” MPEP § 2106.04(d)(1). Here, claim 2 is linked to a “computer-implemented method” and a “machine-readable representation,” but the claimed subject matter lacks any practical application. Claim 2 generically recites “applying the determined one or more privacy-preserving techniques,” but fails to recite any specific limitation that yields a practical application. See MPEP § 2106.04(d) (stating “The courts have also identified limitations that did not integrate a judicial exception into a practical application: Merely reciting the words ‘apply it’ (or an equivalent) with the judicial exception, or merely including instructions to implement an abstract idea on a computer, or merely using a computer as a tool to perform an abstract idea….”). Claim 2 also recites the limitation of “applying natural language processing,” but under Step 2A, Prong II this limitation fails to yield a practical application because the privacy-preserving technique is still applied so broadly that a specific, practical application is absent from the claimed subject matter. See MPEP § 2106.04(II)(A)(2).
With respect to Step 2B, claim 2 does not include additional elements that are sufficient to amount to significantly more than the judicial exception. See MPEP § 2106.04. As noted above, claim 2 generically recites computer limitations, and accordingly, the claim “ha[s] broad applicability across many fields of endeavor [and does] not provide meaningful limitations that integrate a judicial exception into a practical application or amount to significantly more.” See MPEP § 2106.05(f). An inventive concept can emerge when the claimed subject matter yields “improvements to the functioning of a computer,” see MPEP § 2106.05(I)(A), but claim 2 recites no such improvement. In essence, claim 2 amounts to little more than a “drafting effort” to broadly claim the concept of a “privacy-preserving technique” that is “applied” to a wide variety of technological environments or fields of endeavor. Like Step 2A, which recognizes such a “drafting effort,” see MPEP § 2106.04, Step 2B also contemplates that “patent eligibility [does not] ‘depend simply on the draftsman’s art.’” See MPEP § 2106.05(I)(A). Claim 2 also recites the application of natural language processing, which is a well-understood, routine, conventional activity. See MPEP § 2106.05(I)(A). Furthermore, claim 2 recites “generating the data privacy requirements as the machine-readable representation,” but as discussed above, the generation of a “machine-readable representation” is a well-understood, routine, conventional activity. See id. Accordingly, for the reasons provided above, claim 2 fails to amount to significantly more under Step 2B.
Dependent Claim 10
Dependent claim 10 is rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more.
Claim 10 recites “automatically determined from a set of predefined techniques,” where the “determination” represents another step that can be performed mentally. For substantially the same reasons provided above for claim 2, claim 10 fails to encompass a practical application or an inventive concept that would satisfy the criteria of Step 2A, Prong II and Step 2B, respectively.
Dependent Claim 18
Dependent claim 18 is rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more.
Claim 18 recites “receiving information on processing operations that are to be performed on the data,” where the “receiving” represents another step that can be performed mentally (i.e., the mind can receive information, such as by listening). For substantially the same reasons provided above for claim 2, claim 18 fails to encompass a practical application or an inventive concept that would satisfy the criteria of Step 2A, Prong II and Step 2B, respectively.
Independent Claims 33, 34, and 36
Independent claims 33, 34, and 36 are substantially similar to independent claim 2 and dependent claims 18 and 20. Accordingly, claims 33, 34, and 36 are rejected under § 101 for the same reasons presented above.
Dependent Claims 35 and 37
Dependent claims 35 and 37 are rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more.
Claims 35 and 37 do not recite an additional step that can be performed mentally. Claims 35 and 37 recite the limitation of an “integrated circuit,” but under Step 2A, Prong II this limitation fails to yield a practical application because the privacy-preserving technique is still applied so broadly that a specific, practical application is absent from the claimed subject matter. See MPEP § 2106.04(II)(A)(2). Under Step 2B, this limitation also fails to yield an inventive concept because the use of an integrated circuit is a well-understood, routine, conventional activity. See MPEP § 2106.05(I)(A). Accordingly, claims 35 and 37 fail to encompass a practical application or an inventive concept that would satisfy the criteria of Step 2A, Prong II and Step 2B, respectively.
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
The following conventions apply to the mapping of the prior art to the claims:
Italicized text – claim language.
Parenthetical plain text – Examiner’s citation and explanation.
Citation without an explanation – an explanation has been previously provided for the respective limitation(s).
Quotation marks – language quoted from a prior art reference.
Underlining – language quoted from a claim.
Brackets – material altered from either a prior art reference or a claim, which includes the Examiner’s explanation that relates a claim limitation to the quoted material of a reference.
Braces – a limitation taught by another reference, but the limitation is presented with the mapping of the instant reference for context.
Numbered superscript – a first phrase to be moved upwards to the primary reference analysis.
Lettered superscript – a second phrase to be moved after the movement of the first phrase from which it was lifted, or more succinctly, move numbered material first, lettered material last.
A. Claims 2, 18, and 33-37 are rejected under 35 U.S.C. 103 as being unpatentable over Kodavanji et al. (US 2020/0250340, “Kodavanji”) in view of Dash et al. (US 10,963,590, “Dash”), and further in view of Rudden.
Regarding Claim 2
Kodavanji discloses
A computer-implemented method (Fig. 1, ¶ [0010], “Further, the computing systems may handle [via a computer implemented method] the PII, such as store, process, backup, or transfer the PII for various reasons.”) comprising:
generating a machine-readable representation of processing operations (¶¶ [0031]-[0032], “To facilitate compliance with the security rules during the handling of the first PII 204, the hosting system 101 may add [generate] metadata [machine-readable] tags to the first PII 204.”; and ¶ [0059], “For instance, based on the first metadata tag 210, if the compliance module 402 determines that the first PII 204 is to be protected using a particular minimum encryption level [with each encryption level representing one processing operation], the compliance module 402 can utilize the data encryption module 418 to encrypt the first PII 204 accordingly [amongst a plurality of processing operations].”),
wherein the processing operations are based on one or more algorithms applied to data (¶ [0059], “Further, based on the first metadata tag 210 [and associated encryption processing operation], the compliance module 402 may determine that the first PII [data] 204 is to be protected using [based on] a particular sanitization algorithm, and may instruct the data sanitization module 416 to utilize the appropriate sanitization algorithm to sanitize the first PII 204.”);
1 …determining one or more privacy-preserving techniques to be applied to the data (¶ [0059], “Further, based on the first metadata tag 210, the compliance module 402 may determine that the first PII [data] 204 is to be protected using a particular sanitization algorithm [as a privacy-preserving technique],…”) based on data privacy requirements for the data and the machine-readable representation of the processing operations to be performed on the data (¶¶ [0058]-[0059], “The compliance module 402 may handle the PII [data] received from the hosting system 101 based on the metadata tags [as machine-readable representations that are associated with particular processing operations to be performed on the data/PII] such that the compliance rules [data privacy requirements] associated with the PII [for the data] are complied with.”);
2 …applying the determined one or more privacy-preserving techniques (¶ [0059], “For instance, based on the first metadata tag 210, if the compliance module 402 determines that the first PII 204 is to be protected using a particular minimum encryption [privacy-preserving technique] level, the compliance module 402 can utilize the data encryption module 418 to encrypt [apply the privacy-preserving technique] the first PII 204 accordingly.”);
3 …; and
4 ….
Kodavanji does not disclose
1 automatically…
2 automatically…
3 applying natural language processing to the data privacy requirements;
4 generating the data privacy requirements as the machine-readable representation.
Dash, however, discloses
1 automatically… (Col. 5:5-15, “Accordingly, anonymization logic 160 is provided on anonymization server 135. Briefly, the anonymization logic 160 may be configured to cause the server 135 to automatically anonymize (e.g., identically replace the sensitive information of) the service request document(s) stored in service request document repository 155 in order to generate anonymized service request documents 165.”, and further noting that it would be obvious to one skilled in the art to “automatically” perform functions with a computer, as opposed to relying upon manual actions by an administrator)
2 automatically… (Col. 5:5-15)
Regarding the combination of Kodavanji and Dash, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the PII security system of Kodavanji to arrive at the claimed invention. KSR establishes that a rationale for obviousness is proven by showing a “use of [a] known technique to improve similar devices in the same way.” See MPEP § 2143(I)(C).
To substantiate the conclusion of obviousness under this KSR rationale, the Examiner finds pursuant to MPEP § 2143(I)(C):
1) the prior art contained a base system, namely the PII security system of Kodavanji, upon which the claimed invention can be seen as an “improvement” through the use of a PII automation feature;
2) the prior art contained a “comparable” system, namely the data anonymization system of Dash, that has been improved in the same way as the claimed invention through the PII automation feature; and
3) one of ordinary skill in the art could have applied the known improvement technique of applying the PII automation feature to the base PII security system of Kodavanji, and the results would have been predictable to one of ordinary skill in the art.
Rudden, however, discloses
3 applying natural language processing to the data privacy requirements (Col. 10:43-53, “Further features described herein include the use [application] of natural language processing and machine learning algorithms to harvest dynamic based privacy requirements [i.e., the “harvesting” relies upon the application of NLP to data privacy requirements] (‘locality’) and creation of contextual-based ontological systems that are employed through a rule-based engine.”; and Col. 7:51-8:6, “Contextual understanding [such as natural language processing] informs of privacy requirements applicable to the digital information, and the selected training sets [or alternatively the application of NLP] can be reflective of those privacy requirements applicable to digital information.”);
4 generating the data privacy requirements as the machine-readable representation (Col. 10:43-53, “A machine learning capability is leveraged [generated] to build an ontology that feeds an NLP-based detection mechanism [via a machine-readable representation of the data privacy requirements] to trigger rules in the system.”).
Regarding the combination of Kodavanji-Dash and Rudden, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the PII security system of Kodavanji-Dash to arrive at the claimed invention. KSR establishes that a rationale for obviousness is proven by showing a “use of [a] known technique to improve similar devices in the same way.” See MPEP § 2143(I)(C).
To substantiate the conclusion of obviousness under this KSR rationale, the Examiner finds pursuant to MPEP § 2143(I)(C):
1) the prior art contained a base system, namely the PII security system of Kodavanji-Dash, upon which the claimed invention can be seen as an “improvement” through the use of a NLP feature;
2) the prior art contained a “comparable” system, namely the PII system of Rudden, that has been improved in the same way as the claimed invention through the NLP feature; and
3) one of ordinary skill in the art could have applied the known improvement technique of applying the NLP feature to the base PII security system of Kodavanji-Dash, and the results would have been predictable to one of ordinary skill in the art.
Regarding Claim 18
Kodavanji in view of Dash (“Kodavanji-Dash”) discloses the computer-implemented method of claim 2, and Kodavanji further discloses
further comprising:
receiving information on processing operations that are to be performed on data (Fig. 4, ¶ [0053], “The data processing center 208 may be connected to the hosting system 101 through a communication network (not shown in FIG. 4) to receive data 400 from the hosting system 101. The data 400 may include the encrypted tags, such as the first metadata tag [that possesses information on processing operations to be performed on the data] 210 and the second metadata tag 214, and the encrypted PII, such as the first PII 204.”),
the information specifying the one or more algorithms to be applied to the data (¶ [0059], “Further, based on the first metadata tag [possessing information] 210, the compliance module 402 may determine that the first PII [data] 204 is to be protected using a particular [specified] sanitization algorithm, and may instruct the data sanitization module 416 to utilize the appropriate sanitization algorithm to sanitize the first PII 204.”).
Regarding Claim 33
With respect to claim 33, the reasoning given earlier for claim 2 applies, mutatis mutandis, to the subject matter of claim 33. Therefore, claim 33 is rejected, for similar reasons, under the grounds set forth for claim 2.
Regarding Claim 34
With respect to claim 34, the reasoning given earlier for claims 2 and 18 applies, mutatis mutandis, to the subject matter of claim 34. Therefore, claim 34 is rejected, for similar reasons, under the grounds set forth for claims 2 and 18.
Regarding Claim 35
Kodavanji-Dash discloses the processing device for configuring data protection settings of a system of claim 34, and Kodavanji further discloses
A processing device for configuring data protection settings of a system (¶¶ [0058]-[0059]), comprising:
an integrated circuit adapted to perform the method of claim 34 (¶¶ [0018]-[0019], “FIG. 1 illustrates a system 100 to facilitate compliance with security rules associated with Personally Identifiable Information (PII), according to an example implementation of the present subject matter.”; and “The hosting system 101 includes a processor [integrated circuit] 102 and a memory 104 coupled to the processor 102.”).
Regarding Claim 36
With respect to claim 36, the reasoning given earlier for claims 2 and 18 applies, mutatis mutandis, to the subject matter of claim 36. Therefore, claim 36 is rejected, for similar reasons, under the grounds set forth for claims 2 and 18.
Regarding Claim 37
With respect to claim 37, the reasoning given earlier for claim 35 applies, mutatis mutandis, to the subject matter of claim 37. Therefore, claim 37 is rejected, for similar reasons, under the grounds set forth for claim 35.
B. Claim 10 is rejected under 35 U.S.C. 103 as being unpatentable over Kodavanji in view of Dash, and further in view of Kothari et al. (US 8,694,646, “Kothari”).
Regarding Claim 10
Kodavanji-Dash discloses the computer-implemented method of claim 2, and Kodavanji further discloses
wherein the one or more privacy-preserving techniques (¶ [0059]) are…1
Dash, however, discloses
1 automatically…a (Col. 5:5-15)
Regarding the combination of Kodavanji and Dash, the rationale to combine is the same as provided for claim 2 due to the overlapping subject matter of claims 2 and 10.
Kodavanji-Dash does not disclose
a …determined from a set of predefined techniques.
Kothari, however, discloses
a …determined from a set of predefined techniques (Col. 7:45-52, “The user may select one or more of the [predefined] anonymization strategies [and corresponding techniques] to be applied to various data fields of the application, using the user computer.”; Col. 9:14-33, “The anonymization technique selected [determined] for a data field may be based upon multiple factors. One of the factors is level of desired security. One of the other factors is data attribute preservation for the data field.”; and Col. 9:34-45, “Anonymization techniques may be broadly divided into two [predefined] categories. One, a token based anonymization. The token based anonymization may be implemented in the tokenization module 412 and may require local storage of the tokens in the token vault 414. Another technique is to use a symmetric key encryption based anonymization.”).
Regarding the combination of Kodavanji-Dash and Kothari, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the PII security system of Kodavanji-Dash to arrive at the claimed invention. KSR establishes that a rationale for obviousness is proven by showing a “use of [a] known technique to improve similar devices in the same way.” See MPEP § 2143(I)(C).
To substantiate the conclusion of obviousness under this KSR rationale, the Examiner finds pursuant to MPEP § 2143(I)(C):
1) the prior art contained a base system, namely the PII security system of Kodavanji-Dash, upon which the claimed invention can be seen as an “improvement” through the use of a predefined techniques feature;
2) the prior art contained a “comparable” system, namely the data anonymization system of Kothari, that has been improved in the same way as the claimed invention through the predefined techniques feature; and
3) one of ordinary skill in the art could have applied the known improvement technique of applying the predefined techniques feature to the base PII security system of Kodavanji-Dash, and the results would have been predictable to one of ordinary skill in the art.
Conclusion
Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to D'ARCY WINSTON STRAUB whose telephone number is (303)297-4405. The examiner can normally be reached Monday-Friday 9:00-5:00 Mountain Time.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, WILLIAM KORZUCH can be reached at (571)272-7589. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/D'Arcy Winston Straub/Primary Examiner, Art Unit 2491