Prosecution Insights
Last updated: April 18, 2026
Application No. 18/680,072

DIFFERENTIAL PRIVACY WITH UTILITY INDICATOR

Final Rejection §103
Filed
May 31, 2024
Examiner
STRAUB, D'ARCY WINSTON
Art Unit
2491
Tech Center
2400 — Computer Networks
Assignee
BEIJING ZITIAO NETWORK TECHNOLOGY CO., LTD.
OA Round
2 (Final)
Grant Probability: 77% (Favorable)
Expected OA Rounds: 3-4
Time to Grant: 3y 1m
Grant Probability with Interview: 97%

Examiner Intelligence

Career Allow Rate: 77% (168 granted / 218 resolved; +19.1% vs TC avg), above average
Interview Lift: +20.0% among resolved cases with interview
Typical Timeline: 3y 1m average prosecution; 27 applications currently pending
Career History: 245 total applications across all art units

Statute-Specific Performance

§101: 7.2% (-32.8% vs TC avg)
§103: 57.6% (+17.6% vs TC avg)
§102: 6.1% (-33.9% vs TC avg)
§112: 24.3% (-15.7% vs TC avg)
Tech Center averages are estimates. Based on career data from 218 resolved cases.

Office Action

Final Rejection under 35 U.S.C. § 103
DETAILED ACTION

Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Response to Amendments
This office action responds to the amendments filed on January 20, 2026 for application 18/680,072. Claims 1, 7-8, and 14-20 are amended, and claims 1-20 remain pending in the application.

Response to Arguments
The Examiner has fully considered the Applicant’s arguments filed on January 20, 2026, and the Examiner responds as provided below. Regarding the Applicant’s response at page 7 of the Remarks that concerns the objection to the drawings, the amendment to the specification addresses the attendant issue and the objection is withdrawn. Regarding the Applicant’s response at pages 7-9 of the Remarks that concerns the § 101 rejection, the amendments to the independent claims are sufficient to bring the claims within the purview of eligible subject matter, and the § 101 rejection is withdrawn. Regarding the Applicant’s response at page 10 of the Remarks that concerns the § 112(b) rejection, the amendments to the claims adequately address the issue of indefiniteness, and the § 112(b) rejection is withdrawn. Regarding the Applicant’s response at pages 10-12 of the Remarks that concerns the § 103 rejection, the Applicant’s arguments in conjunction with the claim amendments are persuasive, and consequently the Examiner conducted a new prior art search. The Applicant’s arguments are now moot with respect to the pending claims because the arguments do not apply to some of the references currently used in the rejection of the aforementioned claims as detailed below.

Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C.
103 which forms the basis for all obviousness rejections set forth in this Office action: A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made. The following conventions apply to the mapping of the prior art to the claims: Italicized text – claim language. Parenthetical plain text – Examiner’s citation and explanation. Citation without an explanation – an explanation has been previously provided for the respective limitation(s). Quotation marks – language quoted from a prior art reference. Underlining – language quoted from a claim. Brackets – material altered from either a prior art reference or a claim, which includes the Examiner’s explanation that relates a claim limitation to the quoted material of a reference. Braces – a limitation taught by another reference, but the limitation is presented with the mapping of the instant reference for context. Numbered superscript – a first phrase to be moved upwards to the primary reference analysis. Lettered superscript – a second phrase to be moved after the movement of the first phrase from which it was lifted, or more succinctly, move numbered material first, lettered material last. A. Claims 1-7 are rejected under 35 U.S.C. 103 as being unpatentable over Aravamudan et al. (US 12,182,179, “Aravamudan”) in view of Dwork et al. (US 2007/0143289, “Dwork”), and further in view of Curcio et al. (US 2017/0169253, “Curcio”). Regarding Claim 1 Aravamudan discloses A method (Fig. 
1, abstract) comprising: receiving, …1, a query for a computation result from a computation on a dataset in which privacy is to be maintained (Col. 8:12-45, “In another embodiment, processor may temporarily mask one or more private data elements of plurality of private data elements in real-time during access or query operations. As a non-limiting example, when a request to view or process plurality of private data elements 116 is made, sensitive information may be automatically masked to the user based on the user's access level, wherein plurality of private data elements [privacy] may remain intact [maintained] and unaltered within database 112.”; and Col. 12:12-29, “Processor 104 may generate [a computation after receiving a query] a set of obfuscated data elements 124 by sample from a gaussian noise distribution and add the sampled noise to private data elements [or computation result] describing the ages and treatment outcome values.”); generating a first random noise value according to particular differential privacy parameters (Col. 12:12-29, “Processor 104 may generate a set of obfuscated data elements 124 by sample from a gaussian [first random] noise distribution and add the sampled noise [value] to private data elements [or computation result] describing the ages and treatment outcome values.”; and Col. 16:58-17:17, “As a non-limiting example, verifying first distance measure 132 against distance range 136 may include verifying first distance measure 132 is greater than minimum threshold e.g., a minimum distance Dmin [as a particular differential privacy parameter], and is less than a maximum threshold e.g., a maximum distance Dmax, from at least a pre-determined number M [as another particular differential privacy parameter] of private data elements of plurality of private data elements 116.”); generating a noisy result by combining the computation result with the generated first random noise value (Col. 12:12-29, “With continued reference to FIG. 
1, in one or more embodiments, processor 104 may be configured to apply a gaussian noise [as generated random noise to generate a noisy result], uniform noise, Laplacian noise, and/or the like to one or more numerical or textural values in plurality of private data elements [computation result] in a deidentified medical dataset to prevent an inference of specific patient information from biometric or health measurements.”); determining a utility indicator value based on a distance between the noisy result and the computation result (Col. 16:58-17:17, “…while Dmax [as a utility indicator] may ensure obfuscated data elements [the noisy result] do not deviate too much from private data elements [computation result] thereby preserving the utility for further processing steps as described below.”; and “As a non-limiting example, verifying first distance measure 132 against distance range 136 may include verifying first distance measure 132 is greater than minimum threshold e.g., a minimum distance Dmin, and is less than a maximum threshold e.g., a maximum distance Dmax, from at least a pre-determined number M of private data elements of plurality of private data elements 116.”); in response to the utility indicator value being greater than a predetermined threshold (Col. 19:26-61, “With continued reference to FIG. 1, as another non-limiting example, maximum threshold Dmax -[utility indicator] and pre-determined number M of private data elements of plurality of private data elements may be determined based on an obfuscation risk tolerance level (i.e., obfuscation parameter 148).”, i.e., if the distance satisfies the threshold value of Dmax -as the utility indicator, the binary value is true or 1 and false or 0 otherwise): 2 …; 3 …; and determining an updated utility indicator (Col. 19:26-61, “With continued reference to FIG. 
1, as another non-limiting example, maximum threshold Dmax -[utility indicator] and pre-determined number M of private data elements of plurality of private data elements may be determined [updated] based on an obfuscation risk tolerance level [as changed as disclosed by ] (i.e., obfuscation parameter 148).”); 4 …; and 5 …. Aravamudan doesn’t disclose 1 …, from an entity…, 2 generating a second random noise value according to the particular differential privacy parameters; 3 generating an updated noisy result by combining the computation result with the second random noise value; 4 releasing the updated noisy result along with the updated utility indicator value; 5 providing the updated noisy result and the updated utility indicator value to the entity. Dwork, however, discloses 1 …, from an entity… (¶ [0054], “A privacy principal that wished to calculate the privacy parameter that is being used with her data might carry out a method such as that illustrated in FIG. 4. Such a method is for determining an amount of privacy guaranteed to privacy principals supplying data, wherein said data is used in calculating a collective noisy output.”), 4 releasing the noisy result along with the utility indicator value (Fig. 6, ¶¶ [0066]-[0068], “As described above, the released noisy [result] output may be computed using a privacy parameter [DMIN, D--MAX as disclosed by Aravamudan] provided [released] by a privacy principal, or provided [released] by an administrator and published [released] through interface 640, or derived from a plurality of privacy principal entries, such as by using a most restrictive privacy parameter selected from a plurality of privacy parameters.”).
5 providing the updated noisy result and the updated utility indicator value to the entity (¶¶ [0052]-[0054], “Once a query is performed on said data associated with a plurality of privacy principals, the collective output from the query can be calculated, and a noise value from the calculated distribution can be added to the collective output to produce a noisy collective output [noisy result] 305. finally, the collective noisy output can be disclosed [provided] 306, as can the noise distribution.”; and “One interesting aspect of the invention is that it permits a useful backwards operation in which, for a given differential diameter and a known noise distribution, the value of the privacy parameter epsilon can be determined. Thus, systems using the invention that disclose a query and a noise distribution also verifiably disclose the value of the privacy parameter that was used. Privacy principals [entities] can thus be informed [of the utility indicator value] or calculate for themselves the degree of privacy [and complementary utility] that is being used with their data.”). Regarding the combination of Aravamudan and Dwork, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the differential privacy system of Aravamudan to arrive at the claimed invention. KSR establishes that a rationale for obviousness is proven by showing a “use of [a] known technique to improve similar devices in the same way.” See MPEP § 2143(I)(C). 
To substantiate the conclusion of obviousness under this KSR rationale, the Examiner finds pursuant to MPEP § 2143(I)(C): 1) the prior art contained a base system, namely the differential privacy system of Aravamudan, upon which the claimed invention can be seen as an “improvement” through the use of a parameter release feature; 2) the prior art contained a “comparable” system, namely the differential privacy system of Dwork, that has been improved in the same way as the claimed invention through the parameter release feature; and 3) one of ordinary skill in the art could have applied the known improvement technique of applying the differential privacy feature to the base differential privacy system of Aravamudan, and the results would have been predictable to one of ordinary skill in the art. Curcio, however, discloses 2 generating a second random noise value according to the particular differential privacy parameters (Fig. 3, ¶ [0079], “In a differential privacy system such as privacy-aware query management system 100, the epsilon value [as a further particular differential privacy parameter] is decreased as more queries or query operations are run against a dataset. For example, the dataset of FIG. 3 may be given an epsilon of 1.00, but in order to ensure that ε-differential privacy is maintained across multiple queries, epsilon is reduced and additional [second random] noise [value] is added to subsequent queries.”); 3 generating an updated noisy result by combining the computation result with the second random noise value (Fig. 3, ¶ [0079], “In a differential privacy system such as privacy-aware query management system 100, the epsilon value is decreased as more queries or query operations are run against a dataset [to produce a computation result]. For example, the dataset of FIG. 
3 may be given an epsilon of 1.00, but in order to ensure that ε-differential privacy is maintained across multiple queries [computation results], epsilon is reduced and additional [second random] noise [value] is added to subsequent queries [to generate an updated noisy]. For example, the dataset of FIG. 3 may be given an epsilon of 1.00, but in order to ensure that ε-differential privacy is maintained across multiple queries, epsilon is reduced and additional noise is added to subsequent queries. This can result in a second query returning results using a Laplacian distribution with ε=0.50 and a third query using ε=0.10.”); Regarding the combination of Aravamudan-Dwork and Curcio, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the differential privacy system of Aravamudan-Dwork to arrive at the claimed invention. KSR establishes that a rationale for obviousness is proven by showing a “use of [a] known technique to improve similar devices in the same way.” See MPEP § 2143(I)(C). To substantiate the conclusion of obviousness under this KSR rationale, the Examiner finds pursuant to MPEP § 2143(I)(C): 1) the prior art contained a base system, namely the differential privacy system of Aravamudan-Dwork, upon which the claimed invention can be seen as an “improvement” through the use of an added noise feature; 2) the prior art contained a “comparable” system, namely the differential privacy system of Curcio, that has been improved in the same way as the claimed invention through the added noise feature; and 3) one of ordinary skill in the art could have applied the known improvement technique of applying the added noise feature to the base differential privacy system of Aravamudan-Dwork, and the results would have been predictable to one of ordinary skill in the art. 
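Setting the legal mapping aside for a moment, the claim 1 flow the Examiner is addressing (noise a computation result, score utility by its distance from the true value, and regenerate the noise if the first draw deviates too far) can be sketched in a few lines. This is a minimal illustration of the claimed steps as characterized in this Office Action, not any party's actual implementation; the function and parameter names are hypothetical, and the Laplace mechanism is one standard way to realize "random noise according to differential privacy parameters."

```python
import math
import random

def laplace_noise(scale, rng):
    """Draw one sample from a zero-mean Laplace distribution (inverse CDF)."""
    u = rng.random() - 0.5
    return -scale * math.copysign(1.0, u) * math.log(1.0 - 2.0 * abs(u))

def answer_query(true_result, epsilon, sensitivity, distance_threshold, rng):
    """Sketch of the claim 1 flow: noise the result, score utility by the
    distance between noisy and true values, and if that distance exceeds
    the predetermined threshold, generate a second noise value under the
    same differential privacy parameters and release the updated result."""
    scale = sensitivity / epsilon          # standard Laplace-mechanism scale
    noisy = true_result + laplace_noise(scale, rng)
    distance = abs(noisy - true_result)    # absolute distance (cf. claim 2)
    if distance > distance_threshold:
        # Second random noise value, same parameters per claim 1; Curcio's
        # variant would instead shrink epsilon for each additional draw.
        noisy = true_result + laplace_noise(scale, rng)
        distance = abs(noisy - true_result)
    indicator = 1 if distance <= distance_threshold else 0  # binary (claims 3-4)
    return noisy, indicator

rng = random.Random(7)
noisy, usable = answer_query(42.0, epsilon=1.0, sensitivity=1.0,
                             distance_threshold=3.0, rng=rng)
```

Note that releasing even a binary indicator leaks information about the realized noise, which is presumably why claim 6 applies a further privacy-preserving mechanism to the indicator itself before release.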
Regarding Claim 2 Aravamudan in view of Dwork, and further in view of Curcio (“Aravamudan-Dwork-Curcio”) discloses the method of claim 1, and Aravamudan further discloses wherein the distance represents one of an absolute distance or a relative distance between the noisy result and the computation result (Col. 16:58-17:17, “As a non-limiting example, verifying first distance measure 132 against distance range 136 may include verifying first distance measure 132 is greater than minimum threshold e.g., a minimum distance Dmin [as an absolute or relative distance], and is less than a maximum threshold e.g., a maximum distance Dmax, from at least a pre-determined number M of private data elements of plurality of private data elements 116.”). Regarding Claim 3 Aravamudan-Dwork-Curcio discloses the method of claim 1, and Aravamudan further discloses wherein the utility indicator value is binary (Col. 19:26-61, “With continued reference to FIG. 1, as another non-limiting example, maximum threshold Dmax and pre-determined number M of private data elements of plurality of private data elements may be determined based on an obfuscation risk tolerance level (i.e., obfuscation parameter 148).”, i.e., the “maximum threshold” can assume binary values—either it is above the threshold and false or 0, or it is below the threshold and true or 1). Regarding Claim 4 Aravamudan-Dwork-Curcio discloses the method of claim 3, and Aravamudan further discloses wherein the utility indicator has a first value if the distance satisfies a threshold value and a second value otherwise (Col. 19:26-61, “With continued reference to FIG. 
1, as another non-limiting example, maximum threshold Dmax -[utility indicator] and pre-determined number M of private data elements of plurality of private data elements may be determined based on an obfuscation risk tolerance level (i.e., obfuscation parameter 148).”, i.e., if the distance satisfies the threshold value of Dmax -as the utility indicator, the binary value is true or 1 and false or 0 otherwise). Regarding Claim 5 Aravamudan-Dwork-Curcio discloses the method of claim 4, and Aravamudan further discloses wherein the noisy result is labeled as usable if the corresponding utility indicator has the first value and the noisy result is labeled as non-usable if the corresponding utility indicator has the second value (Col. 16:58-17:17, “…while Dmax [utility indicator] may ensure obfuscated data elements do not deviate too much from private data elements [and are thereby usable] thereby preserving the utility for further processing steps as described below.”). Regarding Claim 6 Aravamudan-Dwork-Curcio discloses the method of claim 1, and Dwork further discloses further comprising: applying a privacy preserving mechanism to the utility indicator value before release (Fig. 6, ¶¶ [0066]-[0068], “Demonstrations [as a privacy preserving mechanism] may be provided by 643 to assist in selecting an appropriate parameter [utility indicator value]. For example, a demonstration may be given of how to choose a privacy parameter [the value for DMAX] associated with an amount of privacy loss that is acceptable to a privacy principal.”). Regarding the combination of Aravamudan and Dwork, the rationale to combine is the same as provided for claim 1 due to the overlapping subject matter of claims 1 and 6. Regarding Claim 7 Aravamudan-Dwork-Curcio discloses the method of claim 6, and Dwork further discloses wherein the privacy preserving mechanism (Fig. 
6, ¶¶ [0066]-[0068]) is applied such that for a given released first value there is a probability that the real utility indicator value is the second value (Figs. 2A-2C, ¶¶ [0043]-[0044], “A differential diameter and privacy parameter can be used in calculating each of the distributions in FIG. 2A-2C. A large differential diameter value will widen the distribution, increasing the probability that larger x-axis (noise) values will be used. Conversely, a small differential diameter will decrease the likelihood of large noise values.”). Regarding the combination of Aravamudan and Dwork, the rationale to combine is the same as provided for claim 1 due to the overlapping subject matter of claims 1 and 7. B. Claims 8-20 are rejected under 35 U.S.C. 103 as being unpatentable over Aravamudan et al. (US 12,182,179, “Aravamudan”) in view of Dwork et al. (US 2007/0143289, “Dwork”). Regarding Claim 8 Aravamudan discloses A system (Fig. 1, abstract) comprising: one or more computers and one or more storage devices on which are stored instructions that are operable, when executed by the one or more computers, to cause the one or more computers to perform operations (Fig. 8, Col. 57:1-17, “It is also contemplated that multiple computing devices may be utilized to implement a specially configured set of instructions for causing one or more of the devices to perform any one or more of the aspects and/or methodologies of the present disclosure. Computer system 900 includes a processor 904 and a memory 908 that communicate with each other, and with other components, via a bus 912. Bus 912 may include any of several types of bus structures including, but not limited to, a memory bus, a memory controller, a peripheral bus, a local bus, and any combinations thereof, using any of a variety of bus architectures.”; and Fig. 9, Col. 58:3-23, “Computer system 900 may also include a storage device 924. 
Examples of a storage device (e.g., storage device 924) include, but are not limited to, a hard disk drive, a magnetic disk drive, an optical disc drive in combination with an optical medium, a solid-state memory device, and any combinations thereof.”) comprising: receiving, …1, a query for a computation result from a computation on a dataset in which privacy is to be maintained (Col. 8:12-45, “In another embodiment, processor may temporarily mask one or more private data elements of plurality of private data elements in real-time during access or query operations. As a non-limiting example, when a request to view or process plurality of private data elements 116 is made, sensitive information may be automatically masked to the user based on the user's access level, wherein plurality of private data elements [privacy] may remain intact [maintained] and unaltered within database 112.”; and Col. 12:12-29, “Processor 104 may generate [a computation after receiving a query] a set of obfuscated data elements 124 by sample from a gaussian noise distribution and add the sampled noise to private data elements [or computation result] describing the ages and treatment outcome values.”); generating a random noise value according to particular differential privacy parameters (Col. 12:12-29, “Processor 104 may generate a set of obfuscated data elements 124 by sample from a gaussian [random] noise distribution and add the sampled noise [value] to private data elements [or computation result] describing the ages and treatment outcome values.”; and Col. 
16:58-17:17, “As a non-limiting example, verifying first distance measure 132 against distance range 136 may include verifying first distance measure 132 is greater than minimum threshold e.g., a minimum distance Dmin [as a particular differential privacy parameter], and is less than a maximum threshold e.g., a maximum distance Dmax, from at least a pre-determined number M [as another particular differential privacy parameter] of private data elements of plurality of private data elements 116.”); generating a noisy result by combining the computation result with the generated random noise value (Col. 12:12-29, “With continued reference to FIG. 1, in one or more embodiments, processor 104 may be configured to apply a gaussian noise [as generated random noise to generate a noisy result], uniform noise, Laplacian noise, and/or the like to one or more numerical or textural values in plurality of private data elements [computation result] in a deidentified medical dataset to prevent an inference of specific patient information from biometric or health measurements.”); determining a utility indicator value based on a distance between the noisy result and the computation result (Col. 16:58-17:17, “…while Dmax [as a utility indicator] may ensure obfuscated data elements [the noisy result] do not deviate too much from private data elements [computation result] thereby preserving the utility for further processing steps as described below.”; and “As a non-limiting example, verifying first distance measure 132 against distance range 136 may include verifying first distance measure 132 is greater than minimum threshold e.g., a minimum distance Dmin, and is less than a maximum threshold e.g., a maximum distance Dmax, from at least a pre-determined number M of private data elements of plurality of private data elements 116.”); 2 …; and 3 …. Aravamudan doesn’t disclose 1 …, from an entity…, 2 releasing the noisy result along with the utility indicator value. 
3 providing the noisy result and the utility indicator value to the entity. Dwork, however, discloses 1 …, from an entity… (¶ [0054], “A privacy principal that wished to calculate the privacy parameter that is being used with her data might carry out a method such as that illustrated in FIG. 4. Such a method is for determining an amount of privacy guaranteed to privacy principals supplying data, wherein said data is used in calculating a collective noisy output.”), 2 releasing the noisy result along with the utility indicator value (Fig. 6, ¶¶ [0066]-[0068], “As described above, the released noisy [result] output may be computed using a privacy parameter [DMIN, D--MAX as disclosed by Aravamudan] provided [released] by a privacy principal, or provided [released] by an administrator and published [released] through interface 640, or derived from a plurality of privacy principal entries, such as by using a most restrictive privacy parameter selected from a plurality of privacy parameters.”). 3 providing the noisy result and the utility indicator value to the entity (¶¶ [0052]-[0054], “Once a query is performed on said data associated with a plurality of privacy principals, the collective output from the query can be calculated, and a noise value from the calculated distribution can be added to the collective output to produce a noisy collective output [noisy result] 305. finally, the collective noisy output can be disclosed [provided] 306, as can the noise distribution.”; and “One interesting aspect of the invention is that it permits a useful backwards operation in which, for a given differential diameter and a known noise distribution, the value of the privacy parameter epsilon can be determined. Thus, systems using the invention that disclose a query and a noise distribution also verifiably disclose the value of the privacy parameter that was used. 
Privacy principals [entities] can thus be informed [of the utility indicator value] or calculate for themselves the degree of privacy [and complementary utility] that is being used with their data.”). Regarding the combination of Aravamudan and Dwork, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the differential privacy system of Aravamudan to arrive at the claimed invention. KSR establishes that a rationale for obviousness is proven by showing a “use of [a] known technique to improve similar devices in the same way.” See MPEP § 2143(I)(C). To substantiate the conclusion of obviousness under this KSR rationale, the Examiner finds pursuant to MPEP § 2143(I)(C): 1) the prior art contained a base system, namely the differential privacy system of Aravamudan, upon which the claimed invention can be seen as an “improvement” through the use of a parameter release feature; 2) the prior art contained a “comparable” system, namely the differential privacy system of Dwork, that has been improved in the same way as the claimed invention through the parameter release feature; and 3) one of ordinary skill in the art could have applied the known improvement technique of applying the differential privacy feature to the base differential privacy system of Aravamudan, and the results would have been predictable to one of ordinary skill in the art. Regarding Claim 9 Aravamudan in view of Dwork (“Aravamudan-Dwork”) discloses the system of claim 1, and Aravamudan further discloses wherein the distance represents one of an absolute distance or a relative distance between the noisy result and the computation result (Col. 
16:58-17:17, “As a non-limiting example, verifying first distance measure 132 against distance range 136 may include verifying first distance measure 132 is greater than minimum threshold e.g., a minimum distance Dmin [as an absolute or relative distance], and is less than a maximum threshold e.g., a maximum distance Dmax, from at least a pre-determined number M of private data elements of plurality of private data elements 116.”). Regarding Claim 10 Aravamudan-Dwork discloses the system of claim 1, and Aravamudan further discloses wherein the utility indicator value is binary (Col. 19:26-61, “With continued reference to FIG. 1, as another non-limiting example, maximum threshold Dmax and pre-determined number M of private data elements of plurality of private data elements may be determined based on an obfuscation risk tolerance level (i.e., obfuscation parameter 148).”, i.e., the “maximum threshold” can assume binary values—either it is above the threshold and false or 0, or it is below the threshold and true or 1). Regarding Claim 11 Aravamudan-Dwork discloses the system of claim 3, and Aravamudan further discloses wherein the utility indicator has a first value if the distance satisfies a threshold value and a second value otherwise (Col. 19:26-61, “With continued reference to FIG. 1, as another non-limiting example, maximum threshold Dmax -[utility indicator] and pre-determined number M of private data elements of plurality of private data elements may be determined based on an obfuscation risk tolerance level (i.e., obfuscation parameter 148).”, i.e., if the distance satisfies the threshold value of Dmax -as the utility indicator, the binary value is true or 1 and false or 0 otherwise). 
Regarding Claim 12 Aravamudan-Dwork discloses the system of claim 4, and Aravamudan further discloses wherein the noisy result is labeled as usable if the corresponding utility indicator has the first value and the noisy result is labeled as non-usable if the corresponding utility indicator has the second value (Col. 16:58-17:17, “…while Dmax [utility indicator] may ensure obfuscated data elements do not deviate too much from private data elements [and are thereby usable] thereby preserving the utility for further processing steps as described below.”). Regarding Claim 13 Aravamudan-Dwork discloses the system of claim 1, and Dwork further discloses further comprising: applying a privacy preserving mechanism to the utility indicator value before release (Fig. 6, ¶¶ [0066]-[0068], “Demonstrations [as a privacy preserving mechanism] may be provided by 643 to assist in selecting an appropriate parameter [utility indicator value]. For example, a demonstration may be given of how to choose a privacy parameter [the value for DMAX] associated with an amount of privacy loss that is acceptable to a privacy principal.”). Regarding the combination of Aravamudan and Dwork, the rationale to combine is the same as provided for claim 1 due to the overlapping subject matter of claims 1 and 13. Regarding Independent Claim 15 and Dependent Claims 16-20 With respect to independent claim 15 and dependent claims 16-20, a corresponding reasoning as given earlier for independent claim 8 and dependent claims 9-13 applies, mutatis mutandis, to the subject matter of claims 15-20. Therefore, claims 15-20 are rejected, for similar reasons, under the grounds set forth for claims 8-13. Conclusion Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a). 
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action. Any inquiry concerning this communication or earlier communications from the examiner should be directed to D'ARCY WINSTON STRAUB whose telephone number is (303)297-4405. The examiner can normally be reached Monday-Friday 9:00-5:00 Mountain Time. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, WILLIAM KORZUCH can be reached at (571)272-7589. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300. Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. 
For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000. /D'Arcy Winston Straub/Primary Examiner, Art Unit 2491
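The Curcio passage quoted in the rejection describes a distinct mechanic worth isolating: as more queries run against a dataset, each query's epsilon share shrinks, so later answers carry more noise and the total privacy loss stays bounded. A small sketch of that schedule, under assumed names and Curcio's example values (1.00, 0.50, 0.10):

```python
import math
import random

def laplace_noise(scale, rng):
    """Draw one zero-mean Laplace sample via the inverse CDF."""
    u = rng.random() - 0.5
    return -scale * math.copysign(1.0, u) * math.log(1.0 - 2.0 * abs(u))

def budgeted_answers(true_result, epsilons, sensitivity=1.0, seed=0):
    """Answer the same query under a declining epsilon schedule: a smaller
    epsilon means a larger noise scale, so each later answer is noisier."""
    rng = random.Random(seed)
    return [true_result + laplace_noise(sensitivity / eps, rng)
            for eps in epsilons]

# Noise scales here are 1.0, 2.0, and 10.0 respectively.
answers = budgeted_answers(100.0, epsilons=[1.00, 0.50, 0.10])
```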

Prosecution Timeline

May 31, 2024
Application Filed
Oct 16, 2025
Non-Final Rejection — §103
Jan 20, 2026
Response Filed
Feb 10, 2026
Final Rejection — §103
Apr 10, 2026
Response after Non-Final Action

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12591706
PROACTIVE DATA SECURITY USING FILE ACCESS PERMISSIONS
2y 5m to grant; granted Mar 31, 2026
Patent 12579304
PURPOSE-BASED PROCESSING BY PURPOSE-ACTION ASSOCIATION
2y 5m to grant; granted Mar 17, 2026
Patent 12566886
DYNAMIC PROGRAMMING SOLUTION FOR PRIVACY PROTECTION EVALUATION
2y 5m to grant; granted Mar 03, 2026
Patent 12566887
Multi-Tiered Data Security and Auditing System
2y 5m to grant; granted Mar 03, 2026
Patent 12561410
SYSTEM AND METHOD TO PROVIDE DUMMY DATA FOR SOURCE ATTRIBUTION FOR PROPRIETARY DATA TRANSMISSION
2y 5m to grant; granted Feb 24, 2026
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 3-4
Grant Probability: 77%
With Interview: 97% (+20.0%)
Median Time to Grant: 3y 1m
PTA Risk: Moderate
Based on 218 resolved cases by this examiner. Grant probability derived from career allow rate.
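The page does not state how these figures are computed; one plausible reading, consistent with the numbers shown, is a career allow rate plus an additive interview lift capped at 97%. The function below is a hypothetical reconstruction for illustration, not the vendor's actual model:

```python
def projections(granted, resolved, interview_lift=0.20, cap=0.97):
    """Career allow rate, plus an additive interview lift with a cap.
    Figures from the page: 168 granted / 218 resolved."""
    base = granted / resolved
    return base, min(base + interview_lift, cap)

base, with_interview = projections(168, 218)
print(round(base * 100))            # 77, matching the page's 77%
print(round(with_interview * 100))  # 97, matching "With Interview"
```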
