Prosecution Insights
Last updated: April 19, 2026
Application No. 18/066,939

Differentially Private Synthetic Data

Final Rejection — §103, §112
Filed: Dec 15, 2022
Examiner: MAI, KEVIN S
Art Unit: 2499
Tech Center: 2400 — Computer Networks
Assignee: Oracle International Corporation
OA Round: 4 (Final)
Grant Probability: 29% (At Risk)
Expected OA Rounds: 5-6
Time to Grant: 5y 3m
Grant Probability With Interview: 55%

Examiner Intelligence

Career Allow Rate: 29% (125 granted / 428 resolved; -28.8% vs. TC avg). Grants only 29% of cases.
Interview Lift: strong at +25.5% (allowance rate with vs. without an interview, among resolved cases with an interview).
Typical Timeline: 5y 3m average prosecution; 39 applications currently pending.
Career History: 467 total applications across all art units.

Statute-Specific Performance

§101: 16.5% (-23.5% vs. TC avg)
§103: 52.5% (+12.5% vs. TC avg)
§102: 7.4% (-32.6% vs. TC avg)
§112: 21.8% (-18.2% vs. TC avg)
TC averages are estimates • Based on career data from 428 resolved cases

Office Action

Rejection grounds: §103, §112
DETAILED ACTION

This Office Action has been issued in response to Applicant's Amendment filed November 12, 2025. Claims 1, 8, and 15 have been amended. Claims 1, 2, 4-9, 11-16, and 18-20 have been examined and are pending.

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Response to Arguments

Applicant's arguments filed November 12, 2025 have been fully considered but they are not persuasive. Applicant argues the references do not disclose estimating cost using the real data set. Paragraph [0045] of Pese discloses that the minimum OEM privacy guarantee is subject to data accuracy requirements and is subtracted at each query from the privacy budget. This privacy guarantee is subtracted from the privacy budget and is accordingly the cost. Paragraph [0046] of Pese discloses that εOEM represents the minimum OEM privacy guarantee, which is subject to a sensor accuracy requirement provided by the OEM. The privacy guarantee is subject to the sensor accuracy requirement, and the sensor accuracy is representative of the real data set.

Claim Rejections - 35 USC § 112

The following is a quotation of the first paragraph of 35 U.S.C. 112(a):

(a) IN GENERAL.—The specification shall contain a written description of the invention, and of the manner and process of making and using it, in such full, clear, concise, and exact terms as to enable any person skilled in the art to which it pertains, or with which it is most nearly connected, to make and use the same, and shall set forth the best mode contemplated by the inventor or joint inventor of carrying out the invention.

The following is a quotation of the first paragraph of pre-AIA 35 U.S.C. 112:

The specification shall contain a written description of the invention, and of the manner and process of making and using it, in such full, clear, concise, and exact terms as to enable any person skilled in the art to which it pertains, or with which it is most nearly connected, to make and use the same, and shall set forth the best mode contemplated by the inventor of carrying out his invention.

Claims 1, 2, 4-9, 11-16, and 18-20 are rejected under 35 U.S.C. 112(a) or 35 U.S.C. 112 (pre-AIA), first paragraph, as failing to comply with the written description requirement. The claims contain subject matter which was not described in the specification in such a way as to reasonably convey to one skilled in the relevant art that the inventor or a joint inventor, or for applications subject to pre-AIA 35 U.S.C. 112, the inventor(s), at the time the application was filed, had possession of the claimed invention. The claims recite "estimating, using the real data set, a privacy cost". Examiner was unable to find discussion of using the real data set to estimate the privacy cost.

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:

1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.

Claims 1, 2, 4, 5, 7-9, 11, 12, 14-16, 18, and 19 are rejected under 35 U.S.C. 103 as being unpatentable over US 2023/0137378 to LaTerza et al. (hereinafter "LaTerza") and further in view of US Pub. No. 2020/0175193 to Pese et al. (hereinafter "Pese").

As to Claim 1, LaTerza discloses a computer-implemented method, comprising: training a generative model using a real data set, the real data set comprising a plurality of real data records and information identifying one or more sources of the plurality of real data records (Paragraph [0026] of LaTerza discloses the privacy preserving data generation model 112 may be an ML model trained for generating private synthetic training data from non-private true training data. Paragraph [0039] of LaTerza discloses the true training data 210 may include one or more sets of labeled training data that has been generated, prepared and/or reviewed for training an ML model. As discussed above, because the labeled training data may include private information, it may not be prudent to use the training data directly); [estimating, using the real data set, a privacy cost of generating synthetic data samples using the trained generative model]; generating a synthetic data set according to the trained generative model, wherein the generated synthetic data set comprises a number of samples of computer-generated data records different from the real data records (Paragraph [0026] of LaTerza discloses the privacy preserving data generation model 112 may be an ML model trained for generating private synthetic training data from non-private true training data); wherein [the number of samples is determined according to a privacy budget and the estimated privacy cost] to ensure differential privacy of the data in the real data set, and wherein the generated synthetic data set excludes the information identifying one or more sources of the real data records (Paragraph [0057] of LaTerza discloses to ensure that the synthetic training data generated by private synthetic training dataset preserves privacy at a required level, method 400 may proceed to perform a leakage analysis on the generated synthetic training data, at 440. Paragraph [0058] of LaTerza discloses the leakage analysis may involve analyzing the synthetic training data to ensure that the percentage of private data included in the synthetic training data does not exceed a given leakage threshold).
LaTerza does not explicitly disclose estimating, using the real data set, a privacy cost of generating synthetic data samples using the trained generative model, or that the number of samples is determined according to a privacy budget and the estimated privacy cost. However, Pese discloses this. Paragraph [0047] of Pese discloses the application privacy budget calculation module 96 calculates an application-specific privacy budget based on the privacy factor (PRF) and the trustworthiness score (TS) of the received data and generates application privacy budget data 110 based thereon. Paragraph [0050] of Pese discloses the application samples calculation module 98 receives as input the OEM privacy budget data 108 and the application privacy budget data 110. The application samples calculation module 98 calculates the application samples and generates application samples data 112 based thereon. The application samples is the number of data points/samples that the third-party application is allowed to retrieve for the selected sensor. It is calculated using the number of allowed OEM data points. Paragraph [0045] of Pese discloses the minimum OEM privacy guarantee is subject to data accuracy requirements and is subtracted at each query from the privacy budget. Paragraph [0046] of Pese discloses that εOEM represents the minimum OEM privacy guarantee, which is subject to a sensor accuracy requirement provided by the OEM.

It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the private data system as disclosed by LaTerza with limiting the number of samples as disclosed by Pese. One of ordinary skill in the art would have been motivated to combine in order to apply a known technique to a known device. LaTerza and Pese are both directed toward private data systems, and as such it would be obvious to use the techniques of one in the other. LaTerza's privacy would be improved by limiting samples as disclosed by Pese.

As to Claim 2, LaTerza-Pese discloses the method of claim 1, further comprising: training a differentially-private machine learning model according to the sampled synthetic data (Paragraph [0058] of LaTerza discloses when it is determined that the synthetic training data meets the leakage threshold, method 400 may proceed to provide the synthetic training data for training the language classifier model, at 455).

As to Claim 4, LaTerza-Pese discloses the computer-implemented method of claim 1, wherein the plurality of real data records are usable to train the generative model to make inferences, and wherein the plurality of computer-generated data records are usable to train another model to make the inferences (Paragraph [0026] of LaTerza discloses the privacy preserving data generation model 112 may be an ML model trained for generating private synthetic training data from non-private true training data. Paragraph [0058] of LaTerza discloses when it is determined that the synthetic training data meets the leakage threshold, method 400 may proceed to provide the synthetic training data for training the language classifier model, at 455).
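The disputed limitation in claim 1 — deriving a sample count from a privacy budget and an estimated per-sample privacy cost — is easiest to see in code. The sketch below is a minimal illustration under an assumed linear cost model; the function name and example values are ours, not from the application, LaTerza, or Pese.

```python
# Minimal sketch, assuming a linear cost model in which each generated
# sample spends a fixed, pre-estimated slice of the total privacy budget.
# Names and values are illustrative only (not from LaTerza or Pese).

def max_synthetic_samples(privacy_budget: float,
                          per_sample_cost: float) -> int:
    """Largest sample count whose cumulative estimated privacy cost
    stays within the available budget (epsilon)."""
    if per_sample_cost <= 0:
        raise ValueError("per-sample privacy cost must be positive")
    return int(privacy_budget / per_sample_cost)

# Example: a budget of epsilon = 1.0 and an estimated cost of
# 0.01 epsilon per sample cap generation at 100 synthetic samples.
print(max_synthetic_samples(privacy_budget=1.0, per_sample_cost=0.01))  # 100
```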
As to Claim 5, LaTerza-Pese discloses the computer-implemented method of claim 1, further comprising: estimating a training sensitivity for the generative model according to the real data set; wherein sampling the generated synthetic data set to ensure differential privacy of the data in the real data set is performed according to the estimated training sensitivity (Paragraph [0057] of LaTerza discloses to ensure that the synthetic training data generated by private synthetic training dataset preserves privacy at a required level, method 400 may proceed to perform a leakage analysis on the generated synthetic training data, at 440. Paragraph [0058] of LaTerza discloses the leakage analysis may involve analyzing the synthetic training data to ensure that the percentage of private data included in the synthetic training data does not exceed a given leakage threshold. Paragraph [0047] of Pese discloses the application privacy budget calculation module 96 calculates an application-specific privacy budget based on the privacy factor (PRF) and the trustworthiness score (TS) of the received data and generates application privacy budget data 110 based thereon). Examiner recites the same rationale to combine used for claim 1.

As to Claim 7, LaTerza-Pese discloses the computer-implemented method of claim 5, wherein a number of samples of the sampled generated synthetic data set is determined according to a specified amount of differential privacy and the estimated training sensitivity (Paragraph [0050] of Pese discloses the application samples calculation module 98 receives as input the OEM privacy budget data 108 and the application privacy budget data 110. The application samples calculation module 98 calculates the application samples and generates application samples data 112 based thereon. The application samples is the number of data points/samples that the third-party application is allowed to retrieve for the selected sensor. It is calculated using the number of allowed OEM data points. Paragraph [0047] of Pese discloses the application privacy budget calculation module 96 calculates an application-specific privacy budget based on the privacy factor (PRF) and the trustworthiness score (TS) of the received data and generates application privacy budget data 110 based thereon). Examiner recites the same rationale to combine used for claim 1.

As to Claim 8, LaTerza discloses one or more non-transitory computer-accessible storage media storing program instructions that when executed on or across one or more computing devices cause the one or more computing devices to implement: generating a differentially private data set, comprising: training a machine learning model to produce a generative model according to a real data set, the real data set comprising a plurality of real data records and information identifying one or more sources of the real data (Paragraph [0026] of LaTerza discloses the privacy preserving data generation model 112 may be an ML model trained for generating private synthetic training data from non-private true training data. Paragraph [0039] of LaTerza discloses the true training data 210 may include one or more sets of labeled training data that has been generated, prepared and/or reviewed for training an ML model. As discussed above, because the labeled training data may include private information, it may not be prudent to use the training data directly); [estimating, using the real data set, a privacy cost of generating synthetic data samples using the trained generative model]; generating synthetic data according to the trained generative model, wherein the generated synthetic data comprises a number of samples of synthetic data records different from the real data records (Paragraph [0026] of LaTerza discloses the privacy preserving data generation model 112 may be an ML model trained for generating private synthetic training data from non-private true training data); and wherein [the number of samples is determined according to a privacy budget and the estimated privacy cost] to ensure differential privacy of the data in the real data set, and wherein the generated synthetic data excludes the information identifying one or more sources of the real data records (Paragraph [0057] of LaTerza discloses to ensure that the synthetic training data generated by private synthetic training dataset preserves privacy at a required level, method 400 may proceed to perform a leakage analysis on the generated synthetic training data, at 440. Paragraph [0058] of LaTerza discloses the leakage analysis may involve analyzing the synthetic training data to ensure that the percentage of private data included in the synthetic training data does not exceed a given leakage threshold).

LaTerza does not explicitly disclose estimating a sensitivity using the real data set, the estimated sensitivity comprising an estimated privacy cost of generating synthetic data samples using the trained generative model, or that the number of samples is determined according to the estimated sensitivity. However, Pese discloses this. Paragraph [0047] of Pese discloses the application privacy budget calculation module 96 calculates an application-specific privacy budget based on the privacy factor (PRF) and the trustworthiness score (TS) of the received data and generates application privacy budget data 110 based thereon. Paragraph [0050] of Pese discloses the application samples calculation module 98 receives as input the OEM privacy budget data 108 and the application privacy budget data 110. The application samples calculation module 98 calculates the application samples and generates application samples data 112 based thereon. The application samples is the number of data points/samples that the third-party application is allowed to retrieve for the selected sensor. It is calculated using the number of allowed OEM data points. Paragraph [0045] of Pese discloses the minimum OEM privacy guarantee is subject to data accuracy requirements and is subtracted at each query from the privacy budget. Paragraph [0046] of Pese discloses that εOEM represents the minimum OEM privacy guarantee, which is subject to a sensor accuracy requirement provided by the OEM. Examiner recites the same rationale to combine used for claim 1.

As to Claim 9, LaTerza-Pese discloses the one or more non-transitory computer-accessible storage media of claim 8, further comprising: training a differentially-private machine learning model according to the sampled synthetic data (Paragraph [0058] of LaTerza discloses when it is determined that the synthetic training data meets the leakage threshold, method 400 may proceed to provide the synthetic training data for training the language classifier model, at 455).
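Claims 5 and 7 (and their counterparts in claims 12, 14, and 19) tie the sample count to a specified amount of differential privacy and an estimated training sensitivity. For background, the standard Gaussian-mechanism calibration from the differential privacy literature relates exactly these quantities. The sketch below applies that textbook formula; it is illustrative background, not the application's disclosed method.

```python
# Textbook Gaussian-mechanism calibration: the noise scale needed for an
# (epsilon, delta)-DP release grows with the estimated sensitivity.
# Standard DP background (Dwork & Roth), not the application's method.
import math

def gaussian_noise_scale(sensitivity: float, epsilon: float,
                         delta: float) -> float:
    """sigma = sensitivity * sqrt(2 * ln(1.25 / delta)) / epsilon"""
    return sensitivity * math.sqrt(2 * math.log(1.25 / delta)) / epsilon

print(gaussian_noise_scale(sensitivity=1.0, epsilon=1.0, delta=1e-5))
# ~4.84: a larger estimated sensitivity or a tighter epsilon needs more noise.
```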
As to Claim 11, LaTerza-Pese discloses the one or more non-transitory computer-accessible storage media of claim 8, wherein the plurality of real data records are usable to train the generative model to make inferences, and wherein the plurality of computer-generated data records are usable to train another model to make the inferences (Paragraph [0026] of LaTerza discloses the privacy preserving data generation model 112 may be an ML model trained for generating private synthetic training data from non-private true training data. Paragraph [0058] of LaTerza discloses when it is determined that the synthetic training data meets the leakage threshold, method 400 may proceed to provide the synthetic training data for training the language classifier model, at 455).

As to Claim 12, LaTerza-Pese discloses the one or more non-transitory computer-accessible storage media of claim 8, further comprising: estimating a training sensitivity for the generative model according to the real data set; wherein sampling the generated synthetic data set to ensure differential privacy of the data in the real data set is performed according to the estimated training sensitivity (Paragraph [0057] of LaTerza discloses to ensure that the synthetic training data generated by private synthetic training dataset preserves privacy at a required level, method 400 may proceed to perform a leakage analysis on the generated synthetic training data, at 440. Paragraph [0058] of LaTerza discloses the leakage analysis may involve analyzing the synthetic training data to ensure that the percentage of private data included in the synthetic training data does not exceed a given leakage threshold. Paragraph [0047] of Pese discloses the application privacy budget calculation module 96 calculates an application-specific privacy budget based on the privacy factor (PRF) and the trustworthiness score (TS) of the received data and generates application privacy budget data 110 based thereon). Examiner recites the same rationale to combine used for claim 1.

As to Claim 14, LaTerza-Pese discloses the one or more non-transitory computer-accessible storage media of claim 12, wherein a number of samples of the sampled generated synthetic data set is determined according to a specified amount of differential privacy and the estimated training sensitivity (Paragraph [0050] of Pese discloses the application samples calculation module 98 receives as input the OEM privacy budget data 108 and the application privacy budget data 110. The application samples calculation module 98 calculates the application samples and generates application samples data 112 based thereon. The application samples is the number of data points/samples that the third-party application is allowed to retrieve for the selected sensor. It is calculated using the number of allowed OEM data points. Paragraph [0047] of Pese discloses the application privacy budget calculation module 96 calculates an application-specific privacy budget based on the privacy factor (PRF) and the trustworthiness score (TS) of the received data and generates application privacy budget data 110 based thereon). Examiner recites the same rationale to combine used for claim 1.
As to Claim 15, LaTerza discloses a system, comprising: one or more processors; and a memory storing program instructions that when executed by the one or more processors cause the one or more processors to implement a differentially private data set generator, configured to: train a machine learning model to produce a generative model according to a real data set, the real data set comprising a plurality of real data records and information identifying one or more sources of the real data (Paragraph [0026] of LaTerza discloses the privacy preserving data generation model 112 may be an ML model trained for generating private synthetic training data from non-private true training data. Paragraph [0039] of LaTerza discloses the true training data 210 may include one or more sets of labeled training data that has been generated, prepared and/or reviewed for training an ML model. As discussed above, because the labeled training data may include private information, it may not be prudent to use the training data directly); [estimate, using the real data set, a privacy cost of generating synthetic data samples using the trained generative model]; and generate synthetic data according to the trained generative model, wherein the generated synthetic data comprises a number of samples of synthetic data records different from the real data records (Paragraph [0026] of LaTerza discloses the privacy preserving data generation model 112 may be an ML model trained for generating private synthetic training data from non-private true training data); wherein [the number of samples is determined according to a privacy budget and the estimated privacy cost] to ensure differential privacy of the data in the real data set, and wherein the generated synthetic data excludes the information identifying one or more sources of the real data (Paragraph [0057] of LaTerza discloses to ensure that the synthetic training data generated by private synthetic training dataset preserves privacy at a required level, method 400 may proceed to perform a leakage analysis on the generated synthetic training data, at 440. Paragraph [0058] of LaTerza discloses the leakage analysis may involve analyzing the synthetic training data to ensure that the percentage of private data included in the synthetic training data does not exceed a given leakage threshold).

LaTerza does not explicitly disclose estimating a sensitivity using the real data set, the estimated sensitivity comprising an estimated privacy cost of generating synthetic data samples using the trained generative model, or that the number of samples is determined according to the estimated sensitivity. However, Pese discloses this. Paragraph [0047] of Pese discloses the application privacy budget calculation module 96 calculates an application-specific privacy budget based on the privacy factor (PRF) and the trustworthiness score (TS) of the received data and generates application privacy budget data 110 based thereon. Paragraph [0050] of Pese discloses the application samples calculation module 98 receives as input the OEM privacy budget data 108 and the application privacy budget data 110. The application samples calculation module 98 calculates the application samples and generates application samples data 112 based thereon. The application samples is the number of data points/samples that the third-party application is allowed to retrieve for the selected sensor. It is calculated using the number of allowed OEM data points. Paragraph [0045] of Pese discloses the minimum OEM privacy guarantee is subject to data accuracy requirements and is subtracted at each query from the privacy budget. Paragraph [0046] of Pese discloses that εOEM represents the minimum OEM privacy guarantee, which is subject to a sensor accuracy requirement provided by the OEM. Examiner recites the same rationale to combine used for claim 1.

As to Claim 16, LaTerza-Pese discloses the system of claim 15, wherein the differentially private data set generator is configured to: train a differentially-private machine learning model according to the sampled synthetic data (Paragraph [0058] of LaTerza discloses when it is determined that the synthetic training data meets the leakage threshold, method 400 may proceed to provide the synthetic training data for training the language classifier model, at 455).

As to Claim 18, LaTerza-Pese discloses the system of claim 15, wherein the plurality of real data records are usable to train the generative model to make inferences, and wherein the plurality of computer-generated data records are usable to train another model to make the inferences (Paragraph [0026] of LaTerza discloses the privacy preserving data generation model 112 may be an ML model trained for generating private synthetic training data from non-private true training data. Paragraph [0058] of LaTerza discloses when it is determined that the synthetic training data meets the leakage threshold, method 400 may proceed to provide the synthetic training data for training the language classifier model, at 455).

As to Claim 19, LaTerza-Pese discloses the system of claim 15, wherein the differentially private data set generator is configured to: estimate a training sensitivity for the generative model according to the real data set; wherein sampling the generated synthetic data set to ensure differential privacy of the data in the real data set is performed according to the estimated training sensitivity (Paragraph [0057] of LaTerza discloses to ensure that the synthetic training data generated by private synthetic training dataset preserves privacy at a required level, method 400 may proceed to perform a leakage analysis on the generated synthetic training data, at 440. Paragraph [0058] of LaTerza discloses the leakage analysis may involve analyzing the synthetic training data to ensure that the percentage of private data included in the synthetic training data does not exceed a given leakage threshold. Paragraph [0047] of Pese discloses the application privacy budget calculation module 96 calculates an application-specific privacy budget based on the privacy factor (PRF) and the trustworthiness score (TS) of the received data and generates application privacy budget data 110 based thereon). Examiner recites the same rationale to combine used for claim 1.

Claims 6, 13, and 20 are rejected under 35 U.S.C. 103 as being unpatentable over LaTerza-Pese and further in view of US Pub. No. 2021/0216902 to Sutcher-Shepard et al. (hereinafter "Sutcher").

As to Claim 6, LaTerza-Pese discloses the computer-implemented method of claim 5. LaTerza-Pese does not explicitly disclose wherein the estimating is based at least in part on a Hessian of a loss function of the real data set. However, Sutcher discloses this. Paragraph [0045] of Sutcher discloses a differentially private federated learning process can incorporate two assumptions. The Hessian of the loss function at the local minima to which the loss converges can be sufficiently similar.
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the private data creating system as disclosed by LaTerza with using the Hessian as disclosed by Sutcher. One of ordinary skill in the art would have been motivated to combine in order to apply a known technique to a known device. LaTerza and Sutcher are both directed toward differential privacy systems, and as such it would be obvious to use the techniques of one in the other.

As to Claim 13, LaTerza-Pese discloses the one or more non-transitory computer-accessible storage media of claim 12. LaTerza-Pese does not explicitly disclose wherein the estimating is based at least in part on a Hessian of a loss function of the real data set. However, Sutcher discloses this. Paragraph [0045] of Sutcher discloses a differentially private federated learning process can incorporate two assumptions. The Hessian of the loss function at the local minima to which the loss converges can be sufficiently similar. Examiner recites the same rationale to combine used for claim 6.

As to Claim 20, LaTerza-Pese discloses the system of claim 19. LaTerza-Pese does not explicitly disclose wherein the estimating is based at least in part on a Hessian of a loss function of the real data set. However, Sutcher discloses this. Paragraph [0045] of Sutcher discloses a differentially private federated learning process can incorporate two assumptions. The Hessian of the loss function at the local minima to which the loss converges can be sufficiently similar. Examiner recites the same rationale to combine used for claim 6.

Conclusion

Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).

A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to Kevin S Mai, whose telephone number is (571) 270-5001. The examiner can normally be reached Monday to Friday, 9 AM to 5 PM.

Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Philip Chea, can be reached at 571-272-3951. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users.
To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/KEVIN S MAI/
Primary Examiner, Art Unit 2499
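Claims 6, 13, and 20 recite estimating sensitivity "based at least in part on a Hessian of a loss function of the real data set." One plausible illustration, and only that, is sketched below: compute the loss Hessian over the real data numerically and treat its largest eigenvalue as a curvature-based sensitivity proxy. The finite-difference scheme and the eigenvalue proxy are our assumptions for this sketch, not the method disclosed by the application or by Sutcher.

```python
# Illustrative sketch only: a curvature-based sensitivity proxy from the
# Hessian of a loss over the real data. The finite-difference Hessian and
# the largest-eigenvalue proxy are assumptions for illustration.
import numpy as np

def mse_loss(w: np.ndarray, X: np.ndarray, y: np.ndarray) -> float:
    """Mean-squared-error loss of a linear model on the real data set."""
    return float(np.mean((X @ w - y) ** 2))

def numerical_hessian(f, w: np.ndarray, eps: float = 1e-4) -> np.ndarray:
    """Central-difference Hessian of f at w."""
    d = w.size
    H = np.zeros((d, d))
    for i in range(d):
        for j in range(d):
            w_pp = w.copy(); w_pp[i] += eps; w_pp[j] += eps
            w_pm = w.copy(); w_pm[i] += eps; w_pm[j] -= eps
            w_mp = w.copy(); w_mp[i] -= eps; w_mp[j] += eps
            w_mm = w.copy(); w_mm[i] -= eps; w_mm[j] -= eps
            H[i, j] = (f(w_pp) - f(w_pm) - f(w_mp) + f(w_mm)) / (4 * eps**2)
    return H

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 3))   # stand-in for the real data set
y = rng.normal(size=100)
w = np.zeros(3)

H = numerical_hessian(lambda v: mse_loss(v, X, y), w)
# Sharper loss landscapes imply a model more responsive to individual
# records, i.e., a larger estimated sensitivity (assumed proxy).
sensitivity_proxy = float(np.linalg.eigvalsh(H).max())
print(sensitivity_proxy)
```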

Prosecution Timeline

Dec 15, 2022: Application Filed
Aug 28, 2024: Non-Final Rejection — §103, §112
Dec 03, 2024: Response Filed
Jan 07, 2025: Final Rejection — §103, §112
Mar 13, 2025: Response after Non-Final Action
Apr 14, 2025: Request for Continued Examination
Apr 22, 2025: Response after Non-Final Action
Aug 09, 2025: Non-Final Rejection — §103, §112
Nov 12, 2025: Response Filed
Mar 03, 2026: Final Rejection — §103, §112 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12506731: Conference Data Sharing Method and Conference Data Sharing System Capable of Communicating with Remote Conference Members. Granted Dec 23, 2025 (2y 5m to grant).
Patent 12413610: Assessing Security of Service Provider Computing Systems. Granted Sep 09, 2025 (2y 5m to grant).
Patent 12406064: Pre-Boot Context-Based Security Mitigation. Granted Sep 02, 2025 (2y 5m to grant).
Patent 12363200: Providing Event Streams and Analytics for Activity on Web Sites. Granted Jul 15, 2025 (2y 5m to grant).
Patent 12204570: System and Method for Providing Message Content Based Routing. Granted Jan 21, 2025 (2y 5m to grant).
Study what changed to get past this examiner. Based on the 5 most recent grants.

AI Strategy Recommendation

Get an AI-powered prosecution strategy using examiner precedents, rejection analysis, and claim mapping.

Prosecution Projections

Expected OA Rounds: 5-6
Grant Probability: 29%
With Interview: 55% (+25.5%)
Median Time to Grant: 5y 3m
PTA Risk: High
Based on 428 resolved cases by this examiner. Grant probability derived from career allow rate.
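The with-interview figure is consistent with simple additive arithmetic: the career allow rate plus the observed interview lift, rounded to the nearest percent. The additive model below is an assumption on our part, not a documented methodology.

```python
# Assumed additive model: career allow rate plus observed interview lift.
base_grant_probability = 0.29   # career allow rate
interview_lift = 0.255          # lift among resolved cases with interview

with_interview = base_grant_probability + interview_lift
print(f"{with_interview:.1%}")  # 54.5%, shown on the dashboard as 55%
```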
