Prosecution Insights
Last updated: April 19, 2026
Application No. 18/300,093

SYSTEMS AND METHODS FOR OPTIMIZING A MACHINE LEARNING MODEL BASED ON A PARITY METRIC

Non-Final OA: §101, §102, §103
Filed: Apr 13, 2023
Examiner: GRUSZKA, DANIEL PATRICK
Art Unit: 2121
Tech Center: 2100 — Computer Architecture & Software
Assignee: Arize AI Inc.
OA Round: 1 (Non-Final)
Grant Probability: Favorable
OA Rounds: 1-2
To Grant: 3y 3m

Examiner Intelligence

Career Allow Rate: 0% (0 granted / 0 resolved; -55.0% vs TC avg)
Interview Lift: +0.0% (minimal lift; based on resolved cases with interview)
Avg Prosecution: 3y 3m (typical timeline)
Total Applications: 32 across all art units (32 currently pending)

Statute-Specific Performance

§101: 38.3% (-1.7% vs TC avg)
§103: 42.3% (+2.3% vs TC avg)
§102: 12.0% (-28.0% vs TC avg)
§112: 7.4% (-32.6% vs TC avg)

Black line = Tech Center average estimate • Based on career data from 0 resolved cases

Office Action

Rejections: §101, §102, §103
Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Claim Rejections - 35 USC § 101

35 U.S.C. 101 reads as follows: Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.

Claims 1-22 are rejected under 35 U.S.C. 101 because the claimed invention is directed to a judicial exception (i.e., a law of nature, a natural phenomenon, or an abstract idea) without significantly more.

101 Subject Matter Eligibility Analysis

Step 1: Claims 1-22 are within the four statutory categories (a process, machine, manufacture, or composition of matter). Claims 1-11 describe a machine, and claims 12-22 describe a process.

With respect to claim 1:

Step 2A Prong 1: The claim recites an abstract idea enumerated in the 2019 PEG:

determine a sensitive bias metric for the slice based on a sensitive group, (This is an abstract idea of a "Mental Process." The "determine" step, under its broadest reasonable interpretation, covers concepts that can be practically performed in the human mind. The determining could be done manually by an individual.)

determine a base metric for the slice based on a base group, (This is an abstract idea of a "Mental Process." The "determine" step, under its broadest reasonable interpretation, covers concepts that can be practically performed in the human mind. The determining could be done manually by an individual.)

determine a parity metric for the slice based on a ratio of the sensitive bias metric and the base metric, and (This is an abstract idea of a "Mental Process." The "determine" step, under its broadest reasonable interpretation, covers concepts that can be practically performed in the human mind. The determining could be done manually by an individual.)

Step 2A Prong 2: The judicial exception is not integrated into a practical application. Additional elements:

a machine learning model that generates predictions based on at least one input feature vector, each input feature vector having one or more vector values; and (This amounts to no more than mere instructions to "apply" the exception using a generic computer component.)

an optimization module with a processor and an associated memory, the optimization module being configured to: (This amounts to no more than mere instructions to "apply" the exception using a generic computer component.)

create at least one slice of the predictions based on at least one vector value, (This limitation amounts to adding insignificant extra-solution activity to the judicial exception.)

optimize the machine learning model based on the parity bias metric. (This amounts to no more than mere instructions to "apply" the exception using a generic computer component.)

Step 2B: The claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception. The additional elements "a machine learning model…", "an optimization module", and "optimize the machine learning model…" are recited at a generic level and represent generic computer components used to apply the abstract idea. Mere instructions to apply an exception cannot provide an inventive concept (MPEP 2106.05(f)). The additional element "create at least one slice…" adds insignificant extra-solution activity to the judicial exception and cannot provide an inventive concept. Storing and retrieving information in memory is a well-understood, routine, and conventional activity of data transmission (MPEP 2106.05(d)(II)(iv)). When considered in combination, these additional elements represent insignificant extra-solution activity and mere instructions to apply an exception, which do not provide an inventive concept. Therefore, claim 1 is ineligible.
With respect to claim 2: Step 2A Prong 1: Claim 2, which incorporates the rejection of claim 1, recites an additional abstract idea: the parity metric is Recall parity. (This is an abstract idea of a "mathematical concept"; the recited "Recall parity" represents a mathematical operation that would fall under the "mathematical concepts" grouping.) Step 2A Prong 2: Claim 2 does not recite any additional elements and thus cannot be integrated into a practical application. Step 2B: Claim 2 does not recite an additional element. Therefore, claim 2 is ineligible.

With respect to claim 3: Step 2A Prong 1: Claim 3, which incorporates the rejection of claim 1, recites an additional abstract idea: the parity metric is False Positive Rate (FPR) parity. (This is an abstract idea of a "mathematical concept"; the recited "False Positive Rate (FPR) parity" represents a mathematical operation that would fall under the "mathematical concepts" grouping.) Step 2A Prong 2: Claim 3 does not recite any additional elements and thus cannot be integrated into a practical application. Step 2B: Claim 3 does not recite an additional element. Therefore, claim 3 is ineligible.

With respect to claim 4: Step 2A Prong 1: Claim 4, which incorporates the rejection of claim 1, recites an additional abstract idea: the parity metric is Disparate Impact (DI). (This is an abstract idea of a "mathematical concept"; the recited "Disparate Impact (DI)" represents a mathematical operation that would fall under the "mathematical concepts" grouping.) Step 2A Prong 2: Claim 4 does not recite any additional elements and thus cannot be integrated into a practical application. Step 2B: Claim 4 does not recite an additional element. Therefore, claim 4 is ineligible.

With respect to claim 5: Step 2A Prong 1: Claim 5, which incorporates the rejection of claim 1, recites an additional abstract idea: the parity metric is False Negative Rate (FNR) parity. (This is an abstract idea of a "mathematical concept"; the recited "False Negative Rate (FNR) parity" represents a mathematical operation that would fall under the "mathematical concepts" grouping.) Step 2A Prong 2: Claim 5 does not recite any additional elements and thus cannot be integrated into a practical application. Step 2B: Claim 5 does not recite an additional element. Therefore, claim 5 is ineligible.

With respect to claim 6: Step 2A Prong 1: Claim 6, which incorporates the rejection of claim 1, recites an additional abstract idea: the parity metric is False Positive/Group Size (FP/GS) parity. (This is an abstract idea of a "mathematical concept"; the recited "False Positive/Group Size (FP/GS) parity" represents a mathematical operation that would fall under the "mathematical concepts" grouping.) Step 2A Prong 2: Claim 6 does not recite any additional elements and thus cannot be integrated into a practical application. Step 2B: Claim 6 does not recite an additional element. Therefore, claim 6 is ineligible.

With respect to claim 7: Step 2A Prong 1: Claim 7, which incorporates the rejection of claim 1, recites an additional abstract idea: the parity metric is False Negative/Group Size (FN/GS) parity. (This is an abstract idea of a "mathematical concept"; the recited "False Negative/Group Size (FN/GS) parity" represents a mathematical operation that would fall under the "mathematical concepts" grouping.) Step 2A Prong 2: Claim 7 does not recite any additional elements and thus cannot be integrated into a practical application. Step 2B: Claim 7 does not recite an additional element. Therefore, claim 7 is ineligible.

With respect to claim 8: Step 2A Prong 1: Claim 8, which incorporates the rejection of claim 1, recites an additional abstract idea: the parity metric is Accuracy parity. (This is an abstract idea of a "mathematical concept"; the recited "Accuracy parity" represents a mathematical operation that would fall under the "mathematical concepts" grouping.) Step 2A Prong 2: Claim 8 does not recite any additional elements and thus cannot be integrated into a practical application. Step 2B: Claim 8 does not recite an additional element. Therefore, claim 8 is ineligible.

With respect to claim 9: Step 2A Prong 1: Claim 9, which incorporates the rejection of claim 1, recites an additional abstract idea: the parity metric is Proportional parity. (This is an abstract idea of a "mathematical concept"; the recited "Proportional parity" represents a mathematical operation that would fall under the "mathematical concepts" grouping.) Step 2A Prong 2: Claim 9 does not recite any additional elements and thus cannot be integrated into a practical application. Step 2B: Claim 9 does not recite an additional element. Therefore, claim 9 is ineligible.

With respect to claim 10: Step 2A Prong 1: Claim 10, which incorporates the rejection of claim 1, recites an additional abstract idea: the parity metric is False Omission Rate (FOR) parity. (This is an abstract idea of a "mathematical concept"; the recited "False Omission Rate (FOR) parity" represents a mathematical operation that would fall under the "mathematical concepts" grouping.) Step 2A Prong 2: Claim 10 does not recite any additional elements and thus cannot be integrated into a practical application. Step 2B: Claim 10 does not recite an additional element. Therefore, claim 10 is ineligible.

With respect to claim 11: Step 2A Prong 1: Claim 11, which incorporates the rejection of claim 1, recites an additional abstract idea: the parity metric is False Discovery Rate (FDR) parity. (This is an abstract idea of a "mathematical concept"; the recited "False Discovery Rate (FDR) parity" represents a mathematical operation that would fall under the "mathematical concepts" grouping.) Step 2A Prong 2: Claim 11 does not recite any additional elements and thus cannot be integrated into a practical application. Step 2B: Claim 11 does not recite an additional element.
Therefore, claim 11 is ineligible.

With respect to claim 12: The claim recites limitations similar to those of claim 1. The same subject matter eligibility analysis that was applied to claim 1, as described above, is equally applicable to claim 12. Therefore, claim 12 is ineligible.

With respect to claim 13: The claim recites limitations similar to those of claim 2. The same subject matter eligibility analysis that was applied to claim 2, as described above, is equally applicable to claim 13. Therefore, claim 13 is ineligible.

With respect to claim 14: The claim recites limitations similar to those of claim 3. The same subject matter eligibility analysis that was applied to claim 3, as described above, is equally applicable to claim 14. Therefore, claim 14 is ineligible.

With respect to claim 15: The claim recites limitations similar to those of claim 4. The same subject matter eligibility analysis that was applied to claim 4, as described above, is equally applicable to claim 15. Therefore, claim 15 is ineligible.

With respect to claim 16: The claim recites limitations similar to those of claim 5. The same subject matter eligibility analysis that was applied to claim 5, as described above, is equally applicable to claim 16. Therefore, claim 16 is ineligible.

With respect to claim 17: The claim recites limitations similar to those of claim 6. The same subject matter eligibility analysis that was applied to claim 6, as described above, is equally applicable to claim 17. Therefore, claim 17 is ineligible.

With respect to claim 18: The claim recites limitations similar to those of claim 7. The same subject matter eligibility analysis that was applied to claim 7, as described above, is equally applicable to claim 18. Therefore, claim 18 is ineligible.

With respect to claim 19: The claim recites limitations similar to those of claim 8. The same subject matter eligibility analysis that was applied to claim 8, as described above, is equally applicable to claim 19. Therefore, claim 19 is ineligible.

With respect to claim 20: The claim recites limitations similar to those of claim 9. The same subject matter eligibility analysis that was applied to claim 9, as described above, is equally applicable to claim 20. Therefore, claim 20 is ineligible.

With respect to claim 21: The claim recites limitations similar to those of claim 10. The same subject matter eligibility analysis that was applied to claim 10, as described above, is equally applicable to claim 21. Therefore, claim 21 is ineligible.

With respect to claim 22: The claim recites limitations similar to those of claim 11. The same subject matter eligibility analysis that was applied to claim 11, as described above, is equally applicable to claim 22. Therefore, claim 22 is ineligible.

Claim Rejections - 35 USC § 102

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:

A person shall be entitled to a patent unless –

(a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.
(a)(2) the claimed invention was described in a patent issued under section 151, or in an application for patent published or deemed published under section 122(b), in which the patent or application, as the case may be, names another inventor and was effectively filed before the effective filing date of the claimed invention.

Claims 1-5, 8-10, 12-16, & 19-21 are rejected under 35 U.S.C. 102(a)(1) as being anticipated by Zhang (US 2021/0224605 A1).

Regarding claim 1, Zhang teaches:

A system for optimizing a machine learning model, the system comprising: ([0006] "An advantage of such a system can be that the machine learning model can be jointly optimized based on multiple fairness metrics.")

a machine learning model that generates predictions based on at least one input feature vector, each input feature vector having one or more vector values; and ([0039] "In one or more embodiments, the model component 108 can evaluate one or more machine learning models at a plurality of threshold settings to generate a sample set and/or define a relationship between one or more fairness metrics and utility metrics of the machine learning model based on the sample set" and [0040]-[0041])

an optimization module with a processor and an associated memory, the optimization module being configured to: (Fig. 5, Optimization component)

create at least one slice of the predictions based on at least one vector value, (Paragraphs [0040]-[0042] describe creating the groups (slices); [0042] "Equation 2 can indicate that the true positive rates can be the same across any two groups represented by the given data, while Equation 3 can indicate that the false positive rates can be the same. Example metrics for measuring deviations from the separation criterion can include, but are not limited to: average odds difference, a combination thereof, and/or the like.")

determine a sensitive bias metric for the slice based on a sensitive group, ([0042] "Example metrics for measuring deviations from the separation criterion can include, but are not limited to: average odds difference, a combination thereof, and/or the like. For instance, average odds difference can average the difference in true positive rates and false positive rates between the privileged data group and the unprivileged data group [similar to sensitive group].")

determine a base metric for the slice based on a base group, ([0042] "Example metrics for measuring deviations from the separation criterion can include, but are not limited to: average odds difference, a combination thereof, and/or the like. For instance, average odds difference can average the difference in true positive rates and false positive rates between the privileged data group [similar to base group] and the unprivileged data group.")

determine a parity metric for the slice based on a ratio of the sensitive bias metric and the base metric, and ([0040] "fairness criteria of a machine learning model can be divided into three categories: independence, separation, and/or sufficiency. The independence criterion can require that all analyzed groups represented by the given data receive equal rate of favorable treatment by the machine learning model. This roughly corresponds to the notion of equity, wherein each data group, regardless on its context, can receive support to have an equal outcome. The separation criterion can require that the false positive rates and the false negative rates are similar across all groups represented by the given data. This roughly corresponds to the notion of equality, which entails that every data group is supported at the same level. Lastly, sufficiency can require that a machine learning classifier assigns a score to each data group that accurately captures the impact of all data attributes, even if that means some groups represented by the given data receive, on average, lower scores. With many classifiers, if the machine learning model is well trained on a large dataset and has high accuracy, the sufficiency criterion is automatically satisfied because they predict probabilities that are well calibrated to a data group's true probability.")

optimize the machine learning model based on the parity bias metric. ([0004] "implement one or more fairness policies to optimize one or more machine learning models are described.")

Regarding claim 2, Zhang teaches: the parity metric is Recall parity. ([0042] "For instance, average odds difference can average the difference in true positive rates and false positive rates between the privileged data group and the unprivileged data group.")

Regarding claim 3, Zhang teaches: the parity metric is False Positive Rate (FPR) parity. ([0040] "The separation criterion can require that the false positive rates and the false negative rates are similar across all groups represented by the given data.")

Regarding claim 4, Zhang teaches: the parity metric is Disparate Impact (DI). ([0041] "Example metrics that can relate to this notion of fairness can include, but are not limited to: statistical parity difference, disparate impact ratio, a combination thereof, and/or the like.")

Regarding claim 5, Zhang teaches: the parity metric is False Negative Rate (FNR) parity. ([0040] "The separation criterion can require that the false positive rates and the false negative rates are similar across all groups represented by the given data.")

Regarding claim 8, Zhang teaches: the parity metric is Accuracy parity. ([0040] "With many classifiers, if the machine learning model is well trained on a large dataset and has high accuracy, the sufficiency criterion is automatically satisfied because they predict probabilities that are well calibrated to a data group's true probability.")

Regarding claim 9, Zhang teaches: the parity metric is Proportional parity. ([0041] "Example metrics that can relate to this notion of fairness can include, but are not limited to: statistical parity difference, disparate impact ratio, a combination thereof, and/or the like"; disparate impact ratio is considered a proportional parity metric.)

Regarding claim 10, Zhang teaches: the parity metric is False Omission Rate (FOR) parity. ([0042] "In various embodiments, example metrics for measuring deviations from the independence criterion and/or separation criterion can include, but are not limited to: statistical parity difference, disparate impact ratio, average odds difference, entropy index, average absolute odds difference, coefficient of variation, theil index, binary confusion matrix, equal opportunity difference, error rate, false negative rate, false omission rate, false positive rate, true negative rate, true positive rate, negative predictive value, positive predictive value, number of false negatives, number of false positives, number of true negatives, number of true positives, performance measures, selection rate, a combination thereof, and/or the like.")

Regarding claim 12, Zhang teaches:

A computer-implemented method for optimizing a machine learning model, the method comprising: ([0006] "An advantage of such a system can be that the machine learning model can be jointly optimized based on multiple fairness metrics.")

obtaining multiple predictions from a machine learning model, the predictions being based on at least one input feature vector, each input feature vector having one or more vector values; ([0039] "In one or more embodiments, the model component 108 can evaluate one or more machine learning models at a plurality of threshold settings to generate a sample set and/or define a relationship between one or more fairness metrics and utility metrics of the machine learning model based on the sample set.")

creating at least one slice of the predictions based on at least one vector value; (Paragraphs [0040]-[0042] describe creating the groups (slices); [0042] "Equation 2 can indicate that the true positive rates can be the same across any two groups represented by the given data, while Equation 3 can indicate that the false positive rates can be the same. Example metrics for measuring deviations from the separation criterion can include, but are not limited to: average odds difference, a combination thereof, and/or the like.")

determining a sensitive bias metric for the slice based on a sensitive group; ([0042] "Example metrics for measuring deviations from the separation criterion can include, but are not limited to: average odds difference, a combination thereof, and/or the like. For instance, average odds difference can average the difference in true positive rates and false positive rates between the privileged data group and the unprivileged data group [similar to sensitive group].")

determining a base metric for the slice based on a base group; ([0042] "Example metrics for measuring deviations from the separation criterion can include, but are not limited to: average odds difference, a combination thereof, and/or the like. For instance, average odds difference can average the difference in true positive rates and false positive rates between the privileged data group [similar to base group] and the unprivileged data group.")

determining a parity metric for the slice based on a ratio of the sensitive bias metric and the base metric; and ([0040] "fairness criteria of a machine learning model can be divided into three categories: independence, separation, and/or sufficiency. The independence criterion can require that all analyzed groups represented by the given data receive equal rate of favorable treatment by the machine learning model. This roughly corresponds to the notion of equity, wherein each data group, regardless on its context, can receive support to have an equal outcome. The separation criterion can require that the false positive rates and the false negative rates are similar across all groups represented by the given data. This roughly corresponds to the notion of equality, which entails that every data group is supported at the same level. Lastly, sufficiency can require that a machine learning classifier assigns a score to each data group that accurately captures the impact of all data attributes, even if that means some groups represented by the given data receive, on average, lower scores. With many classifiers, if the machine learning model is well trained on a large dataset and has high accuracy, the sufficiency criterion is automatically satisfied because they predict probabilities that are well calibrated to a data group's true probability.")

optimizing the machine learning model based on the parity metric. ([0004] "implement one or more fairness policies to optimize one or more machine learning models are described.")

Regarding claim 13, Zhang teaches: the parity metric is Recall parity. ([0042] "For instance, average odds difference can average the difference in true positive rates and false positive rates between the privileged data group and the unprivileged data group.")

Regarding claim 14, Zhang teaches: the parity metric is False Positive Rate (FPR) parity. ([0040] "The separation criterion can require that the false positive rates and the false negative rates are similar across all groups represented by the given data.")

Regarding claim 15, Zhang teaches: the parity metric is Disparate Impact (DI). ([0041] "Example metrics that can relate to this notion of fairness can include, but are not limited to: statistical parity difference, disparate impact ratio, a combination thereof, and/or the like.")

Regarding claim 16, Zhang teaches: the parity metric is False Negative Rate (FNR) parity. ([0040] "The separation criterion can require that the false positive rates and the false negative rates are similar across all groups represented by the given data.")

Regarding claim 19, Zhang teaches: the parity metric is Accuracy parity. ([0040] "With many classifiers, if the machine learning model is well trained on a large dataset and has high accuracy, the sufficiency criterion is automatically satisfied because they predict probabilities that are well calibrated to a data group's true probability.")

Regarding claim 20, Zhang teaches: the parity metric is Proportional parity. ([0041] "Example metrics that can relate to this notion of fairness can include, but are not limited to: statistical parity difference, disparate impact ratio, a combination thereof, and/or the like"; disparate impact ratio is considered a proportional parity metric.)

Regarding claim 21, Zhang teaches: the parity metric is False Omission Rate (FOR) parity. ([0042] "In various embodiments, example metrics for measuring deviations from the independence criterion and/or separation criterion can include, but are not limited to: statistical parity difference, disparate impact ratio, average odds difference, entropy index, average absolute odds difference, coefficient of variation, theil index, binary confusion matrix, equal opportunity difference, error rate, false negative rate, false omission rate, false positive rate, true negative rate, true positive rate, negative predictive value, positive predictive value, number of false negatives, number of false positives, number of true negatives, number of true positives, performance measures, selection rate, a combination thereof, and/or the like.")

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 6-7, 11, 17-18, & 22 are rejected under 35 U.S.C. 103 as being unpatentable over Zhang in view of Lamba (NPL: 'An Empirical Comparison of Bias Reduction method on Real-World Problems in High-Stakes Policy Settings').

Regarding claim 6, Zhang teaches claim 1 as outlined above. Zhang does not teach: the parity metric is False Positive/Group Size (FP/GS) parity. However, Lamba does (Figure 2, False Positives Adjusted to Group Size).
Zhang and Lamba are considered analogous art to the claimed invention because they are in the same field of endeavor, machine learning fairness. It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the overall system and method of Zhang with the metric of Lamba. One would want to do this in order to apply different metrics and better evaluate the fairness of the machine learning model.

Regarding claim 7, Zhang teaches claim 1 as outlined above. Lamba further teaches: the parity metric is False Negative/Group Size (FN/GS) parity. (Figure 2, False Negatives Adjusted to Group Size.)

Regarding claim 11, Zhang teaches claim 1 as outlined above. Lamba further teaches: the parity metric is False Discovery Rate (FDR) parity. (Figure 2, False Discovery Rate.)

Regarding claim 17, Zhang teaches claim 12 as outlined above. Zhang does not teach: the parity metric is False Positive/Group Size (FP/GS) parity. However, Lamba does (Figure 2, False Positives Adjusted to Group Size). Zhang and Lamba are considered analogous art to the claimed invention because they are in the same field of endeavor, machine learning fairness. It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the overall system and method of Zhang with the metric of Lamba. One would want to do this in order to apply different metrics and better evaluate the fairness of the machine learning model.

Regarding claim 18, Zhang teaches claim 13 as outlined above. Lamba further teaches: the parity metric is False Negative/Group Size (FN/GS) parity. (Figure 2, False Negatives Adjusted to Group Size.)

Regarding claim 22, Zhang teaches claim 12 as outlined above. Lamba further teaches: the parity metric is False Discovery Rate (FDR) parity. (Figure 2, False Discovery Rate.)
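The group-size-adjusted metrics the examiner draws from Lamba's Figure 2 have standard confusion-matrix definitions. The sketch below is a hedged illustration using those standard definitions; the function names and sample data are hypothetical and are taken from neither Lamba nor the application:

```python
# Standard confusion-matrix definitions of the metrics cited in the
# §103 rejections: FP/GS, FN/GS, and False Discovery Rate, per group.

def group_metrics(rows, group):
    """rows: iterable of (prediction, label, group) triples."""
    pairs = [(p, y) for p, y, g in rows if g == group]
    fp = sum(1 for p, y in pairs if p == 1 and y == 0)
    fn = sum(1 for p, y in pairs if p == 0 and y == 1)
    tp = sum(1 for p, y in pairs if p == 1 and y == 1)
    size = len(pairs)
    return {
        "FP/GS": fp / size if size else 0.0,          # false positives / group size
        "FN/GS": fn / size if size else 0.0,          # false negatives / group size
        "FDR": fp / (fp + tp) if (fp + tp) else 0.0,  # false discovery rate
    }

def parity(rows, metric, sensitive, base):
    """Ratio of one group's metric to another's, mirroring the claimed parity ratio."""
    base_value = group_metrics(rows, base)[metric]
    sensitive_value = group_metrics(rows, sensitive)[metric]
    return sensitive_value / base_value if base_value else float("inf")

rows = [(1, 0, "A"), (1, 1, "A"), (0, 1, "A"),
        (1, 0, "B"), (1, 1, "B"), (1, 1, "B"), (0, 1, "B")]
print(group_metrics(rows, "A")["FDR"])          # prints 0.5
print(round(parity(rows, "FDR", "A", "B"), 2))  # prints 1.5
```

Substituting any of these per-group quantities into the ratio is what distinguishes the dependent claims from one another, which is why the examiner treats the variants as interchangeable metric choices.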
Conclusion

Any inquiry concerning this communication or earlier communications from the examiner should be directed to DANIEL PATRICK GRUSZKA, whose telephone number is (571) 272-5259. The examiner can normally be reached M-F, 9:00 AM - 6:00 PM ET.

Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Li Zhen, can be reached at (571) 272-3768. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/DANIEL GRUSZKA/ Examiner, Art Unit 2121
/Li B. Zhen/ Supervisory Patent Examiner, Art Unit 2121

Prosecution Timeline

Apr 13, 2023
Application Filed
Nov 26, 2025
Non-Final Rejection — §101, §102, §103
Feb 19, 2026
Interview Requested
Feb 26, 2026
Examiner Interview Summary
Feb 26, 2026
Applicant Interview (Telephonic)


Prosecution Projections

Expected OA Rounds: 1-2
Grant Probability: Favorable
Median Time to Grant: 3y 3m
PTA Risk: Low

Based on 0 resolved cases by this examiner. Grant probability derived from career allow rate.
