Prosecution Insights
Last updated: April 19, 2026
Application No. 18/314,428

DEEP LEARNING ARCHITECTURES FOR REDUCING INCIDENTAL TRUNCATION BIAS, AND SYSTEMS AND METHODS OF USE

Non-Final OA: §101, §103, §112
Filed
May 09, 2023
Examiner
BADAWI, SHERIEF
Art Unit
2169
Tech Center
2100 — Computer Architecture & Software
Assignee
Optum Inc.
OA Round
1 (Non-Final)
Grant Probability: 58% (Moderate)
Expected OA Rounds: 1-2
Time to Grant: 4y 1m
With Interview: 69%

Examiner Intelligence

Career Allow Rate: 58% (114 granted / 197 resolved; +2.9% vs TC avg)
Interview Lift: +10.8% among resolved cases with interview (moderate lift)
Typical Timeline: 4y 1m average prosecution; 14 applications currently pending
Career History: 211 total applications across all art units
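The headline figures above are simple ratios over the examiner's resolved cases; the sketch below reproduces them from the raw counts shown, assuming (as the projections footnote states) that grant probability is taken directly from the career allow rate.

```python
# Reproduce the dashboard's headline figures from the raw counts above.
granted = 114
resolved = 197

allow_rate = granted / resolved        # career allow rate
interview_lift = 0.108                 # lift observed for cases with an interview

print(f"career allow rate: {allow_rate:.1%}")                    # 57.9%, displayed as 58%
print(f"with interview:    {allow_rate + interview_lift:.1%}")   # 68.7%, displayed as 69%
```

Note that 57.9% rounds to the displayed 58%, and 57.9% + 10.8% = 68.7%, which rounds to the displayed 69%, so the dashboard's numbers are internally consistent.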

Statute-Specific Performance

§101: 16.1% (-23.9% vs TC avg)
§103: 54.2% (+14.2% vs TC avg)
§102: 17.4% (-22.6% vs TC avg)
§112: 6.7% (-33.3% vs TC avg)
Tech Center averages are estimates. Based on career data from 197 resolved cases.

Office Action

Rejections: §101, §103, §112
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA. This Office Action has been issued in response to Applicant’s Communication of application S/N 18/144,428 filed on May 9, 2023. Claims 1-20 are currently pending with the application.

Claim Rejections - 35 USC § 112

The following is a quotation of 35 U.S.C. 112(b):

(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.

The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:

The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.

Claims 4, 11 and 18 are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or for applications subject to pre-AIA 35 U.S.C. 112, the applicant), regards as the invention. Claims 4, 11 and 18 recite the limitations “substantially probative” and “not substantially probative” in line 1. The term “substantially” is indefinite and unclear as to the degree intended in the claim.

Claim Rejections - 35 USC § 101

35 U.S.C. 101 reads as follows:

Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.

Claims 1-20 are rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more.
Per Step 1, claim 1 is directed to a system, claim 8 is directed to a method, and claim 15 is directed to a method, which are statutory categories of invention. However, the claims are rejected under 35 U.S.C. 101 because they are directed to an abstract idea, a judicial exception, without reciting additional elements that integrate the judicial exception into a practical application or are significantly more.

Step 2A, Prong One asks: Is the claim directed to a law of nature, a natural phenomenon (product of nature) or an abstract idea? See MPEP 2106.04 Part I. If a claim limitation, under its broadest reasonable interpretation, covers performance of the limitation in the mind but for the recitation of generic computer components, then it falls within the “Mental Processes” grouping of abstract ideas. See MPEP 2106.04(a).

With respect to claims 1, 8, and 15-17, the limitations “a first deep neural network that is configured to predict a first decision for a multiple-sequential-event based outcome”, “a penultimate layer of the first deep neural network is configured to generate as output a prediction value for the first decision;”, “and a second deep neural network that is configured to predict a second decision subsequent to the first decision for the multiple-sequential-event based outcome” and “wherein the second deep neural network is configured to generate as output a prediction value for the second decision.”, as drafted, recite a process that, under its broadest reasonable interpretation, covers performance of the limitation in the mind but for the recitation of generic computer components. That is, nothing in the claim element precludes the step from practically being performed in the mind. For example, “predicting” and “receiving” in the context of this claim encompass the user mentally analyzing data.
Similarly, the limitation “generate as an output a prediction”, as drafted, is a process that, under its broadest reasonable interpretation, covers performance of the limitation in the mind but for the recitation of generic computer components. If a claim limitation, under its broadest reasonable interpretation, covers performance of the limitation in the mind but for the recitation of generic computer components, then it falls within the “Mental Processes” grouping of abstract ideas. Accordingly, the claim recites an abstract idea.

At Step 2A, Prong Two, this judicial exception is not integrated into a practical application. Claims 1, 8 and 15-17 recite “one or more storage devices storing computer-readable instructions; one or more processors configured to execute the computer-readable instructions to implement”; however, this is recited at a high level of generality (i.e., as a generic processor performing a generic computer function) such that it amounts to no more than mere instructions to apply the exception using a generic computer component.
Additionally, the claim recites “a first input layer configured to receive as input less than an entirety of a plurality of variables for the multiple-sequential-event based outcome; and an internal layer configured to receive as input a remainder of the plurality of variables appended to an output of a preceding layer, such that a penultimate layer of the first deep neural network is configured to generate as output a prediction value for the first decision, a second input layer configured to receive as input a representation of output from the penultimate layer of the first deep neural network appended to at least a portion of the plurality of variables”. These elements do not integrate the abstract idea into a practical application because they do not impose a meaningful limit on the judicial exception and provide only insignificant extra-solution activity that is mere data gathering in conjunction with the abstract idea.

The claims do not include additional elements that are sufficient to amount to significantly more than the judicial exception. As discussed above with respect to integration of the abstract idea into a practical application, the additional elements amount to no more than mere instructions to apply an exception using generic computer components. Mere instructions to apply an exception using generic computer components cannot provide an inventive concept.
With respect to “a first input layer configured to receive as input less than an entirety of a plurality of variables for the multiple-sequential-event based outcome; and an internal layer configured to receive as input a remainder of the plurality of variables appended to an output of a preceding layer, such that a penultimate layer of the first deep neural network is configured to generate as output a prediction value for the first decision, a second input layer configured to receive as input a representation of output from the penultimate layer of the first deep neural network appended to at least a portion of the plurality of variables,” and “sending a digital invitation to the user”, the courts have found limitations directed towards data gathering to be well-understood, routine, and conventional. See MPEP 2106.05(d)(II): receiving or transmitting data over a network, e.g., using the Internet to gather data, Symantec, 838 F.3d at 1321, 120 USPQ2d at 1362 (utilizing an intermediary computer to forward information). Considering the additional elements individually and in combination and the claim as a whole, the additional elements do not provide significantly more than the abstract idea. The claim is not patent eligible.

With respect to claims 2, 9 and 19, the limitations are directed towards “the first and second deep neural networks are convolutional neural networks; and each of the input to the first layer and the input to the second layer is represented as an input image”. These additional elements merely confine the claim to a particular technological environment or field of use for data gathering in conjunction with the abstract idea. Therefore, claims 2, 9 and 19 do not recite additional limitations which tie the abstract idea into a practical application, and do not amount to significantly more than the identified judicial exception.
With respect to claims 3 and 10, the limitations are directed towards “wherein the representation of the output from the penultimate layer of the first deep neural network that is appended to the input of the second input layer acts as a selection variable and is in the form of a tensor appended to each layer of the input image for the second input layer.” These additional elements merely confine the claim to a particular technological environment or field of use for data gathering in conjunction with the abstract idea. Therefore, claims 3 and 10 do not recite additional limitations which tie the abstract idea into a practical application, and do not amount to significantly more than the identified judicial exception.

With respect to claims 4, 11 and 18, the limitations are directed towards “wherein: the remainder of the plurality of variables includes one or more variables that are substantially probative of the first decision and that are not substantially probative of the second decision; and the remainder of the plurality of variables act as exclusion restrictions so as to inhibit multi-collinearity in the first and second deep neural networks.” These additional elements merely confine the claim to a particular technological environment or field of use for data gathering in conjunction with the abstract idea. Therefore, claims 4, 11 and 18 do not recite additional limitations which tie the abstract idea into a practical application, and do not amount to significantly more than the identified judicial exception.

With respect to claims 5 and 12, the claims disclose “wherein the multiple-sequential-event based outcome models a situation in which occurrence of the second decision depends upon a result of the first decision.” The courts have found limitations directed towards data gathering to be well-understood, routine, and conventional. See MPEP 2106.05(d)(II).
Receiving or transmitting data over a network, e.g., using the Internet to gather data, Symantec, 838 F.3d at 1321, 120 USPQ2d at 1362 (utilizing an intermediary computer to forward information).

With respect to claims 6, 13 and 20, the limitations further define “wherein the first decision relates to whether a medical claim appeal will be filed, and the second decision relates to whether an outcome of the medical claim appeal will be a denial reversal.” This describes further mental processes because, under its broadest reasonable interpretation, the limitation covers performance in the mind but for the recitation of generic computer components. That is, nothing in the claim element precludes the step from practically being performed in the mind. For example, “algorithmically matching” in the context of this claim encompasses the user mentally determining if data matches.

With respect to claims 7 and 14, the limitations further define “wherein the first and second neural networks have been trained based on a common dataset of multiple sets of variables”. This describes further mental processes because, under its broadest reasonable interpretation, the limitation covers performance in the mind but for the recitation of generic computer components. That is, nothing in the claim element precludes the step from practically being performed in the mind. For example, “trained based on a common dataset” in the context of this claim encompasses the user mentally determining if data matches.

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1-5, 7-12 and 14 are rejected under 35 U.S.C. 103 as being unpatentable over Ganju et al. (US 2024/0129380), published on April 18, 2024, in view of Paul et al. (US 2022/0335271), published October 20, 2022, and further in view of Breneisen et al. (US 11,129,577), patented on September 28, 2021.

As per Claims 1 and 8: A system for predicting multiple-sequential-event based outcomes, comprising: one or more storage devices storing computer-readable instructions; one or more processors configured to execute the computer-readable instructions to implement: a first deep neural network that is configured to predict a first decision for a multiple-sequential-event based outcome, and that includes: (See para.30, para.35 and para.41; the ML model 104 is described as a reinforcement learning model (including DQN) making sequential decisions to optimize conditions; as taught by Ganju) and an internal layer configured to receive as input a remainder of the plurality of variables appended to an output of a preceding layer, (See para.41, which teaches that layers take previous layer outputs as inputs; as taught by Ganju) such that a penultimate layer of the first deep neural network is configured to generate as output a prediction value for the first decision; (See para.41; a DNN structure producing decision output from upper layers is disclosed; as taught by Ganju) and a second deep neural network that is configured to predict a second decision subsequent to the first decision for the multiple-sequential-event based outcome, (See para.42; the second ML model outputs target Q-values, which is analogous to a second decision) wherein the second deep neural network is configured to generate as output a prediction value for the second decision. (See para.42; a first machine learning model 104 is used to determine a prediction and a second machine learning model 104 is used to determine a target; as taught by Ganju)

Ganju fails to teach a first input layer configured to receive as input less than an entirety of a plurality of variables for the multiple-sequential-event based outcome; and an internal layer configured to receive as input a remainder of the plurality of variables; and that includes: a second input layer configured to receive as input a representation of output from the penultimate layer of the first deep neural network appended to at least a portion of the plurality of variables. On the other hand, Paul teaches a first input layer configured to receive as input less than an entirety of a plurality of variables for the multiple-sequential-event based outcome; and an internal layer configured to receive as input a remainder of the plurality of variables. (See para.76 and 78; if “first input layer” refers to the aggregate first layer of the network (comprised of sub-LSTMs), that aggregate layer still receives all variables, just partitioned across sub-branches.
If “first input layer” is read at the level of each sub-LSTM branch, then each branch’s input layer receives less than the entirety; as taught by Paul.) It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of the cited references and modify the invention as taught by Ganju by including the teachings of Paul relating to the splitting of variables in a deep learning module; the mapping improves the prediction by reducing bias. (As taught by Paul, para.4)

The combination of Ganju and Paul fails to teach “and that includes: a second input layer configured to receive as input a representation of output from the penultimate layer of the first deep neural network appended to at least a portion of the plurality of variables”. On the other hand, Breneisen teaches a second input layer configured to receive as input a representation of output from the penultimate layer of the first deep neural network appended to at least a portion of the plurality of variables. (See col. 13, lines 13-25 and col. 14, lines 11-28; the second DNN (Critic) receives the output (decision/classification) of the first DNN; as taught by Breneisen) It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of the cited references and modify the invention as taught by Ganju and Paul by including the teachings of Breneisen relating to the feeding of output from the first layer into a second layer; the mapping improves sensitivity and specificity through having multiple layers that are fed back the error metric. (As taught by Breneisen, col. 14, lines 40-45)

As per Claims 2 and 9:
The system of claim 1. The combination of Ganju, Paul and Breneisen teaches wherein: the first and second deep neural networks are convolutional neural networks; and each of the input to the first layer and the input to the second layer is represented as an input image. (See para.41, wherein a CNN is disclosed; see para.130, wherein the input is an image; as taught by Ganju)

As per Claims 3 and 10: The system of claim 2. The combination of Ganju, Paul and Breneisen teaches wherein the representation of the output from the penultimate layer of the first deep neural network that is appended to the input of the second input layer acts as a selection variable and is in the form of a tensor appended to each layer of the input image for the second input layer. (See para.158, describing the usage of Nvidia’s TensorRT, which processes multidimensional tensor data; see para.79, teaching TensorFlow; as taught by Ganju)

As per Claims 4 and 11: The system of claim 1. The combination of Ganju, Paul and Breneisen teaches wherein: the remainder of the plurality of variables includes one or more variables that are substantially probative of the first decision and that are not substantially probative of the second decision; and the remainder of the plurality of variables act as exclusion restrictions so as to inhibit multi-collinearity in the first and second deep neural networks. (See para.76 and para.78, wherein variables are divided and used across different layers in a manner that is beneficial and in a manner that removes correlation, such as past, current and future; see fig.11, wherein each layer includes n x the variable type; as taught by Paul)

As per Claims 5 and 12: The system of claim 1. The combination of Ganju, Paul and Breneisen teaches wherein the multiple-sequential-event based outcome models a situation in which occurrence of the second decision depends upon a result of the first decision. (See para.30, para.35 and para.41; the ML model 104 is described as a reinforcement learning model (including DQN) making sequential decisions to optimize conditions; as taught by Ganju)

As per Claims 7 and 14: The system of claim 1. The combination of Ganju, Paul and Breneisen teaches wherein the first and second neural networks have been trained based on a common dataset of multiple sets of variables. (See para.76 and para.78, wherein variables are divided and used across different layers in a manner that is beneficial and in a manner that removes correlation, such as past, current and future; see fig.11, wherein each layer includes n x the variable type; as taught by Paul)

Claims 6 and 13 are rejected under 35 U.S.C. 103 as being unpatentable over Ganju et al. (US 2024/0129380), published on April 18, 2024, in view of Paul et al. (US 2022/0335271), published October 20, 2022, in view of Breneisen et al. (US 11,129,577), patented on September 28, 2021, and further in view of Singh et al. (US 11,538,112), patented on Dec 27, 2022.

As per Claims 6 and 13: The system of claim 1. The combination of Ganju, Paul and Breneisen fails to teach wherein the first decision relates to whether a medical claim appeal will be filed, and the second decision relates to whether an outcome of the medical claim appeal will be a denial reversal. On the other hand, Singh discloses wherein the first decision relates to whether a medical claim appeal will be filed, and the second decision relates to whether an outcome of the medical claim appeal will be a denial reversal. (See col. 2, lines 25-35; in the event a claim is denied by a payer, it would be advantageous for the provider to prioritize a plurality of denied insurance claims for appeal to one or more payers based on the predicted outcome of the appeal and/or the predicted value of the appeal.
It would be advantageous for providers to obtain real-time, filterable analytics (in some implementations, automated notifications) based on data sources including providers outside of their own medical group, hospital, or health system; as taught by Singh.) It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of the cited references and modify the invention as taught by Ganju, Paul and Breneisen by including the teachings of Singh relating to the prediction of an outcome of an appeal to help settle or auto-settle claims. (As taught by Singh, col. 2, lines 35-40)

Claims 15 and 19 are rejected under 35 U.S.C. 103 as being unpatentable over Ganju et al. (US 2024/0129380), published on April 18, 2024, in view of Breneisen et al. (US 11,129,577), patented on September 28, 2021.

As per Claim 15: A computer-implemented method of training a series of deep neural networks for predicting multiple-sequential-event based outcomes, comprising: a first subset of the plurality of training sets of variables including respective data describing only a first decision outcome, (See para.33, para.44 and para.56, describing different training targets and using different RL techniques for different conditions; as taught by Ganju) and a second subset of the plurality of training sets of variables including respective data describing the first decision outcome and a second decision outcome; (See para.42 and para.45; multi-stage decisioning and multiple outputs in RL episodes; also discusses sampling replay buffers and multi-objective training; as taught by Ganju) training, by the one or more processors and using the plurality of training sets, a first neural network to generate predictions of the first decision; (See para.33, para.35 and para.50, describing training ML/DQN models to predict locations/decisions (first decision) using collected thermal/energy training data and reinforcement updates; as taught by Ganju)

Ganju fails to teach obtaining, by one or more processors, a plurality of training sets of variables, each training set including a respective plurality of variables and respective data describing decision outcomes for at least one decision, and training, by the one or more processors and using the second subset of the plurality of training sets, a second neural network to generate predictions of the second decision. On the other hand, Breneisen discloses obtaining, by one or more processors, a plurality of training sets of variables, each training set including a respective plurality of variables and respective data describing decision outcomes for at least one decision (See col.12, lines 56-67 and col.13, lines 1-34; SD computation, sampling EA(f) into M samples and archiving samples in Archive 60; training data and corresponding BI-RADS labels are described for neural network training; as taught by Breneisen) and training, by the one or more processors and using the second subset of the plurality of training sets, a second neural network to generate predictions of the second decision.
(See col.13, lines 15-50 The Critic Network (Cycle 2) is trained using biopsy labels, i.e., a later network trained on a different/stronger label set to refine predictions — directly analogous to training a second NN using a subset containing second-decision outcomes; as taught by Breneisen) It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention, to combine the teachings of the cited references and modify the invention as taught by Ganju, by including the teachings of Breneisen relating to the feeding of output from the first layer into a second layer, the mapping improves sensitivity and specificity through having multiple layers that are fed back the error metric (As taught by Breneisen col.14 lines 40-45) As per Claim 19, the combination for Ganju, Paul and Breneisen teach The computer-implemented method of claim 15, wherein the first and second deep neural networks are convolutional neural networks; (See par.41, wherein CNN is disclosed, as taught by Ganju) Claims 16, 17, and 18 are rejected under 35 U.S.C. 103 as being unpatentable over Ganju et al. (US 2024/0129380) Published on April 18, 2024 in view Breneisen et al. (11,129,577) Patented on September 28, 2021 and further in view of Paul et al. (US 2022/0335271) October 20, 2022 As per Claim 16. The computer-implemented method of claim 15, wherein training the first neural network using the plurality of training sets includes, for each training set: and adjusting the first neural network based on a comparison of an output of the first neural network with the respective data describing decision outcomes of the first decision; ( See para.40-42, para.45 and 35-39, RL/Q-learning and DQN weight/update equations, reward feedback and re-training steps (operations 325–330). 
Rationale: direct teaching of adjusting model based on reward/label comparison; as taught by Ganju) Ganju fails to teach identifying less than an entirety of the respective plurality of variables as first input; applying the first input to a first input layer of the first neural network; providing a remainder of the respective plurality of variables into a penultimate layer of the first neural network. On the other hand Paul teaches identifying less than an entirety of the respective plurality of variables as first input; ( See para.76-79 , explicitly teaches dividing input variables into data groups (past/current/future) and mapping each group to sub-LSTM sub-models. Rationale: this most directly supports “each branch receives less than the entirety” (branch-level interpretation); as taught Paul) applying the first input to a first input layer of the first neural network; (See para.76-78, sub-LSTM branches take their group inputs at their input nodes (sub-models per data group). Rationale: explicit branch input application; as taught by Paul) providing a remainder of the respective plurality of variables into a penultimate layer of the first neural network; (See fig.11 and para.76-79, explicitly teaches dividing input variables into data groups (past/current/future) and mapping each group to sub-LSTM sub-models. 
Rationale: this most directly supports “each branch receives less than the entirety” (branch-level interpretation); as taught Paul) It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention, to combine the teachings of the cited references and modify the invention as taught by Ganju and Breneisen; by including the teachings of Paul relating to the splitting of variables in a Deep Learning module, the mapping improves the prediction by reducing bias (As taught by Paul para.4) As per Claim17, the combination Ganju, Breneisen and Paul teaches The computer-implemented method of claim 16, wherein training the second neural network to generate predictions of the second decision includes, for each training set in the second subset of the plurality of training sets: generating second input by appending output from the penultimate layer of the first neural network to the second subset of the plurality of training sets; ( See para.41-45 in Ganju, two networks in the DQN context (online and target networks) and the use of one network’s outputs as targets for training. Citation describes target Q-value estimation and training stability rather than forming a new input vector by concatenating a penultimate activation from a first network with raw variables for a separate second network, also see col.13, lines 15-60 in Breneisen describes a Feed-Forward network and a Critic network; the Critic “takes the output of the Feed Forward Neural Network… and ‘fine tunes’ it” and is trained in Cycle 2 using biopsy labels) applying the second input to the second neural network; (See co.13, lines 15-70, explicitly describes a second network (Critic Network 605) that receives input derived from the first network and is trained (Cycle 2). 
This matches the notion of applying an input to a second NN, as taught by Breneisen; and adjusting the second neural network based on a comparison of an output of the second neural network with the respective data describing decision outcomes of the second decision (see para. 35-42 and para. 45, which give explicit RL/DQN update equations, target networks, and reward feedback; training (adjusting) a second network based on target/reward comparisons is taught; as taught by Ganju).

As per Claim 18, the combination of Ganju, Breneisen, and Paul teaches the computer-implemented method of claim 17, wherein the remainder of the plurality of variables includes one or more variables that are substantially probative of the first decision and that are not substantially probative of the second decision (see para. 76 and para. 78, wherein variables are divided and used across different layers in a manner that is beneficial and that removes correlation, such as past, current, and future; see fig. 11, wherein each layer includes n x the variable type; as taught by Paul).

Claim 20 is rejected under 35 U.S.C. 103 as being unpatentable over Ganju et al. (US 2024/0129380), published April 18, 2024, in view of Breneisen et al. (US 11,129,577), patented September 28, 2021, and further in view of Singh et al. (US 11,538,112), patented December 27, 2022.

As per Claim 20, the combination of Ganju and Breneisen fails to teach wherein the first decision relates to whether a medical claim appeal will be filed, and the second decision relates to whether an outcome of the medical claim appeal will be a denial reversal. On the other hand, Singh discloses wherein the first decision relates to whether a medical claim appeal will be filed, and the second decision relates to whether an outcome of the medical claim appeal will be a denial reversal. (See col. 2, lines 25-35: "In the event a claim is denied by a payer, it would be advantageous for the provider to prioritize a plurality of denied insurance claims for appeal to one or more payers based on the predicted outcome of the appeal and/or the predicted value of the appeal. It would be advantageous for providers to obtain real-time, filterable analytics (in some implementations, automated notifications) based on data sources including providers outside of their own medical group, hospital, or health system."; as taught by Singh.)

It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of the cited references and modify the invention as taught by Ganju and Breneisen by including the teachings of Singh relating to the prediction of an outcome of an appeal, to help settle or auto-settle claims (as taught by Singh, col. 2, lines 35-40).

Conclusion

Any inquiry concerning this communication or earlier communications from the examiner should be directed to SHERIEF BADAWI, whose telephone number is (571) 272-9782. The examiner can normally be reached Monday - Friday, 8:00am - 5:30pm, alternate Fridays, EST.

Examiner interviews are available via telephone, in person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Cordelia Zecher, can be reached at 571-272-7771. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/SHERIEF BADAWI/
Supervisory Patent Examiner, Art Unit 2169
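The claim mapping above turns on a DQN-style training loop: a second ("target") network whose output is compared against reward feedback in order to adjust the first (online) network. As a rough illustration of that general mechanism only — not the cited references' actual implementation, and with a toy linear Q-function and hypothetical names standing in for a neural network — a minimal sketch:

```python
# Toy stand-in for a neural network: linear Q(s, a) = w[a] * s.
def q_value(weights, state, action):
    return weights[action] * state

def dqn_update(online_w, target_w, transition, alpha=0.1, gamma=0.9):
    """One DQN-style step: compare the online network's output against a
    target built from the *second* (target) network plus reward feedback,
    then adjust the online network toward that target."""
    state, action, reward, next_state = transition
    # TD target comes from the target network (the "second" network).
    td_target = reward + gamma * max(
        q_value(target_w, next_state, a) for a in range(len(target_w))
    )
    # The "comparison" that drives the adjustment.
    td_error = td_target - q_value(online_w, state, action)
    online_w[action] += alpha * td_error * state
    return online_w, td_error
```

In a full DQN the target weights are periodically copied from the online weights; here both are plain lists so the comparison-then-adjust structure stays visible.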

Prosecution Timeline

May 09, 2023
Application Filed
Dec 31, 2025
Non-Final Rejection — §101, §103, §112
Mar 18, 2026
Applicant Interview (Telephonic)
Mar 26, 2026
Response Filed
Apr 03, 2026
Examiner Interview Summary

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12585404
STORAGE DEVICE PERFORMING COPY OPERATION IN BACKGROUND AND OPERATING METHOD THEREOF
2y 5m to grant Granted Mar 24, 2026
Patent 12566801
METHOD FOR FAST AND BETTER TREE SEARCH FOR REINFORCEMENT LEARNING
2y 5m to grant Granted Mar 03, 2026
Patent 12536152
SYSTEMS AND METHODS FOR ESTABLISHING AND ENFORCING RELATIONSHIPS BETWEEN ITEMS
2y 5m to grant Granted Jan 27, 2026
Patent 12399871
AUTOMATED PROGRAM GENERATOR FOR DATABASE OPERATIONS
2y 5m to grant Granted Aug 26, 2025
Patent 11080309
VALIDATING CLUSTER RESULTS
2y 5m to grant Granted Aug 03, 2021
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

1-2
Expected OA Rounds
58%
Grant Probability
69%
With Interview (+10.8%)
4y 1m
Median Time to Grant
Low
PTA Risk
Based on 197 resolved cases by this examiner. Grant probability derived from career allow rate.
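The headline figures are internally consistent under a simple additive model — career allow rate (114 granted / 197 resolved) plus the stated +10.8-point interview lift. The page does not disclose its actual methodology, so the following is only a plausibility check of the arithmetic, not the tool's real formula:

```python
# Reproduce the displayed projections from the raw counts shown on the page.
granted, resolved = 114, 197
allow_rate = 100 * granted / resolved        # career allow rate, in percent
interview_lift = 10.8                        # stated lift, percentage points

base = round(allow_rate)                     # displayed "Grant Probability"
with_interview = round(allow_rate + interview_lift)  # displayed "With Interview"
```

Both rounded values match the page (58% and 69%), which suggests the interview figure is simply the allow rate plus the lift.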
