Prosecution Insights
Last updated: April 19, 2026
Application No. 19/020,616

MACHINE LEARNING-BASED PREDICTIVE ANALYTICS FOR REFERRAL DIAGNOSES

Non-Final OA — §101, §103
Filed
Jan 14, 2025
Examiner
LE, LINH GIANG
Art Unit
3686
Tech Center
3600 — Transportation & Electronic Commerce
Assignee
Matrixcare Inc.
OA Round
1 (Non-Final)
66%
Grant Probability
Favorable
1-2
OA Rounds
3y 6m
To Grant
61%
With Interview

Examiner Intelligence

Grants 66% — above average
66%
Career Allow Rate
444 granted / 675 resolved
+13.8% vs TC avg
Minimal −5.2% lift
−5.2%
Interview Lift
resolved cases with interview
Typical timeline
3y 6m
Avg Prosecution
19 currently pending
Career history
694
Total Applications
across all art units

Statute-Specific Performance

§101
33.5%
-6.5% vs TC avg
§103
30.3%
-9.7% vs TC avg
§102
12.6%
-27.4% vs TC avg
§112
13.6%
-26.4% vs TC avg
Black line = Tech Center average estimate • Based on career data from 675 resolved cases
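The per-statute deltas above can be checked against the Tech Center baseline: the implied TC average is simply the examiner's rate minus the reported delta. A minimal sketch, using the values transcribed from the panel above (the dict keys and variable names are illustrative, not from the page):

```python
# Examiner's per-statute allowance rate (%) and delta vs. Tech Center average,
# transcribed from the Statute-Specific Performance panel.
rates = {
    "101": (33.5, -6.5),
    "103": (30.3, -9.7),
    "102": (12.6, -27.4),
    "112": (13.6, -26.4),
}

for statute, (examiner, delta) in rates.items():
    tc_avg = round(examiner - delta, 1)  # implied Tech Center baseline
    print(f"Sec. {statute}: examiner {examiner}% vs TC avg {tc_avg}%")
```

Notably, all four deltas resolve to the same implied baseline of 40.0%, consistent with the single "black line" Tech Center average estimate in the chart caption.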

Office Action

§101 §103
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Notice to Applicant

This communication is in response to the application filed 1/14/2025. It is noted that the application claims benefit to Provisional Application No. 63/621,702, filed 1/17/2024. Claims 1-20 are pending.

Claim Rejections - 35 USC § 101

35 U.S.C. 101 reads as follows:

Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.

Claims 1-20 are rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more.

Claims 1-8 and 9-15 are drawn to a method for using machine learning to guide health care service referral and treatment, which is within the four statutory categories (i.e. process). Claims 16-20 are drawn to a computer program product for using machine learning to guide health care service referral and treatment, which is within the four statutory categories (i.e. article of manufacture).

Representative independent claim 1 includes limitations that recite at least one abstract idea. Specifically, independent claim 1 recites:

A method, comprising: accessing a first patient referral of a first patient to a first healthcare service; determining, based on the first patient referral, a first referral condition of the first patient; generating, using a first machine learning model, a first prediction indicating one or more referral outcomes based on the first referral condition; and facilitating acceptance of the first patient referral to the first healthcare service based on the first prediction.
These recited underlined limitations fall within the "Certain Methods of Organizing Human Activities" grouping of abstract ideas as they relate to managing personal behavior or relationships or interactions between people (including social activities, teaching, and following rules or instructions) (see MPEP § 2106.04(a)(2), subsection II). The limitations of accessing, determining, generating a prediction, facilitating acceptance/declination, indicating a second service, and outputting modifications, as drafted and detailed above, are steps that, under their broadest reasonable interpretation, recite steps for organizing human interactions.

The claimed invention is a method that allows for triaging referrals, routing patients to services, acceptance/declination decisions, and recommending "modifications to the healthcare service." These limitations are directed to administrative workflow management in a healthcare setting. This is a method of resource/workflow management and coordination of care operations, thus falling into one category of abstract idea (managing personal behavior or relationships or interactions between people). That is, other than reciting "machine learning" language, nothing in the claim element precludes the steps from practically being performed between people or by a person. If a claim limitation, under its broadest reasonable interpretation, covers interactions between people or managing personal behavior or relationships, then it falls within the "Certain Methods of Organizing Human Activity" grouping of abstract ideas. Accordingly, the claim recites an abstract idea.
In the present case, the additional limitations beyond the above-noted at least one abstract idea are as follows (where the bolded portions are the "additional limitations" while the underlined portions continue to represent the at least one "abstract idea"):

A method, comprising: accessing a first patient referral of a first patient to a first healthcare service; determining, based on the first patient referral, a first referral condition of the first patient; generating, using a first machine learning model, a first prediction indicating one or more referral outcomes based on the first referral condition; and facilitating acceptance of the first patient referral to the first healthcare service based on the first prediction.

For the following reasons, the Examiner submits that the above identified additional limitations do not integrate the above-noted at least one abstract idea into a practical application. The additional elements (i.e. the limitations not identified as part of the abstract idea) amount to no more than limitations which amount to mere instructions to apply an exception, see MPEP 2106.05(f). The recitation of using machine learning to indicate referral outcomes recites only the idea of a solution or outcome (i.e. the claim fails to recite details of how a solution to a problem is accomplished). In order to transform a judicial exception into a patent-eligible application, the additional element or combination of elements must do "'more than simply stat[e] the [judicial exception] while adding the words "apply it"'". The Examiner submits that these limitations amount to merely using software to tailor information and provide it to the user on a generic computer. Claim 1 only recites the training of the model; however, it recites the training in a generic manner.
Applicant does not provide adequate evidence or technical reasoning on how the process improves the efficiency of the computer and is beyond conventional use of components, as opposed to the efficiency of the process or of any other technological aspect of the computer. Thus, taken alone, the additional elements do not integrate the at least one abstract idea into a practical application.

Independent claim 1 does not include additional elements that are sufficient to amount to "significantly more" than the judicial exception. As discussed above with respect to integration of the abstract idea into a practical application, the additional elements amount to no more than mere instructions to apply an exception and generally linking the abstract idea to a particular technological environment or field of use, and the same analysis applies with regard to whether they amount to "significantly more." Therefore, the additional elements do not add significantly more to the at least one abstract idea.

As per claims 9 and 16, the claims teach limitations similar to claim 1 and the same abstract idea ("certain methods of organizing human activity") for the same reasons as stated above. Claim 16 further teaches computer readable media comprising computer-executable instructions that, when executed by one or more processors, perform the functionality taught by claim 1. These limitations of a processor and computer readable media, as generally recited, amount to mere instructions to apply an exception, see MPEP 2106.05(f), and generally link the abstract idea to a particular technological environment or field of use, see MPEP 2106.05(h). Independent claims 9 and 16 are directed to an abstract idea. Furthermore, for similar reasons as representative independent claim 1, analogous independent claims 9 and 16 do not recite additional elements that integrate the judicial exception into a practical application nor add significantly more.
The following dependent claims further define the abstract idea or are also directed to an abstract idea itself:

Dependent claims 4, 12, 13 and 20 further define the at least one abstract idea (and thus fail to make the abstract idea any less abstract).

In relation to claims 2, 6-8, 17, and 19, these claims specify processing a referral condition to indicate a probability, which is a mental process, as it is an evaluation that can, at the currently claimed high level of generality, be practically performed in the human mind.

In relation to claims 3, 5, 10, 11, 14-15, and 18, these claims specify accessing demographics/generating a prediction; facilitating declination of a referral; and updating parameters, which are certain methods of organizing human activity that, under their broadest reasonable interpretation, cover interactions between people or managing personal behavior or relationships.

The remaining dependent claim limitations not addressed above fail to integrate the abstract idea into a practical application. The dependent claims further do not include additional elements (considered both individually and as an ordered combination) that are sufficient to amount to significantly more than the judicial exception, for the same reasons as those discussed above with respect to determining that the dependent claims do not integrate the at least one abstract idea into a practical application. Therefore, claims 1-20 are ineligible under 35 USC §101.

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1-20 are rejected under 35 U.S.C. 103 as being unpatentable over Cha (2020/0005900) in view of Roots (2019/0043606).

As per claim 1, Cha teaches a method, comprising:

accessing a first patient referral of a first patient to a first healthcare service (Cha; paras. [0046] the system may run scheduled queries or processes to pull raw patient data from the data sources; [0161] data sources 450 may comprise any system that stores electronic patient data); patient data in the stored electronic patient data reads on "a first patient referral";

determining, based on the first patient referral, a first referral condition of the first patient (Cha; para. [0025] determine predictive features for a given patient outcome from electronic patient data and/or to determine the importance of such features); and

generating, using a first machine learning model, a first prediction indicating one or more referral outcomes based on the first referral condition (Cha; para. [0025] employ machine learning models to determine predictive features for a given patient outcome from electronic patient data and/or to determine the importance of such features).

Cha does not expressly teach facilitating acceptance of the first patient referral to the first healthcare service based on the first prediction. Cha teaches executing patient workflows based on determined risk scores. Roots paras. [0027], [0052] teach scoring and classifying and a recommender system that presents optimal provider matches (i.e. routing and recommending a service or provider to the patient reads on "facilitating acceptance of the first patient referral…"). Both Cha and Roots use conventional electronic healthcare data and conventional machine learning predictive engines. It would have been obvious to one of ordinary skill in the art to integrate Roots' matching workflow/facilitating acceptance into Cha's prediction output system because, in the combination, each element merely would have performed the same function as it did separately, and one of ordinary skill in the art would have recognized that the results of the combination were predictable.

As per claim 2, Cha teaches the method of claim 1, wherein: generating the first prediction comprises processing the first referral condition using the first machine learning model, and the first prediction indicates, for each respective referral outcome of the one or more referral outcomes, a respective probability that the first patient will have the respective referral outcome if the first patient referral is accepted by the first healthcare service (Cha; para. [0104] a composite score or any numerical value that indicates the likelihood that a patient associated with the category will experience a particular outcome).

As per claim 3, Cha teaches the method of claim 2, further comprising accessing a set of patient demographics for the first patient, wherein generating the first prediction is based further on processing the set of patient demographics using the first machine learning model (Cha; para. [0231] A "Baseline Model" was created to include features relating to age, sex, race, baseline eGFR strata…).
As per claim 4, Cha teaches the method of claim 1, wherein the one or more referral outcomes comprise at least one of: (i) a prediction of whether the first patient will recover from the first referral condition, (ii) a recovery timeline predicting a length of time until the first patient recovers from the first referral condition, (iii) a prediction of whether the first patient will be hospitalized while being treated by the first healthcare service, or (iv) a prediction of whether the first patient will become septic while being treated by the first healthcare service (Cha; para. [0009] determine predictive features for a particular outcome, and/or determine the likelihood that particular patients will experience the outcome within one or more timeframes). Although Cha does not expressly teach becoming septic as the adverse outcome, this is an obvious variant of Cha's claimed method. Sepsis is a known adverse outcome target in health care. It would have been obvious to one of ordinary skill in the art to modify the Cha teachings, as the results of the combination were predictable.

As per the teachings of claim 5 -- The method of claim 1, further comprising: accessing a second patient referral to the first healthcare service; determining, based on the second patient referral, a second referral condition of a second patient corresponding to the second patient referral; generating, using the first machine learning model, a second prediction indicating one or more referral outcomes based on the second referral condition; and facilitating declination of the second patient referral to the first healthcare service based on the second prediction.

Claim 5 teaches substantially similar limitations as claim 1, and the reasons for rejection are incorporated herein. Cha further teaches generating risk predictions for a patient using a machine learning model based on electronic patient data and using the resulting prediction outputs to execute patient workflows (Cha; Abstract).
Furthermore, Cha para. [0036] teaches a number of data sources (reads on "second patient referral"). It would have been obvious to apply Cha's risk prediction framework to a second patient referral (i.e., electronic patient data for another patient) because Cha's techniques are applicable to repeated processing of multiple patient records. It would have been obvious to "facilitate declination" of the second patient referral to the first healthcare service based on the prediction because Roots teaches evaluating suitability and recommending/matching patients to appropriate providers/services. It would have been obvious to use Cha's predicted outcomes/risks as the basis for such a declination decision because, in the combination, each element merely would have performed the same function as it did separately, and one of ordinary skill in the art would have recognized that the results of the combination were predictable.

As per the teachings of claim 6 -- The method of claim 5, further comprising: generating, using a second machine learning model, a third prediction indicating one or more referral outcomes based on the second referral condition, wherein the second machine learning model was trained for a second healthcare service; and indicating the second healthcare service in response to determining that the third prediction satisfies one or more criteria, as compared to the second prediction.

Claim 6 teaches substantially similar limitations as claim 1, and the reasons for rejection are incorporated herein. Cha further teaches generating risk predictions for a patient using a machine learning model based on electronic patient data and using the resulting prediction outputs to execute patient workflows (Cha; Abstract). Furthermore, Cha para. [0036] teaches a number of data sources (reads on "second patient referral").
It would have been obvious to generate a prediction for a second healthcare service using a second machine learning model trained for that service because Cha teaches training predictive models on labeled outcome data and deploying trained models, and it is a routine design choice to train separate models for different operating environments to account for differing data distributions and improve predictive accuracy. It would have been obvious to indicate the second healthcare service when its prediction satisfies criteria compared to another service's prediction because Roots teaches evaluating multiple candidate providers/services and recommending/indicating an optimal provider/service based on predictive scoring and criteria. It would have been obvious to use Cha's predicted outcomes/risks as the basis for such a decision because, in the combination, each element merely would have performed the same function as it did separately, and one of ordinary skill in the art would have recognized that the results of the combination were predictable.

As per the teachings of claim 7 -- The method of claim 5, further comprising: generating, using the first machine learning model, a third prediction indicating one or more referral outcomes based on the second referral condition, wherein the third prediction is generated by providing an indication of a second healthcare service as input to the first machine learning model; and indicating the second healthcare service in response to determining that the third prediction satisfies one or more criteria, as compared to the second prediction.

Claim 7 teaches substantially similar limitations as claim 1, and the reasons for rejection are incorporated herein. Cha further teaches generating risk predictions for a patient using a machine learning model based on electronic patient data and using the resulting prediction outputs to execute patient workflows (Cha; Abstract). Furthermore, Cha para. [0036] teaches a number of data sources (reads on "second patient referral").

It would have been obvious to generate a "third prediction" by providing an indication of a second healthcare service as an input to the machine learning model because adding a service identifier as an additional input feature is a known and predictable alternative to maintaining separate models, and Cha's trained model framework permits inclusion of additional input variables/features. It would have been obvious to indicate the second healthcare service based on criteria comparing the service-conditioned prediction to another prediction because Roots teaches selecting and indicating the recommended provider/service by comparing candidate scores/outputs against criteria to choose an optimal match. It would have been obvious to use Cha's predicted outcomes/risks as the basis for such a decision because, in the combination, each element merely would have performed the same function as it did separately, and one of ordinary skill in the art would have recognized that the results of the combination were predictable.

As per claim 8, Cha in view of Roots does not expressly teach -- The method of claim 5, further comprising: determining one or more modifications to the first healthcare service that, if implemented, would improve predicted referral outcomes for the second referral condition; and outputting an indication of the one or more modifications. However, the teachings are an obvious variant of Cha in view of Roots. It would have been obvious to determine and output one or more "modifications" to the first healthcare service that would improve predicted outcomes because, once a system produces predicted outcome metrics per Cha, it is a predictable extension of decision support to recommend operational adjustments intended to improve those predicted metrics, and Roots teaches optimization logic aimed at achieving improved results by choosing the best option among alternatives.
Accordingly, recommending modifications based on predicted outcomes is an obvious use of the combined teachings of Cha in view of Roots.

As per claim 9, Cha teaches a method, comprising:

accessing a first set of patient referrals to a first healthcare service, each respective patient referral indicating a respective referral condition (Cha; paras. [0046] the system may run scheduled queries or processes to pull raw patient data from the data sources; [0161] data sources 450 may comprise any system that stores electronic patient data); patient data in the stored electronic patient data reads on "a first patient referral";

determining first outcome data comprising (Cha; para. [0025] determine predictive features for a given patient outcome from electronic patient data and/or to determine the importance of such features);

training a first machine learning model to predict one or more referral outcomes based on the first set of patient referrals and the first outcome data (Cha; paras. [0025], [0092], [0093] employ machine learning models to determine predictive features for a given patient outcome from electronic patient data and/or to determine the importance of such features); and

deploying the first machine learning model to process new patient referrals (Cha; para. [0025] employ machine learning models to determine predictive features for a given patient outcome from electronic patient data and/or to determine the importance of such features).

Cha does not expressly teach outcome data comprising, for each respective patient referral of the first set of patient referrals, one or more respective referral outcomes of a corresponding patient after transitioning to the first healthcare service, or deploying the machine learning model to process the new patient referral to the first healthcare service. However, this is an obvious variant of the Cha in view of Roots teachings. Roots teaches a patient-provider matching system (Roots; paras. [0017], [0018]).
It would have been an obvious variant in the health care analytics field to modify the Roots patient-provider matching system to associate outcomes with the service actually used. One of ordinary skill in the art would have been motivated to modify the Roots teaching to build the most accurate machine learning model.

As per claim 10, Cha teaches the method of claim 9, wherein training the first machine learning model comprises, for a first patient referral of the first set of patient referrals: generating a predicted referral outcome based on processing a first referral condition of the first patient referral using the first machine learning model; determining a difference between the predicted referral outcome and a first referral outcome of the first patient referral; and updating one or more parameters of the first machine learning model based on the difference (Cha; paras. [0025], [0092], [0093] employ machine learning models to determine predictive features for a given patient outcome from electronic patient data and/or to determine the importance of such features). Cha teaches machine learning predictive analytics based on historical data to identify the likelihood of future outcomes. Training machine learning models involves minimizing prediction error (the difference between predicted and actual outcomes) and updating model parameters/weights accordingly.

As per claim 11, Cha teaches the method of claim 10, further comprising accessing a set of patient demographics for a first patient indicated by the first patient referral, wherein generating the predicted referral outcome is based further on processing the set of patient demographics using the first machine learning model (Cha; paras. [0046] the system may run scheduled queries or processes to pull raw patient data from the data sources; [0161] data sources 450 may comprise any system that stores electronic patient data). Demographics are a conventional subset of electronic patient data features.
As per claim 12, Cha teaches the method of claim 9, wherein the first outcome data comprises, for each respective patient referral, at least one of: (i) a respective indication of whether the corresponding patient recovered from a respective referral condition, (ii) a respective recovery timeline indicating a length of time until the corresponding patient recovered from a respective referral condition, or (iii) a respective indication of whether the corresponding patient was hospitalized while being treated by the first healthcare service. Cha para. [0025] teaches determining predictive features for patient outcomes. Selecting specific outcomes is an obvious configuration of target variables within Cha's outcome prediction framework. Furthermore, claim 12 merely specifies the intended use of the "outcome data" by identifying clinical outcome categories (recovery, recovery timeline, hospitalization). The claim does not recite any specific technique for modeling these outcomes. Therefore, the recited outcome categories are non-limiting statements of intended result or field of use and do not impose a meaningful limitation on the claimed training of the machine learning model.

As per claim 13, Cha does not expressly teach the method of claim 9, wherein training the first machine learning model to predict one or more referral outcomes comprises training the first machine learning model to predict a plurality of referral outcomes. However, this is an obvious variant of the Cha teachings. Cha paras. [0025], [0092], [0093] teach a system/method to employ machine learning models to determine predictive features for a given patient outcome from electronic patient data and/or to determine the importance of such features. Modifying the Cha teachings to predict one or more referral outcomes is an obvious variation for clinical decision support given Cha's outcome prediction framework.
As per claim 14, Cha does not expressly teach the method of claim 9, further comprising: accessing a second set of patient referrals to a second healthcare service; determining second outcome data for each respective patient referral of the second set of patient referrals; and training a second machine learning model to predict one or more referral outcomes based on the second set of patient referrals and the second outcome data. However, this is an obvious variant of the Cha teachings. Cha paras. [0025], [0092], [0093] teach a system/method to employ machine learning models to determine predictive features for a given patient outcome from electronic patient data and/or to determine the importance of such features. Modifying the Cha teachings to predict one or more referral outcomes is an obvious variation for clinical decision support given Cha's outcome prediction framework.

As per the teachings of claim 15 -- The method of claim 9, further comprising: accessing a second set of patient referrals to a second healthcare service; determining second outcome data for each respective patient referral of the second set of patient referrals; and training the first machine learning model to predict one or more referral outcomes based on the second set of patient referrals and the second outcome data while using an indication of the second healthcare service as input to the first machine learning model.

Claim 15 teaches substantially similar limitations as claim 9, and the reasons for rejection are incorporated herein. Cha para. [0036] teaches a number of data sources (reads on "second patient referral" and "second healthcare service"). It would have been obvious to apply Cha's risk prediction framework based on the second set of patient referrals and the second outcome data (i.e., electronic patient data for another patient) because Cha's techniques are applicable to repeated processing of multiple patient records.
It would have been obvious to train the machine learning model to predict one or more referral outcomes based on the second patient referral because Roots teaches evaluating suitability and recommending/matching patients to appropriate providers/services, because adding a service identifier as an additional input feature is a known and predictable alternative to maintaining separate models, and because Cha's trained model framework permits inclusion of additional input variables/features. It would have been obvious to indicate the second healthcare service based on criteria comparing the service-conditioned prediction to another prediction because Roots teaches selecting and indicating the recommended provider/service by comparing candidate scores/outputs against criteria to choose an optimal match. It would have been obvious to use Cha's predicted outcomes/risks as the basis for such a decision because, in the combination, each element merely would have performed the same function as it did separately, and one of ordinary skill in the art would have recognized that the results of the combination were predictable.

Claims 16-20 repeat substantially similar limitations as claims 1-2 and 6-8, but in the form of a non-transitory computer-readable medium, and the reasons for rejection are incorporated herein.

Conclusion

The prior art made of record and not relied upon is considered pertinent to applicant's disclosure. Redlus (CA3095006A1), the closest foreign prior art of record, teaches a method that provides patients with clinician referrals based on a health assessment from users.
The method applies a trained model to the feature vector to generate a list of candidate treating clinicians who have optimally treated patients whose clinician selection characteristics and determined health characteristics correlate with the health assessment from the user. Abdel-Hafez (Abdel-Hafez A, Jones M, Ebrahimabadi M, Ryan C, Graham S, Slee N and Whitfield B. "Artificial intelligence in medical referrals triage based on Clinical Prioritization Criteria." Front. Digit. Health 5:1192975. doi: 10.3389/fdgth.2023.1192975. (2023)), the closest non-patent literature of record, teaches artificial intelligence in medical referrals triage based on Clinical Prioritization Criteria.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to LINH GIANG MICHELLE LE, whose telephone number is (571) 272-8207. The examiner can normally be reached Mon-Fri, 8:30am-5:30pm PST. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, JASON DUNHAM, can be reached at 571-272-8109. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format.
For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

LINH GIANG "MICHELLE" LE
Primary Examiner
Art Unit 3686

/LINH GIANG LE/
Primary Examiner, Art Unit 3686
2/12/2026
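The claim 10 training steps the Office Action addresses (generate a predicted outcome, determine the difference between predicted and actual, update parameters based on the difference) can be sketched as a minimal gradient-descent loop. This is an illustrative toy, not Cha's or the applicant's actual model; the feature values, target, and learning rate are all assumptions:

```python
# Sketch of the claimed train-by-difference loop: predict, measure the
# difference from the actual referral outcome, and update the model
# parameters to shrink that difference (here, squared-error gradient descent
# on a linear model with illustrative inputs).

def train_step(weights, features, actual, lr=0.01):
    predicted = sum(w * x for w, x in zip(weights, features))
    error = predicted - actual                   # the claimed "difference"
    grad = [2 * error * x for x in features]     # gradient of error**2 w.r.t. each weight
    return [w - lr * g for w, g in zip(weights, grad)]

weights = [0.0, 0.0]
for _ in range(200):
    weights = train_step(weights, features=[1.0, 2.0], actual=1.0)
# After training, the prediction for this example converges toward 1.0.
```

The point of the sketch is only that "updating parameters based on the difference" describes ordinary supervised training, which is the premise of the examiner's claim 10 reasoning.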


Prosecution Timeline

Jan 14, 2025
Application Filed
Feb 20, 2026
Non-Final Rejection — §101, §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12597522
METHOD AND SYSTEM FOR MANAGING PRESSURE ULCERS AND COMPUTING DEVICE FOR EXECUTING THE SAME
2y 5m to grant Granted Apr 07, 2026
Patent 12580066
ARTIFICIAL INTELLIGENCE SYSTEM ON AN ELECTRONIC DEVICE
2y 5m to grant Granted Mar 17, 2026
Patent 12573501
SMART-PORT MULTIFUNCTIONAL READER/IDENTIFIER IN A PRODUCT STERILIZATION CYCLE
2y 5m to grant Granted Mar 10, 2026
Patent 12567484
AUTOMATIC MEDICAL DEVICE PATIENT REGISTRATION
2y 5m to grant Granted Mar 03, 2026
Patent 12548650
METHODS AND SYSTEMS FOR PERFORMING DOSE TITRATION
2y 5m to grant Granted Feb 10, 2026
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

1-2
Expected OA Rounds
66%
Grant Probability
61%
With Interview (-5.2%)
3y 6m
Median Time to Grant
Low
PTA Risk
Based on 675 resolved cases by this examiner. Grant probability derived from career allow rate.
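The headline projections follow directly from the career totals shown above: grant probability is the career allow rate, and the interview figure applies the −5.2 point lift. A quick check (variable names are illustrative):

```python
# Reproduce the projection figures from the examiner's career totals.
granted, resolved = 444, 675
base = granted / resolved            # career allow rate, displayed as 66%
with_interview = base - 0.052        # applying the -5.2 point interview lift
print(f"base {base:.0%}, with interview {with_interview:.0%}")
```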
