DETAILED ACTION
Notices to Applicant
This communication is a Non-Final Office Action on the merits. Claims 1-17, as filed on 02/17/2026, are currently pending and have been considered below.
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Continued Examination Under 37 CFR 1.114
A request for continued examination under 37 CFR 1.114, including the fee set forth in 37 CFR 1.17(e), was filed in this application after final rejection. Since this application is eligible for continued examination under 37 CFR 1.114, and the fee set forth in 37 CFR 1.17(e) has been timely paid, the finality of the previous Office action has been withdrawn pursuant to 37 CFR 1.114. Applicant's submission filed on 02/17/2026 has been entered.
Claim Interpretation
The following is a quotation of 35 U.S.C. 112(f):
(f) Element in Claim for a Combination. – An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.
The following is a quotation of pre-AIA 35 U.S.C. 112, sixth paragraph:
An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.
The claims in this application are given their broadest reasonable interpretation using the plain meaning of the claim language in light of the specification as it would be understood by one of ordinary skill in the art. The broadest reasonable interpretation of a claim element (also commonly referred to as a claim limitation) is limited by the description in the specification when 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is invoked.
As explained in MPEP § 2181, subsection I, claim limitations that meet the following three-prong test will be interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph:
(A) the claim limitation uses the term “means” or “step” or a term used as a substitute for “means” that is a generic placeholder (also called a nonce term or a non-structural term having no specific structural meaning) for performing the claimed function;
(B) the term “means” or “step” or the generic placeholder is modified by functional language, typically, but not always linked by the transition word “for” (e.g., “means for”) or another linking word or phrase, such as “configured to” or “so that”; and
(C) the term “means” or “step” or the generic placeholder is not modified by sufficient structure, material, or acts for performing the claimed function.
Use of the word “means” (or “step”) in a claim with functional language creates a rebuttable presumption that the claim limitation is to be treated in accordance with 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. The presumption that the claim limitation is interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is rebutted when the claim limitation recites sufficient structure, material, or acts to entirely perform the recited function.
Absence of the word “means” (or “step”) in a claim creates a rebuttable presumption that the claim limitation is not to be treated in accordance with 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. The presumption that the claim limitation is not interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is rebutted when the claim limitation recites function without reciting sufficient structure, material or acts to entirely perform the recited function.
Claim limitations in this application that use the word “means” (or “step”) are being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, except as otherwise indicated in an Office action. Conversely, claim limitations in this application that do not use the word “means” (or “step”) are not being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, except as otherwise indicated in an Office action.
This application includes one or more claim limitations that do not use the word “means,” but are nonetheless being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, because the claim limitation uses a generic placeholder that is coupled with functional language without reciting sufficient structure to perform the recited function and the generic placeholder is not preceded by a structural modifier. Such claim limitation is: “an error detection module” in claims 1 and 11.
Because this claim limitation is being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, it is being interpreted to cover the corresponding structure described in the specification as performing the claimed function, and equivalents thereof.
If applicant does not intend to have this limitation interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, applicant may: (1) amend the claim limitation to avoid it being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph (e.g., by reciting sufficient structure to perform the claimed function); or (2) present a sufficient showing that the claim limitation recites sufficient structure to perform the claimed function so as to avoid it being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph.
The present Application Specification links the claimed functions to the structure of an engine that “includes a module (808) to prevent and detect errors, incorporating robust guardrails, which is explained in detail in Figure 9,” and discloses that “an error detection module triages potential errors through either automated verification or human verification.” See [0094], [0098]. Accordingly, the module is interpreted as being linked to the structure of the engine for performing the claimed functions.
Claim Rejections - 35 USC § 101
35 U.S.C. 101 reads as follows:
Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.
Claims 1-17 are rejected under 35 U.S.C. 101 because the claimed invention is directed to a judicial exception (i.e., an abstract idea) without significantly more.
Claims 1-10 are drawn to a method for submitting an authorization request, which is within the four statutory categories (i.e., a method).
Independent Claim 1 is rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more. Claim 1 recites (additional elements bolded):
1. A method for submitting an authorization request, the method comprising:
(a) authenticating a first user on a platform via a secure communications interface of a computing device;
(b) creating or selecting, by the first user, a service request for a second user via a provider-side user interface;
(c) retrieving, by an engine of the platform, requirements for the service request according to at least one policy of a second user, from a policy database, wherein the at least one policy is disaggregated into disambiguated criteria including inclusion criteria, exclusion criteria, and exception rules, and assigning or concatenating criteria with logical operators, and storing the disambiguated criteria in the form of checklists of criteria that are machine comprehensible, wherein the disambiguated criteria are stored in the platform's database and are retrieved by the engine to generate a list of coverage criteria including inclusion decision logic, and wherein the engine applies the disambiguated criteria to EHR data as inputs and returns lists of policy criteria matched with patient data;
(d) retrieving, by the platform, via APIs and direct integration, and from at least one electronic health record (EHR) system, data that is pertinent to evaluating the at least one policy, wherein retrieving the data comprises reading and retrieving individual records, including clinical notes, medical history, lab reports, and imaging results, independently of file format and structure;
(e) matching, by the engine, each disambiguated criterion against the retrieved data, including the individual records, to generate an evaluation including, for each disambiguated criterion: (i) a binary result marked as met or unmet for the disambiguated criterion, and (ii) source information identifying where the disambiguated criterion is found in the EHR data;
(f) pre-filling patient information into a prior authorization request form by the engine based on the binary result, wherein, as supporting evidence, the engine includes both a summary of each data point and an attachment with a full EHR document corresponding to the individual record;
(g) generating, using a trained machine learning model hosted on the platform, a prediction of a likelihood that the service request will be denied or approved based on the retrieved data, generating the prediction includes applying at least one guardrail that filters or flags an output of the trained machine learning model based on at least one of: protected health information; an inconsistency or inaccuracy in the retrieved data or in the output; or biased language, wherein generating the prediction comprises performing a stratified cohort analysis over longitudinal medical records by comparing a target patient profile to records of patients that underwent similar care pathways, wherein the stratified cohort analysis stratifies by at least two of diagnoses, comorbidities, prior conservative therapies, and demographics, and computes an outcome metric comprising at least one of a 90-day readmission rate or care plan adherence, and determining the likelihood based at least in part on the computed outcome metric;
(h) determining, by the platform, whether the at least one policy and the data needs to be updated based on changes in longitudinal patient data or policy revision history;
(i) upon determining the at least one policy and the data needs to be updated, automatically updating, by the platform, the at least one policy and the data including retrieving latest updated policies from a payer policy repository and repeating disaggregation of each updated policy into the disambiguated criteria;
(j) performing, by the platform, a final automated evaluation of the service request to flag missing information that may result in denial;
(k) after flagging the missing information, receiving, from the first user, changes to information in the request form and/or additional documentation for the service request and repeating at least steps (d)-(j);
(l) performing, by an error detection module, error triage for the evaluation and the prediction generated by the platform, wherein the error detection module triages potential errors through: (1) automated verification using rule-based detection mechanisms and AI-based algorithms to detect at least one of hallucinated information, inconsistencies or inaccuracies, or incorrectly encoded logical expressions in an output of the engine or the trained machine learning model, and (2) routing a flagged output to human verification, wherein confirmed errors are logged for use in adjusting the platform's guardrails and/or a subsequent model iteration; and
(m) upon confirmed submission, transmitting the service request and the prediction to the second user via a secure connection channeled through a communication network.
The claim limitations, as drafted, recite a method that, under its broadest reasonable interpretation, covers managing personal behavior or interactions between people through rules or instructions but for the recitation of generic computer components. That is, other than reciting the above bolded language, for example “on a platform via a secure communications interface of a computing device,” “via a provider-side user interface,” “by an engine of the platform,” “from a policy database,” “by the platform, via APIs and direct integration, and from at least one electronic health record (EHR) system,” “using a trained machine learning model hosted on the platform,” “a payer policy repository,” “by an error detection module,” and “transmitting … via a secure connection channeled through a communication network,” nothing in the claim precludes the steps from managing personal behavior or interactions between people through rules or instructions. For example, but for the above bolded language, (a) authenticating a first user; (b) creating or selecting, by the first user, a service request for a second user; (c) retrieving requirements for the service request according to at least one policy of a second user, wherein the at least one policy is disaggregated into disambiguated criteria including inclusion criteria, exclusion criteria, and exception rules, and assigning or concatenating criteria with logical operators, and storing the disambiguated criteria in the form of checklists of criteria that are machine comprehensible to generate a list of coverage criteria including inclusion decision logic, and applying the disambiguated criteria to EHR data as inputs and returning lists of policy criteria matched with patient data; (d) retrieving data that is pertinent to evaluating the at least one policy, wherein retrieving the data comprises reading and retrieving individual records, including clinical notes, medical history, lab reports, and imaging results, independently of file
format and structure; (e) matching each disambiguated criterion against the retrieved data, including the individual records, to generate an evaluation including, for each disambiguated criterion: (i) a binary result marked as met or unmet for the disambiguated criterion, and (ii) source information identifying where the disambiguated criterion is found in the EHR data; (f) pre-filling patient information into a prior authorization request form based on the binary result, wherein the supporting evidence includes both a summary of each data point and an attachment with a full EHR document corresponding to the individual record; (g) generating a prediction of a likelihood that the service request will be denied or approved based on the retrieved data, wherein generating the prediction includes applying at least one guardrail that filters or flags an output based on at least one of: protected health information; an inconsistency or inaccuracy in the retrieved data or in the output; or biased language, wherein generating the prediction comprises performing a stratified cohort analysis over longitudinal medical records by comparing a target patient profile to records of patients that underwent similar care pathways, wherein the stratified cohort analysis stratifies by at least two of diagnoses, comorbidities, prior conservative therapies, and demographics, and computes an outcome metric comprising at least one of a 90-day readmission rate or care plan adherence, and determining the likelihood based at least in part on the computed outcome metric; (h) determining whether the at least one policy and the data needs to be updated based on changes in longitudinal patient data or policy revision history; (i) upon determining the at least one policy and the data needs to be updated, automatically updating the at least one policy and the data including retrieving latest updated policies from a payer policy repository and repeating disaggregation of each updated policy into the
disambiguated criteria; (j) performing a final automated evaluation of the service request to flag missing information that may result in denial; (k) after flagging the missing information, receiving, from the first user, changes to information in the request form and/or additional documentation for the service request and repeating at least steps (d)-(j); (l) performing error triage for the evaluation and the prediction generated, wherein potential errors are triaged through: (1) automated verification using rule-based detection mechanisms and AI-based algorithms to detect at least one of hallucinated information, inconsistencies or inaccuracies, or incorrectly encoded logical expressions in an output, and (2) routing a flagged output to human verification, wherein confirmed errors are logged for use in adjusting the platform's guardrails and/or a subsequent model iteration, in the context of the claim, encompass rules or instructions for managing personal behavior or interactions between people for submitting an authorization request based on user and policy data. If a claim limitation, under its broadest reasonable interpretation, covers managing personal behavior or interactions between people through rules or instructions but for the recitation of generic computer components, then it falls within the “Certain Methods of Organizing Human Activity” grouping of abstract ideas. Further, at least claim limitations (a), (e), (g), (h), (j), and (l) recite steps that may practically be performed in the mind but for the recitation of generic computer components, such that the claim is also directed to the abstract idea of “Mental Processes.” Accordingly, the claim recites an abstract idea.
This judicial exception is not integrated into a practical application. In particular, the claim only recites the above bolded additional elements, for example “on a platform via a secure communications interface of a computing device,” “via a provider-side user interface,” “by an engine of the platform,” “from a policy database,” “by the platform, via APIs and direct integration, and from at least one electronic health record (EHR) system,” “using a trained machine learning model hosted on the platform,” “a payer policy repository,” “by an error detection module,” and “transmitting … via a secure connection channeled through a communication network,” to perform the claim limitations. The additional elements in each of the steps are recited at a high level of generality (i.e., a platform computer system including an engine/central processing unit, a memory, and an interconnect bus and platform engine/AI engine, and any suitable database system; and a communication network such as the internet; an EHR system and database with API exposed by web browser; and a machine learning model such as a trained machine learning algorithm, as they relate to general purpose computers (Application Specification [0026], [0078], [0081], [0088], [00104]-[00106], [00110], [00113])). As such, the limitations amount to no more than mere instructions to implement an abstract idea on a computer or other machinery in its ordinary capacity, or merely use a computer or other machinery in its ordinary capacity as a tool to perform an abstract idea. See MPEP 2106.05(f)(2). Further, the additional element of transmitting via a secure connection channeled through a communications network amounts to mere data gathering and output recited at a high level of generality, and thus is insignificant extra-solution activity. See MPEP 2106.05(g) (“whether the limitation is significant”).
Accordingly, these additional elements do not integrate the abstract idea into a practical application because they do not impose any meaningful limits on practicing the abstract idea. The claim is directed to an abstract idea.
The claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception. As discussed above with respect to integration of the abstract idea into a practical application, the above bolded additional elements, for example “on a platform via a secure communications interface of a computing device,” “via a provider-side user interface,” “by an engine of the platform,” “from a policy database,” “by the platform, via APIs and direct integration, and from at least one electronic health record (EHR) system,” “using a trained machine learning model hosted on the platform,” “a payer policy repository,” “by an error detection module,” and “transmitting … via a secure connection channeled through a communication network,” to perform the claim limitations amount to no more than mere instructions to apply the exception using generic computer components (i.e., a platform computer system including an engine/central processing unit, a memory, and an interconnect bus and platform engine/AI engine, and any suitable database system; and a communication network such as the internet; an EHR system and database with API exposed by web browser; and a machine learning model such as a trained machine learning algorithm, as they relate to general purpose computers (Application Specification [0026], [0078], [0081], [0088], [00104]-[00106], [00110], [00113])). Mere instructions to apply an exception using a computer or other machinery in its ordinary capacity, or mere use of a computer or other machinery in its ordinary capacity as a tool to perform an abstract idea, cannot provide an inventive concept. See MPEP 2106.05(f)(2). Further, the additional element of transmitting via a secure connection channeled through a communication network amounts to receiving or transmitting data over a network, which is well-understood, routine, conventional activity. See MPEP 2106.05(d), subsection II. The claim is not patent eligible.
Dependent claims 2-10 include limitations of the independent claim and are directed to the same abstract idea as discussed above and incorporated herein. The dependent claims are rejected under 35 U.S.C. § 101 because they are directed to non-statutory subject matter. These additional claims recite what the data is and how it is analyzed. These information characteristics do not integrate the judicial exception into a practical application, and, when viewed individually or as a whole, they do not add anything substantial beyond the abstract idea(s). Dependent claims 2-3 recite the additional element of “a computing device,” and claim 6 recites “a remote computing device” (i.e., a mobile computing device, Application Specification at [00108]) and “a secure connection channel to send the data through a communication network,” as recited in claim 3 (i.e., input/output interfaces for communication and/or a transceiver for data communications via the network, such as any suitable devices for linking communication, Application Specification at [00106]). As such, these claims recite limitations that amount to no more than mere instructions to apply an exception using a computer or other machinery in its ordinary capacity, or mere use of a computer or other machinery in its ordinary capacity as a tool to perform an abstract idea, which cannot provide an inventive concept. See MPEP 2106.05(f)(2). Furthermore, the combination of elements does not indicate a significant improvement to the functioning of a computer or any other technology. Therefore, the dependent claims are rejected under 35 U.S.C. § 101.
Claims 11-17 are drawn to a method for reviewing an authorization request, which is within the four statutory categories (i.e., a method).
Independent Claim 11 is rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more. Claim 11 recites (additional elements bolded):
11. A method for reviewing an authorization request, the method comprising:
(a) authenticating a first user on a platform via a secure communications interface of a computer device;
(b) determining whether a service request requires a decision from the first user;
(c) upon determining the service request requires the decision by the first user, retrieving by an engine, at least one policy of the first user for the service request from a policy database, wherein the at least one policy is disaggregated into disambiguated criteria including inclusion criteria, exclusion criteria, and exception rules, and assigning or concatenating criteria with logical operators, and storing the disambiguated criteria in the form of checklists of criteria that are machine comprehensible, wherein the engine applies the disambiguated criteria to EHR data as inputs and returns lists of policy criteria matched with patient data;
(d) retrieving, via APIs and direct integration, data associated with the service request, including patient profile data retrieved from an EHR system;
(e) matching, by the engine, each disambiguated criterion against the retrieved data, including the individual records, to generate an evaluation including, for each disambiguated criterion: (i) a binary result marked as met or unmet for the disambiguated criterion, and (ii) source information identifying where the disambiguated criterion is found in the EHR data;
(f) pre-filling patient information into a prior authorization request form by the engine based on the binary result, wherein, as supporting evidence, the engine includes both a summary of each data point and an attachment with a full EHR document corresponding to the individual record;
(g) determining whether the at least one policy and the data needs to be updated based on changes in longitudinal patient data or policy revision history;
(h) upon determining the at least one policy and the data needs to be updated, automatically updating, by the platform, the at least one policy and the data, including retrieving latest updated policies from a payer policy repository and repeating disaggregation of each updated policy into the disambiguated criteria;
(i) evaluating, using a trained machine learning model trained on longitudinal patient data, the at least one policy against data supplied with the service request, wherein the evaluating includes applying at least one guardrail that filters or flags an output of the trained machine learning model based on at least one of: protected health information; an inconsistency or inaccuracy in the data associated with the service request or in the output; or biased language, wherein evaluating comprises performing a stratified cohort analysis over longitudinal medical records by comparing a target patient profile to records of patients that underwent similar care pathways, wherein the stratified cohort analysis stratifies by at least two of diagnoses, comorbidities, prior conservative therapies, and demographics, and computes an outcome metric comprising at least one of a 90-day readmission rate or care plan adherence, and generating the evaluation based at least in part on the computed outcome metric;
(j) generating, based on the evaluation, a recommendation on whether to approve or deny the service request, the recommendation including a supporting rationale and optionally suggesting one or more alternative care pathways;
(k) performing, by an error detection module, error triage for the evaluation and the recommendation generated by the platform, wherein the error detection module triages potential errors through: (1) automated verification using rule-based detection mechanisms and AI-based algorithms to detect at least one of hallucinated information, inconsistencies or inaccuracies, or incorrectly encoded logical expressions in an output of the engine or the trained machine learning model, and (2) routing a flagged output to human verification, wherein confirmed errors are logged for use in adjusting the platform's guardrails and/or a subsequent model iteration;
(l) sending the recommendation to a second user, via a secure connection channeled through a communications network.
The claim limitations, as drafted, recite a method that, under its broadest reasonable interpretation, covers managing personal behavior or interactions between people through rules or instructions but for the recitation of generic computer components. That is, other than reciting the above bolded language, for example “a platform via a secure communications interface of a computer device,” “by an engine,” “from a policy database,” “via APIs and direct integration,” “from an EHR system,” “a payer policy repository,” “using a trained machine learning model,” “an error detection module,” and sending “via a secure connection channeled through a communication network,” nothing in the claim precludes the steps from managing personal behavior or interactions between people through rules or instructions. For example, but for the above bolded language, (a) authenticating a first user; (b) determining whether a service request requires a decision from the first user; (c) upon determining the service request requires the decision by the first user, retrieving at least one policy of the first user for the service request, wherein the at least one policy is disaggregated into disambiguated criteria including inclusion criteria, exclusion criteria, and exception rules, and assigning or concatenating criteria with logical operators, and storing the disambiguated criteria in the form of checklists of criteria that are machine comprehensible, and applying the disambiguated criteria to EHR data as inputs and returning lists of policy criteria matched with patient data; (d) retrieving data associated with the service request, including patient profile data; (e) matching each disambiguated criterion against the retrieved data, including the individual records, to generate an evaluation including, for each disambiguated criterion: (i) a binary result marked as met or unmet for the disambiguated criterion, and (ii) source information identifying where the disambiguated criterion is found in the
EHR data; (f) pre-filling patient information into a prior authorization request form based on the binary result, wherein, as supporting evidence, the engine includes both a summary of each data point and an attachment with a full EHR document corresponding to the individual record; (g) determining whether the at least one policy and the data needs to be updated based on changes in longitudinal patient data or policy revision history; (h) upon determining the at least one policy and the data needs to be updated, automatically updating the at least one policy and the data, including retrieving latest updated policies and repeating disaggregation of each updated policy into the disambiguated criteria; (i) evaluating the at least one policy against data supplied with the service request, wherein the evaluating includes applying at least one guardrail that filters or flags an output based on at least one of: protected health information; an inconsistency or inaccuracy in the data associated with the service request or in the output; or biased language, wherein evaluating comprises performing a stratified cohort analysis over longitudinal medical records by comparing a target patient profile to records of patients that underwent similar care pathways, wherein the stratified cohort analysis stratifies by at least two of diagnoses, comorbidities, prior conservative therapies, and demographics, and computes an outcome metric comprising at least one of a 90-day readmission rate or care plan adherence, and generating the evaluation based at least in part on the computed outcome metric; (j) generating, based on the evaluation, a recommendation on whether to approve or deny the service request, the recommendation including a supporting rationale and optionally suggesting one or more alternative care pathways; (k) performing error triage for the evaluation and the recommendation generated, wherein potential errors are triaged through: (1)
automated verification using rule-based detection mechanisms and AI-based algorithms to detect at least one of hallucinated information, inconsistencies or inaccuracies, or incorrectly encoded logical expressions in an output, and (2) routing a flagged output to human verification, wherein confirmed errors are logged for use in adjusting the platform's guardrails and/or a subsequent model iteration, in the context of the claim, encompass rules or instructions for managing personal behavior or interactions between people for reviewing an authorization request based on user and policy data. If a claim limitation, under its broadest reasonable interpretation, covers managing personal behavior or interactions between people through rules or instructions but for the recitation of generic computer components, then it falls within the “Certain Methods of Organizing Human Activity” grouping of abstract ideas. Further, at least claim limitations (a), (b), (d), (e), (i), (j), and (k) recite steps that may practically be performed in the mind but for the recitation of generic computer components, such that the claim is also directed to the abstract idea of “Mental Processes.” Accordingly, the claim recites an abstract idea.
This judicial exception is not integrated into a practical application. In particular, the claim only recites the above bolded additional elements, for example “a platform via a secure communications interface of a computer device,” “by an engine,” “from a policy database,” “via APIs and direct integration,” “from an EHR system,” “a payer policy repository,” “using a machine learning model trained,” “an error detection module,” and sending “via a secure connection channeled through a communication network,” to perform the claim limitations. The additional elements in each of the steps are recited at a high level of generality (i.e., a platform computer system including an engine/central processing unit, a memory, an interconnect bus, a platform engine/AI engine, and any suitable database system; a communication network such as the internet; an EHR system and database with an API exposed by a web browser; and a machine learning model such as a trained machine learning algorithm, as they relate to general purpose computers (Application Specification [0026], [0078], [0081], [0088], [00104]-[00106], [00110], [00113])). As such, the limitations amount to no more than mere instructions to implement an abstract idea on a computer or other machinery in its ordinary capacity, or merely use a computer or other machinery in its ordinary capacity as a tool to perform an abstract idea. See MPEP 2106.05(f)(2). Further, the additional element of transmitting via a secure connection channeled through a communications network amounts to mere data gathering and output recited at a high level of generality, and thus is insignificant extra-solution activity. See MPEP 2106.05(g) (“whether the limitation is significant”). Accordingly, these additional elements do not integrate the abstract idea into a practical application because they do not impose any meaningful limits on practicing the abstract idea. The claim is directed to an abstract idea.
The claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception. As discussed above with respect to integration of the abstract idea into a practical application, the above bolded additional elements, for example “a platform via a secure communications interface of a computer device,” “by an engine,” “from a policy database,” “via APIs and direct integration,” “from an EHR system,” “a payer policy repository,” “using a machine learning model trained,” “an error detection module,” and sending “via a secure connection channeled through a communication network,” to perform the claim limitations amount to no more than mere instructions to apply the exception using generic computer components (i.e., a platform computer system including an engine/central processing unit, a memory, an interconnect bus, a platform engine/AI engine, and any suitable database system; a communication network such as the internet; an EHR system and database with an API exposed by a web browser; and a machine learning model such as a trained machine learning algorithm, as they relate to general purpose computers (Application Specification [0026], [0078], [0081], [0088], [00104]-[00106], [00110], [00113])). Mere instructions to apply an exception using a computer or other machinery in its ordinary capacity, or mere use of a computer or other machinery in its ordinary capacity as a tool to perform an abstract idea, cannot provide an inventive concept. See MPEP 2106.05(f)(2). Further, the additional element of transmitting via a secure connection channeled through a communication network amounts to receiving or transmitting data over a network, which is well-understood, routine, and conventional activity. See MPEP 2106.05(d), subsection II. The claim is not patent eligible.
Dependent claims 12-17 include the limitations of the independent claim and are directed to the same abstract idea as discussed above and incorporated herein. The dependent claims are rejected under 35 U.S.C. § 101 because they are directed to non-statutory subject matter. These additional claims recite what the data is and how it is analyzed. These information characteristics do not integrate the judicial exception into a practical application, and, when viewed individually or as a whole, they do not add anything substantial beyond the abstract idea(s). Furthermore, the combination of elements does not indicate a significant improvement to the functioning of a computer or any other technology. Therefore, the dependent claims are rejected under 35 U.S.C. § 101.
Examiner Statement - 35 USC § 102/103
The closest prior art of record – U.S. Patent Application Pub. No. 2024/0029166 A1 (hereinafter “Tabak et al.”), U.S. Patent Application Pub. No. 2021/0357702 A1 (hereinafter “Jaw”), U.S. Patent Application Pub. No. 2016/0125149 A1 (hereinafter “Abramowitz”), U.S. Patent Application Pub. No. 2007/0038482 A1 (hereinafter “Herndon et al.”), and U.S. Patent Application Pub. No. 2023/0317260 A1 (hereinafter “Zahora et al.”) – fails to anticipate or otherwise render obvious the claimed invention as currently recited in independent claims 1 and 11, respectively. In particular, the closest prior art fails to teach, in an obvious combination at the time of the effective filing date, each limitation (a)-(m) in the particular ordered combination as currently recited in independent claim 1, and as substantially similarly recited in each limitation (a)-(l) of independent claim 11. Accordingly, claims 1-17 are free from the prior art.
Response to Arguments
Applicant's arguments filed 02/17/2026 have been fully considered but they are not persuasive. Applicant’s arguments will be addressed herein below in the order in which they appear in the response filed on 02/17/2026.
In the remarks, Applicant argues in substance that:
Regarding the claim objection of claim 17, Applicant argues that the amendments to the claim renders the objection moot;
Regarding the 112 rejection of claims 1-17, Applicant argues that the amendments to the claims render the rejection moot;
Regarding the 101 rejection of claims 1-17, Applicant argues that the claims recite patent eligible subject matter in light of the newly amended claim limitations; and
Regarding the 103 rejection of claims 1-17, Applicant argues that the prior cited prior art references fail to teach each of the newly amended claim limitations of independent claims 1 and 11.
In response to Applicant’s argument (a) regarding the claim objection of claim 17, Examiner is persuaded and has withdrawn the prior objection.
In response to Applicant’s argument (b) regarding the 112 rejection of claims 1-17, Examiner is persuaded and has withdrawn the prior 112 rejections.
In response to Applicant’s argument (c) regarding the 101 rejection of claims 1-17, Examiner respectfully disagrees.
First, Examiner respectfully submits that, but for the recitation of general purpose computer components, the claim limitations are directed to an abstract idea of at least Certain Methods of Organizing Human Activity and/or a Mental Process of submitting and reviewing authorization requests. That is, the claim limitations (for example, as per claim 1) (a)-(l) amount to rules or instructions for managing personal behavior or interactions between people for submitting and reviewing authorization requests but for the recitation of generic computer components. Applicant argues in the remarks that the claims are not directed to a Mental Process. Examiner notes that the 101 rejection is primarily focused on the abstract idea of Certain Methods of Organizing Human Activity, but some limitations, for example limitations (a), (e), (g), (h), (j), and (l), also fall under Mental Processes. Applicant makes a series of remarks asserting that the disambiguated criteria cannot be generated mentally. Examiner respectfully submits that limitation (c) recites rules or instructions for managing personal behavior and interactions between people: first, retrieving requirements according to at least one policy of a second user and disaggregating the policy into disambiguated criteria that are related to the abstract idea of the policy requirements; and further, assigning logical operators to these criteria is determining rules or instructions for analysis to generate a list of coverage criteria. The limitation merely invokes the engine of the platform and a policy database for performing the rules or instructions, which are generally linked to a computing environment as “machine comprehensible,” and EHR data. Further, limitation (d) recites APIs and data integration at a high level such that it invokes general purpose components for performing data retrieval at a high level of generality independent of file format and structure.
Examiner respectfully submits that the claims are directed to rules or instructions for how the policies are analyzed, and merely invoke high-level recitations of computing components and elements of a computing environment by or under which the abstract idea is performed, such that the claims remain directed to an abstract idea(s) under Step 2A, Prong One.
Second, Applicant argues that the claims recite technical constraints beyond high-level evaluation and insignificant post-solution activity. See Remarks at pgs. 11-12. Examiner respectfully disagrees with Applicant’s analysis. That is, the “technical constraints,” or additional elements of the claim, are recited at a high level of generality (i.e., a platform computer system including an engine/central processing unit, a memory, an interconnect bus, a platform engine/AI engine, and any suitable database system; a communication network such as the internet; an EHR system and database with an API exposed by a web browser; and a machine learning model such as a trained machine learning algorithm, as they relate to general purpose computers (Application Specification [0026], [0078], [0081], [0088], [00104]-[00106], [00110], [00113])). As such, the limitations amount to no more than mere instructions to implement an abstract idea on a computer or other machinery in its ordinary capacity, or merely use a computer or other machinery in its ordinary capacity as a tool to perform an abstract idea. See MPEP 2106.05(f)(2). For example, the claim recites retrieving data “via APIs and data integration … independently of file format and structure” such that this limitation merely invokes APIs for data retrieval under a computing environment, but does not recite how this is done in any manner beyond an “apply it” standard of invoking a general purpose computer to perform data integration in an ordinary capacity in view of the present Application Specification. See, e.g., [0026], [00113]; MPEP 2106.05(f)(2). Further, the additional element of transmitting via a secure connection channeled through a communications network amounts to mere data gathering and output recited at a high level of generality, and thus is insignificant extra-solution activity. See MPEP 2106.05(g) (“whether the limitation is significant”).
Accordingly, these additional elements do not integrate the abstract idea into a practical application because they do not impose any meaningful limits on practicing the abstract idea. The claim is directed to an abstract idea.
The claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception. As discussed above with respect to integration of the abstract idea into a practical application, the additional elements of the claim are merely invoked to perform the limitations of the abstract idea under their ordinary capacity. See MPEP 2106.05(f)(2). Examiner further submits that limitations such as, for example, disaggregation of policy documents, per-criterion met/unmet outputs, evidence packaging, guardrails, and human error triage are all limitations directed to the abstract idea, but are merely performed using general purpose computer components to automate the evaluation rules or instructions under MPEP 2106.05(f)(2), and therefore are not analyzed under the well-understood, routine, and conventional activity analysis under Step 2B. The additional element of transmitting via a secure connection channeled through a communication network amounts to receiving or transmitting data over a network, which is well-understood, routine, and conventional activity. See MPEP 2106.05(d), subsection II. This element is analyzed under the well-understood, routine, and conventional activity analysis because the limitation is considered extra-solution activity under Step 2A, Prong 2.
Accordingly, Examiner respectfully maintains the 101 rejection of claims 1-17 as applied in the above Office Action.
In response to Applicant’s argument (d) regarding the 103 rejection of claims 1-17, Examiner is persuaded.
As discussed above in the Office Action, the closest prior art fails to anticipate or otherwise render obvious the claimed invention of the independent claims in the particular ordered combination as currently recited. Accordingly, Examiner has withdrawn the prior 103 rejections of claims 1-17.
Conclusion
The prior art made of record and not relied upon is considered pertinent to applicant's disclosure.
U.S. Patent Application Pub. No. 2023/0214455 A1 teaches that the AI/ML reactive claim processing system 800 may be deployed to automatically receive electronic claim remit data and automatically perform analytics and decisioning against a set of pre-stored client-specific rules applicable to one or more payment policies, which trigger automated modifications to the claims data and/or automated alerts for possible modifications to the claims data or additional procedures or services that are ordered or requested for the patient that, if performed, would increase the likelihood of the claim being approved upon resubmission ([0060]);
U.S. Patent Application Pub. No. 2024/0202520 A1 teaches methods and systems for selecting a machine learning model to predict a received authorization request (Abstract); and
U.S. Patent Application Pub. No. 2023/0084146 A1 teaches a method of predicting an outcome of a prior-authorization claim (Abstract).
Any inquiry concerning this communication or earlier communications from the examiner should be directed to ANTHONY BALAJ whose telephone number is (571)272-8181. The examiner can normally be reached 8:00 - 4:00 M-F.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Fonya Long can be reached at (571) 270-5096. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/A.M.B./Examiner, Art Unit 3682
/FONYA M LONG/Supervisory Patent Examiner, Art Unit 3682