Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
DETAILED ACTION
This is a reply to the application filed on 4/12/2026, in which claims 18-37 are pending. Claims 18, 24, and 34 are independent.
When making claim amendments, the applicant is encouraged to consider the references in their entireties, including those portions that have not been cited by the examiner, and their equivalents, as they may most broadly and appropriately apply to any anticipated claim amendments.
Information Disclosure Statement
The information disclosure statement (IDS) submitted is in compliance with the provisions of 37 CFR 1.97. Accordingly, the information disclosure statement is being considered by the examiner.
Drawings
The drawings filed on 4/12/2026 are accepted.
Specification
The disclosure filed on 4/12/2026 is accepted.
Claim Rejections - 35 USC § 101
35 U.S.C. 101 reads as follows:
Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.
Claims 18-37 are rejected under 35 U.S.C. 101 because the claimed invention is directed to a judicial exception (i.e., an abstract idea) without significantly more.
Claims 18-23 are drawn to a method for encoding and disambiguating information, which is within the four statutory categories (i.e., process).
Independent Claim 18 is rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more. Claim 18 recites:
18. A method for encoding and disambiguating information, the method comprising:
receiving data of a first user;
generating, using a machine learning model, at least one disaggregate criteria for the inclusion, exclusion, or exception of the data;
storing the disaggregate criteria into a machine comprehensible form for later retrieval.
The above limitations, as drafted, recite a method that, under its broadest reasonable interpretation, covers performance of the limitations in the mind but for the recitation of a generic machine learning model. That is, other than reciting the element of “using a machine learning model,” nothing in the claim precludes the steps from practically being performed in the mind. For example, but for the “using a machine learning model” language, receiving data, generating a disaggregate criteria, and storing the criteria, in the context of the claim, encompass observation, evaluation, judgment, and opinion of user and policy data. If a claim limitation, under its broadest reasonable interpretation, covers performance of the limitation in the mind, or with pen and paper, but for the recitation of generic computer components, then it falls within the “Mental Processes” grouping of abstract ideas. Accordingly, the claim recites an abstract idea. (Step 2A, prong 1)
This judicial exception is not integrated into a practical application. In particular, the claim only recites the additional element of “using a machine learning model” to perform the claim limitations. This additional element is recited at a high level of generality (i.e., a machine learning model, such as a trained machine learning algorithm, relates to a general-purpose computer (Application Specification Fig. 9, ¶96)). As such, the limitations amount to no more than mere instructions to implement an abstract idea on a computer, or merely use a computer as a tool to perform an abstract idea. See MPEP 2106.05(f). Accordingly, this additional element does not integrate the abstract idea into a practical application because it does not impose any meaningful limits on practicing the abstract idea. The claim is directed to an abstract idea. (Step 2A, prong 2)
The claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception. As discussed above with respect to integration of the abstract idea into a practical application, the additional element of “using a machine learning model” to perform the claim limitations amounts to no more than mere instructions to apply the exception using a generic computer component (i.e., a machine learning model relates to a general-purpose computer (Application Specification Fig. 9, ¶96)). Mere instructions to apply an exception using a generic computer component cannot provide an inventive concept. See MPEP 2106.05(f). Further, the additional elements of receiving/transmitting/storing data are well-understood, routine, conventional activity. See MPEP 2106.05(d), subsection II. The claim is not patent eligible. (Step 2B)
Dependent claims 19-23 include the limitations of the independent claim and are directed to the same abstract idea as discussed above and incorporated herein. The dependent claims are rejected under 35 U.S.C. § 101 because they are directed to non-statutory subject matter. These additional claims recite what the data is and how it is analyzed. These information characteristics do not integrate the judicial exception into a practical application, and, when viewed individually or as a whole, they do not add anything substantial beyond the observation, evaluation, judgment, and opinion of data. Dependent claims 19-23 recite the additional elements of “data includes a policy associated with at least one treatment; wherein the data is a human-readable document; wherein storing the disaggregate criteria comprises disaggregating at least one policy into lists of inclusion criteria, exclusion criteria, and exception rules; wherein storing the disaggregate criteria comprises assigning concatenating criteria with logical operators; and wherein storing the disaggregate criteria comprises storing the disaggregate criteria in a checklist.” Therefore, the dependent claims are rejected under 35 U.S.C. § 101.
Claims 24-37 are drawn to a method/system for encoding and disambiguating payer policies, which is within the four statutory categories (i.e., process/machine).
Independent Claim 34 (representative of independent claims 24 and 34) is rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more. Claim 34 recites:
34. A system for encoding and disambiguating payer policies, comprising:
an interface configured to receive, from repositories of health plans, one or more payer policies as human-readable documents;
at least one processor executing a machine learning model configured to generate, from the one or more payer policies, disaggregated criteria for inclusion, exclusion, or exception of members from coverage under the one or more payer policies;
a database storing the disaggregated criteria as machine-comprehensible simplified checklists of criteria organized by treatment for later retrieval.
The above limitations, as drafted, recite a system that, under its broadest reasonable interpretation, covers performance of the limitations in the mind but for the recitation of generic computer components. That is, other than reciting the elements of “interface configured to receive, from repositories,” “at least one processor executing a machine learning model,” and “a database,” nothing in the claim precludes the steps from practically being performed in the mind with the help of pen and paper. For example, but for that language, receiving data, generating disaggregated criteria, and storing the criteria, in the context of the claim, encompass observation, evaluation, judgment, and opinion of user and policy data. If a claim limitation, under its broadest reasonable interpretation, covers performance of the limitation in the mind with pen and paper but for the recitation of generic computer components, then it falls within the “Mental Processes” grouping of abstract ideas. Accordingly, the claim recites an abstract idea. (Step 2A, prong 1)
This judicial exception is not integrated into a practical application. In particular, the claim only recites the additional elements of “interface configured to receive, from repositories,” “at least one processor executing a machine learning model,” and “a database” to perform the claim limitations. These additional elements are recited at a high level of generality (i.e., a machine learning model, such as a trained machine learning algorithm, relates to a general-purpose computer (Application Specification Fig. 9, ¶96)). As such, the limitations amount to no more than mere instructions to implement an abstract idea on a computer, or merely use a computer as a tool to perform an abstract idea. See MPEP 2106.05(f). Accordingly, these additional elements do not integrate the abstract idea into a practical application because they do not impose any meaningful limits on practicing the abstract idea. The claim is directed to an abstract idea. (Step 2A, prong 2)
The claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception. As discussed above with respect to integration of the abstract idea into a practical application, the additional elements of “interface configured to receive, from repositories,” “at least one processor executing a machine learning model,” and “a database” to perform the claim limitations amount to no more than mere instructions to apply the exception using generic computer components (i.e., a machine learning model relates to a general-purpose computer (Application Specification Fig. 9, ¶96)). Mere instructions to apply an exception using a generic computer component cannot provide an inventive concept. See MPEP 2106.05(f). Further, the additional elements of receiving/transmitting/storing data are well-understood, routine, conventional activity. See MPEP 2106.05(d), subsection II. The claim is not patent eligible. (Step 2B)
Dependent claims 25-33 and 35-37 include the limitations of the independent claims and are directed to the same abstract idea as discussed above and incorporated herein. The dependent claims are rejected under 35 U.S.C. § 101 because they are directed to non-statutory subject matter. These additional claims recite what the data is and how it is analyzed. These information characteristics do not integrate the judicial exception into a practical application, and, when viewed individually or as a whole, they do not add anything substantial beyond the observation, evaluation, judgment, and opinion of data. Furthermore, the combination of elements does not indicate a significant improvement to the functioning of a computer or any other technology. Therefore, the dependent claims are rejected under 35 U.S.C. § 101.
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.
Claims 18-37 are rejected under 35 U.S.C. 103 as being unpatentable over US 20250364138 A1 (hereinafter ‘Mathur’) in view of US 20180165423 A1 (hereinafter ‘Esa’).
As regards claim 24, Mathur (US 20250364138 A1) discloses: A method for encoding and disambiguating payer policies, comprising: receiving, by a platform, one or more payer policies as human-readable documents from repositories of health plans; (Mathur, ¶8, ¶23, i.e., receiving treatment authorization request in human readable docs such as faxes, notes, EMRs, including policies and medical history)
generating, using a machine learning model, disaggregated criteria for inclusion, exclusion, or exception of members from coverage under the one or more payer policies; and (Mathur, Figs. 1-2B, ¶8, ¶23, ¶26-¶38, i.e., applying machine trained AI based on the policies to generate an output of finding whether medical criteria is met for authorizing a treatment plan)
Mathur in combination with Esa teaches: storing the disaggregated criteria in a database as machine-comprehensible simplified checklists of criteria organized by treatment for later retrieval. (Mathur, Figs. 1-2B, ¶38-¶40, i.e., AI generated decision clinical cards are later referenced thus implying storage. See, Esa, ¶119, i.e., the generated treatment plans are stored in a database for later retrieval)
Before the effective filing date of the claimed invention, it would have been obvious to one of ordinary skill in the art to modify Mathur to include storing generated treatment plans in a database for later retrieval as taught by Esa, with the motivation to provide treatment authorization/plans for a patient. (Esa, ¶68-¶76, ¶82-¶84, ¶119)
Claims 18 and 34 recite substantially the same features recited in claim 24 above and are rejected based on the same rationale.
As regards claim 19, the combination of Mathur and Esa teaches the method of claim 18, wherein the data includes at least one policy associated with at least one treatment. (Mathur, ¶8, ¶23)
As regards claim 20, Mathur et al teaches the method of claim 18, wherein the data is a human-readable document. (Mathur, ¶8, ¶23)
As regards claim 21, Mathur et al teaches the method of claim 18, wherein storing the disaggregate criteria comprises disaggregating at least one policy into lists of inclusion criteria, exclusion criteria, and exception rules. (Mathur, Figs. 1-2B, ¶8, ¶23, ¶26-¶38, i.e., applying machine trained AI based on the policies to generate an output of finding whether medical criteria are met for authorizing a treatment plan. See also, Esa, Figs. 1, 4-6, ¶68-¶76, ¶82-¶84, i.e., given input patient data, applying the matching trained model to generate a treatment plan wherein the treatment plan includes one-to-many tradeoffs (i.e., multiple treatment criteria) for the particular patient such as number of beams to use, the dosage amount, the targeted area, and excluding/eliminating treatments not needed)
As regards claim 22, Mathur et al teaches the method of claim 18, wherein storing the disaggregate criteria comprises assigning concatenating criteria with logical operators. (Mathur, ¶8-¶9)
As regards claim 23, Mathur et al teaches the method of claim 18, wherein storing the disaggregate criteria comprises storing the disaggregate criteria in a checklist. (Mathur, Figs. 1-2B, ¶38-¶40, i.e., AI generated decision clinical cards are later referenced thus implying storage. See, Esa, ¶119, i.e., the generated treatment plans are stored in a database for later retrieval)
As regards claim 25, Mathur et al teaches the method of claim 24, wherein the one or more payer policies determine coverage criteria of members of a specific health plan as well as medical or administrative guidelines. (Mathur: ¶24-¶27)
As regards claim 26, Mathur et al teaches the method of claim 24, wherein storing the disaggregated criteria comprises disaggregating the one or more payer policies into lists of inclusion criteria, exclusion criteria, and exception rules. (Mathur, Figs. 1-2B, ¶38-¶40, i.e., AI generated decision clinical cards are later referenced thus implying storage. See, Esa, ¶119, i.e., the generated treatment plans are stored in a database for later retrieval)
As regards claim 27, Mathur et al teaches the method of claim 26, wherein storing the disaggregated criteria further comprises assigning concatenating criteria with logical operators to the inclusion criteria, the exclusion criteria, and the exception rules. (Mathur, Figs. 1-2B, ¶8-¶9, ¶38-¶40. See, Esa, ¶119)
As regards claim 28, Mathur et al teaches the method of claim 24, wherein the disaggregated criteria stored in the database are retrievable instantaneously for matching and evaluating policy requirements with electronic health record data when processing service requests. (Mathur, Figs. 1-2B, ¶8, ¶23, ¶26-¶38)
As regards claim 29, Mathur et al teaches the method of claim 24, wherein each of the one or more payer policies covers one or more treatments for at least one medical condition, and the disaggregated criteria for each payer policy are organized and stored by treatment. (Mathur, Figs. 1-2B, ¶8, ¶23, ¶26-¶40)
As regards claim 30, Mathur et al teaches the method of claim 24, further comprising repeating the generating and storing for each payer policy found in a policy repository and for each update of each payer policy. (Mathur, Figs. 1-2B, ¶8, ¶23, ¶26-¶38)
As regards claim 31, Mathur et al teaches the method of claim 24, wherein the simplified checklists of criteria are machine-comprehensible as opposed to original textual formats that are only human- comprehensible. (Mathur, Figs. 1-2B, ¶8, ¶23, ¶26-¶40)
As regards claim 32, Mathur et al teaches the method of claim 27, wherein assigning concatenating criteria with logical operators allows derivation and storage in the database of any inclusion or exclusion logic provided by the one or more payer policies. (Mathur, Figs. 1-2B, ¶8-¶9, ¶24-¶40. See, Esa, ¶119)
As regards claim 33, Mathur et al teaches the method of claim 24, wherein the disaggregated criteria are stored in the database such that policy criteria are retrievable when matching and evaluating policy requirements with electronic health record data for prior authorization requests. (Mathur, Figs. 1-2B, ¶8-¶9, ¶24-¶40. See, Esa, ¶5-¶6)
Claim 37 recites substantially the same features recited in claim 33 above and is rejected based on the same rationale.
As regards claim 35, Mathur et al teaches the system of claim 34, wherein the one or more payer policies determine coverage criteria of members of a specific health plan as well as medical or administrative guidelines, and criteria and rulesets derived from each of the one or more payer policies are organized and stored by treatment in the database. (Mathur: ¶24-¶38)
As regards claim 36, Mathur et al teaches the system of claim 34, wherein the machine learning model is further configured to disaggregate the one or more payer policies into lists of inclusion criteria, exclusion criteria, and exception rules and to assign concatenating criteria with logical operators to the inclusion criteria, the exclusion criteria, and the exception rules such that inclusion or exclusion logic provided by the one or more payer policies is derivable and storable in the database. (Mathur, Figs. 1-2B, ¶8-¶9, ¶24-¶40. See, Esa, ¶119)
Conclusion
Any inquiry concerning this communication or earlier communications from the examiner should be directed to SYED A ZAIDI whose telephone number is (571)270-5995. The examiner can normally be reached Monday-Thursday: 5:30AM-5:30PM.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Jeffrey Nickerson can be reached at (469) 295-9235. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/SYED A ZAIDI/Primary Examiner, Art Unit 2432