Prosecution Insights
Last updated: April 19, 2026
Application No. 19/025,500

METHOD AND SYSTEM FOR DETECTION OF WASTE, FRAUD, AND ABUSE IN INFORMATION ACCESS USING COGNITIVE ARTIFICIAL INTELLIGENCE

Non-Final OA: §101, §103
Filed: Jan 16, 2025
Examiner: GO, JOHN PHILIP
Art Unit: 3681
Tech Center: 3600 — Transportation & Electronic Commerce
Assignee: Better Care Technologies LLC
OA Round: 1 (Non-Final)
Grant Probability: 35% (At Risk)
Expected OA Rounds: 1-2
Estimated Time to Grant: 4y 0m
Grant Probability With Interview: 80%

Examiner Intelligence

Career Allow Rate: 35% (101 granted / 290 resolved; -17.2% vs TC avg)
Interview Lift: +45.7% allowance lift for resolved cases with interview
Avg Prosecution: 4y 0m (typical timeline)
Currently Pending: 56
Total Applications: 346 (career history, across all art units)

Statute-Specific Performance

§101: 35.1% allowance (-4.9% vs TC avg)
§103: 35.5% allowance (-4.5% vs TC avg)
§102: 7.9% allowance (-32.1% vs TC avg)
§112: 18.2% allowance (-21.8% vs TC avg)
Based on career data from 290 resolved cases; Tech Center averages are estimates.

Office Action

§101 §103
Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Status of the Claims

Claims 1-20 are currently pending.

Claim Rejections - 35 USC § 101

35 U.S.C. 101 reads as follows:

Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.

Claims 1-20 are rejected under 35 U.S.C. 101 because the claimed invention is directed to a judicial exception (i.e., a law of nature, a natural phenomenon, or an abstract idea) without significantly more.

Step 1

Claims 1-15 are within the four statutory categories. Claims 1-10 are drawn to a method for authenticating a user, which is within the four statutory categories (i.e. process). Claims 11-15 are drawn to a system for authenticating a user, which is within the four statutory categories (i.e. machine). Claims 16-20 are drawn to a “computer readable media,” which does not distinguish Claims 16-20 from a transitory medium or article of manufacture, e.g. see paragraph [0067] of the as-filed Specification, and hence Claims 16-20 do not fall within the four statutory categories. However, as will be shown below, even assuming, arguendo, that Claims 16-20 were directed towards a non-transitory medium or article of manufacture, Claims 16-20 are nonetheless unpatentable under 35 U.S.C. 101. 
Prong 1 of Step 2A

Claim 1, which is representative of the inventive concept, recites:

A computer-implemented method for real-time detection, by a participant in a health information exchange, of unapproved uses of health information, the method comprising:
building a knowledge graph representing relationships between characteristics of health related information of a patient;
receiving, from a second participant, a request for access to health information of the patient;
generating, using the knowledge graph, questions about the characteristics of health related information of the patient for the second participant to answer to confirm authenticity of the request; and
providing access to the health information to the second participant based on the second participant providing correct responses to the questions.

The underlined limitations as shown above, given the broadest reasonable interpretation, cover the abstract idea of a certain method of organizing human activity because they recite managing personal behavior or relationships or interactions between people (i.e. social activities, teaching, and following rules or instructions – in this case, creating a knowledge graph illustrating the relationships between data for a user, receiving a request for access to user data, generating questions based on the knowledge graph, and providing access to the user data in response to correct responses to the questions encompass limitations that dictate the behavior of a first user (i.e. building the knowledge graph, receiving the request, generating the questions, and providing access) and a second user (i.e. providing the request and providing answers to the questions), wherein rules (i.e. the correctness of the answers) determine whether or not the second user will be provided with access, and hence these limitations include following rules or instructions to determine access to health information), e.g. see MPEP 2106.04(a)(2). 
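For orientation only, the flow recited in Claim 1 can be sketched in code. This is a minimal illustrative reading of the claim language, not the applicant's implementation; every name and data value below is hypothetical.

```python
# Hypothetical sketch of the Claim 1 flow: a toy "knowledge graph" maps a
# patient to characteristics of their health-related information, challenge
# questions are generated from those graph edges, and access is granted only
# on correct answers. Nothing here reflects the actual claimed system.
knowledge_graph = {
    "patient-123": {
        "prescribed": "lisinopril",
        "last_visit": "2024-11-02",
        "provider": "Dr. Smith",
    }
}

def generate_questions(graph, patient_id):
    """Generate challenge questions from the patient's graph edges."""
    return {
        f"What is the patient's {attr}?": value
        for attr, value in graph[patient_id].items()
    }

def handle_request(graph, patient_id, answers):
    """Grant access only if every challenge question is answered correctly."""
    questions = generate_questions(graph, patient_id)
    correct = all(answers.get(q) == expected for q, expected in questions.items())
    return "access granted" if correct else "access denied"
```

Under this reading, an incorrect or missing answer to any generated question would deny the request, which is the "rules determine access" behavior the rejection characterizes as an abstract idea.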
Any limitations not identified above as part of the abstract idea are deemed “additional elements,” and will be discussed in further detail below. Furthermore, the abstract idea for Claims 11 and 16 is identical to the abstract idea for Claim 1, because the only difference between Claims 1, 11, and 16 is that Claim 11 recites a system and Claim 16 recites a computer readable media, whereas Claim 1 recites a computer-implemented method. Dependent Claims 2-10, 12-15, and 17-20 include other limitations, for example Claims 2 and 12 recite denying access when the responses are incorrect and notifying the first user as to the denial, Claims 3 and 13 recite building knowledge graphs for a plurality of patients and identifying, based on the knowledge graphs, a group of patients sharing at least one characteristic that makes the group susceptible to requests for the data for unapproved uses, Claims 4 and 14 recite building knowledge graphs for a plurality of prescribed items and identifying, based on the knowledge graphs, a prescribed item that makes patients who are prescribed the item susceptible to requests for the data for unapproved uses, Claims 5 and 15 recite identifying, based on the knowledge graphs for the prescribed items, a pattern of an entity requesting data for unapproved uses, Claims 6 and 17 recite determining a motive for the request of data, Claims 7 and 19 recite that the questions do not reveal protected health information for the first user, Claims 8 and 18 recite determining a distance between the first user and the second user, comparing the distance to a threshold, and denying access to the data in response to the distance satisfying the threshold, Claim 9 recites determining a probability of unapproved use of the data based on a plurality of data and determining whether to deny or provide access to the data based on the probability, and Claims 10 and 20 recite providing real-time access to the data in response to the correct responses to the questions, but these only serve to further narrow the abstract idea, and a claim may not preempt abstract ideas, even if the judicial exception is narrow, e.g. see MPEP 2106.04. Additionally, any limitations in dependent Claims 2-10, 12-15, and 17-20 not addressed above are deemed additional elements to the abstract idea, and will be further addressed below. Hence dependent Claims 2-10, 12-15, and 17-20 are nonetheless directed towards fundamentally the same abstract idea as independent Claims 1, 11, and 16.

Prong 2 of Step 2A

Claims 1, 11, and 16 are not integrated into a practical application because the additional elements (i.e. the non-underlined limitations above – in this case, the computer) amount to no more than limitations which: amount to mere instructions to apply an exception – for example, the recitation of a computer, which amounts to merely invoking a computer as a tool to perform the abstract idea, e.g. see paragraphs [0023]-[0032] of the as-filed Specification, see MPEP 2106.05(f); and/or generally link the abstract idea to a particular technological environment or field of use – for example, the claim language specifying that the data being accessed is health information, which amounts to limiting the abstract idea to the field of healthcare, see MPEP 2106.05(h). Additionally, dependent Claims 2-10, 12-15, and 17-20 include other limitations, but these limitations also amount to no more than generally linking the abstract idea to a particular technological environment or field of use (e.g. the machine learning model limitation recited in dependent Claims 10 and 20), and/or do not include any additional elements beyond those already recited in independent Claims 1, 11, and 16, and hence also do not integrate the aforementioned abstract idea into a practical application. 
Step 2B

Claims 1, 11, and 16 do not include additional elements that are sufficient to amount to “significantly more” than the judicial exception because the additional elements (i.e. the non-underlined limitations above – in this case the computer), as stated above, are directed towards no more than limitations that amount to mere instructions to apply the exception, generally link the abstract idea to a particular technological environment or field of use, and/or add insignificant extra-solution activity to the abstract idea, wherein the insignificant extra-solution activity comprises limitations which: amount to elements that have been recognized as well-understood, routine, and conventional activity in particular fields, as demonstrated by: The present Specification expressly disclosing that the structural additional elements are well-understood, routine, and conventional in nature: paragraphs [0023]-[0032] of the as-filed Specification disclose that the additional elements (i.e. the computer) comprise a plurality of different types of generic computing systems that are configured to perform generic computer functions (i.e. receive and process data) that are well-understood, routine, and conventional activities previously known to the pertinent industry (i.e. healthcare); Relevant court decisions: The functional limitations interpreted as additional elements are analogized to the following examples of court decisions demonstrating well-understood, routine and conventional activities, e.g. see MPEP 2106.05(d)(II): Receiving or transmitting data over a network, e.g. see Intellectual Ventures v. Symantec – similarly, the current invention receives the request to access the health information over a network, for example a wireless network, e.g. see paragraph [0028] of the present Specification; Storing and retrieving information in memory, e.g. see Versata Dev. Group, Inc. v. SAP Am., Inc. – similarly, the current invention recites storing the knowledge graph data, and retrieving the knowledge graph data in order to generate the questions, and in order to determine whether to provide access to the health information based on the responses to the questions; Dependent Claims 2-10, 12-15, and 17-20 include other limitations, but none of these limitations are deemed significantly more than the abstract idea because, as stated above, the limitations of the aforementioned dependent claims amount to no more than generally linking the abstract idea to a particular technological environment or field of use (e.g. the machine learning model limitation recited in dependent Claims 10 and 20), and/or do not recite any additional elements not already recited in independent Claims 1, 11, and 16, and hence do not amount to “significantly more” than the abstract idea. Thus, taken alone, the additional elements do not amount to significantly more than the abstract idea identified above. Furthermore, looking at the limitations as an ordered combination adds nothing that is not already present when looking at the elements taken individually, and there is no indication that the combination of elements improves the functioning of a computer or improves any other technology, and their collective functions merely provide conventional computer implementation. Therefore, whether taken individually or as an ordered combination, Claims 1-20 are nonetheless rejected under 35 U.S.C. 101 as being directed to non-statutory subject matter.

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 
102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1, 11, and 16 are rejected under 35 U.S.C. 103 as being unpatentable over Bulleit (US 2020/0226285) in view of Allen (US 2018/0089382).

Regarding Claim 1, Bulleit teaches the following: A computer-implemented method for real-time detection, by a participant in a health information exchange, of unapproved uses of health information (The system regulates access to health information resources (HIRs) stored within various electronic health records (EHRs) for patients, e.g. see Bulleit [0002], [0010]-[0011], and [0243]-[0245], wherein the system may be embodied as a computer-implemented method, e.g. see Bulleit [0252], Claim 1.), the method comprising: receiving, from a second participant, a request for access to health information of the patient (The system receives a request for an HIR, for example from a doctor for the patient (i.e. a second participant), e.g. see Bulleit [0011], [0050], and [0076]-[0077], Figs. 
2-3 and 14.); generating, using data from a healthcare record, questions about the characteristics of health related information of the patient for the second participant to answer to confirm authenticity of the request (The system generates a set of challenge questions based on data obtained from the EHR, e.g. see Bulleit [0236]-[0241], Figs. 22A-22B. The system further receives answers to the challenge questions and calculates a verification score based on the received answers, and utilizes the verification score to determine whether the user’s identity should be verified, e.g. see Bulleit [0234] and [0242]-[0245].); and providing access to the health information to the second participant based on the second participant providing correct responses to the questions (The system verifies the identity of the user based on the user providing a sufficient number of correct answers to the challenge questions, e.g. see Bulleit [0242]-[0245], Figs. 22A-22B, and provides access to the health information to verified users, e.g. see Bulleit [0023]-[0024].). But Bulleit does not teach and Allen teaches the following: building a knowledge graph representing relationships between characteristics of health related information of a patient (The system generates a knowledge graph using data obtained from a patient electronic medical record (EMR), e.g. see Allen [0007], [0101], and [0123].); and wherein the generating of the questions is based on the knowledge graph (The system includes a Question Answer (QA) pipeline that receives an input question and transforms the input question into queries, and applies the queries to a corpus of data, e.g. see Allen [0063]-[0064], wherein the corpus of data is organized by the knowledge graph, e.g. see Allen [0078]-[0079].). 
Furthermore, before the effective filing date, it would have been obvious to one ordinarily skilled in the art of healthcare to modify Bulleit to incorporate generating the knowledge graph from the EMR data and utilizing the knowledge graph to generate the questions as taught by Allen in order to improve the accuracy of clinical decision-making and improve the artificial intelligence power, e.g. see Allen [0027]. Regarding Claims 11 and 16, the limitations of Claims 11 and 16 are substantially similar to those claimed in Claim 1, with the sole difference being that Claim 11 recites a system including a memory device storing instructions and a processing device executing the stored instructions, and Claim 16 recites a computer readable media storing instructions that are executable by a processor, whereas Claim 1 recites a computer-implemented method. Specifically pertaining to Claims 11 and 16, Examiner notes that Bulleit teaches computer-executable instructions loaded onto a computer and/or processor to perform the limitations of the invention, e.g. see Bulleit [0255], and hence the grounds of rejection provided above for Claim 1 are similarly applied to Claims 11 and 16. Claims 2 and 12 are rejected under 35 U.S.C. 103 as being unpatentable over the combination of Bulleit and Allen in view of Stockton (US 2009/0089094). Regarding Claim 2, the combination of Bulleit and Allen teaches the limitations of Claim 1, and Bulleit further teaches the following: The computer-implemented method of claim 1, further comprising: denying access to the health information to the second participant based on the second participant providing incorrect responses to the questions (The system may deny access to the EHRs and HIRs based on a user not answering enough challenge questions correctly, e.g. see Bulleit [0198] and [0236]-[0245].). 
But the combination of Bulleit and Allen does not teach and Stockton teaches the following: notifying the participant of a denial of access to the second participant to the health information (The system may notify another entity, for example security (i.e. the participant), when access to the patient’s medical data is denied, and store an indication of the denial in an audit log, e.g. see Stockton [0025], [0032], and [0039].). Furthermore, before the effective filing date, it would have been obvious to one ordinarily skilled in the art of regulating access to secure information to modify the combination of Bulleit and Allen to incorporate notifying another entity, for example security, of possible fraud as taught by Stockton in order to assure that individuals’ protected health information is properly protected, in accordance with applicable laws, e.g. see Stockton [0002]-[0006]. Regarding Claim 12, the limitations of Claim 12 are substantially similar to those claimed in Claim 2, with the sole difference being that Claim 12 recites a system including a memory device storing instructions and a processing device executing the stored instructions, whereas Claim 2 recites a computer-implemented method. Specifically pertaining to Claim 12, Examiner notes that Bulleit teaches computer-executable instructions loaded onto a computer and/or processor to perform the limitations of the invention, e.g. see Bulleit [0255], and hence the grounds of rejection provided above for Claim 2 are similarly applied to Claim 12. Claims 3 and 13 are rejected under 35 U.S.C. 103 as being unpatentable over the combination of Bulleit and Allen in view of Gunjan (US 2016/0132969). 
Regarding Claim 3, the combination of Bulleit and Allen teaches the limitations of Claim 1, and Allen further teaches the following: The computer-implemented method of claim 1, further comprising: building knowledge graphs for a plurality of patients including the patient, each knowledge graph of the knowledge graphs representing relationships between characteristics of health related information of a patient of the plurality of patients (The system generates knowledge graphs utilizing data for a plurality of patients, e.g. see Allen [0091], [0096], and [0101].). Furthermore, before the effective filing date, it would have been obvious to one ordinarily skilled in the art of healthcare to modify Bulleit to incorporate generating knowledge graphs for a plurality of patients as taught by Allen in order to improve the accuracy of clinical decision-making and improve the artificial intelligence power for a wide array of patients, e.g. see Allen [0027]. But the combination of Bulleit and Allen does not teach and Gunjan teaches the following: identifying, based on the knowledge graphs for the plurality of patients, a group of patients of the plurality of patients sharing one or more characteristics of health related information that makes the group of patients susceptible for requests of health information for unapproved uses (The system maintains medical reports (i.e. knowledge graphs) of patients, wherein the medical reports include patient details such as age, diagnoses, and prescriptions, e.g. see Gunjan [0028]. Additionally, the system identifies patients according to a shared characteristic, for example age, in determining the prominence of fraud (i.e. unapproved uses) among the patients of the shared characteristic, e.g. see Gunjan [0073].). 
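The Claim 3 concept mapped to Gunjan, grouping patients by a shared characteristic and measuring how prone each group is to unapproved-use requests, can be sketched as follows. All data, field names, and the threshold are invented for illustration and do not come from the application or the cited references.

```python
from collections import defaultdict

# Hypothetical patient records; "flagged_requests" stands in for counts of
# potentially unapproved access requests. Illustrative only.
patients = [
    {"id": 1, "age_band": "65+", "flagged_requests": 4},
    {"id": 2, "age_band": "65+", "flagged_requests": 2},
    {"id": 3, "age_band": "18-40", "flagged_requests": 0},
]

def susceptible_groups(records, threshold=1.0):
    """Return shared-characteristic groups whose average number of flagged
    requests exceeds the threshold, i.e. groups deemed 'susceptible'."""
    groups = defaultdict(list)
    for rec in records:
        groups[rec["age_band"]].append(rec["flagged_requests"])
    return {
        band: sum(flags) / len(flags)
        for band, flags in groups.items()
        if sum(flags) / len(flags) > threshold
    }
```

With the toy data above, only the "65+" band exceeds the threshold, which is the kind of per-characteristic fraud prominence the rejection attributes to Gunjan [0073].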
Furthermore, before the effective filing date, it would have been obvious to one ordinarily skilled in the art of regulating access to secure information to modify the combination of Bulleit and Allen to incorporate identifying a prominence of fraud for a specific patient group as taught by Gunjan in order to reduce the losses incurred by fraud, e.g. see Gunjan [0095]. Regarding Claim 13, the limitations of Claim 13 are substantially similar to those claimed in Claim 3, with the sole difference being that Claim 13 recites a system including a memory device storing instructions and a processing device executing the stored instructions, whereas Claim 3 recites a computer-implemented method. Specifically pertaining to Claim 13, Examiner notes that Bulleit teaches computer-executable instructions loaded onto a computer and/or processor to perform the limitations of the invention, e.g. see Bulleit [0255], and hence the grounds of rejection provided above for Claim 3 are similarly applied to Claim 13. Claims 4-5 and 14-15 are rejected under 35 U.S.C. 103 as being unpatentable over the combination of Bulleit and Allen in view of Eidex (US 2003/0229519). Regarding Claim 4, the combination of Bulleit and Allen teaches the limitations of Claim 1, but does not teach and Eidex teaches the following: The computer-implemented method of claim 1, further comprising: building knowledge graphs for a plurality of prescribed items, each knowledge graph of the knowledge graphs representing relationships between characteristics related to a prescribed item of the plurality of prescribed items (The system includes a database that stores various data (i.e. relationships between characteristics) pertaining to a drug (i.e. a prescribed item), such as common dosage values and likelihood indicators regarding fraud and abuse, e.g. 
see Eidex [0035].); and identifying, based on the knowledge graphs for the plurality of prescribed items, a prescribed item of the plurality of prescribed items having a characteristic that makes patients who are prescribed the item susceptible for requests to health information for unapproved uses (The system utilizes expert rules to evaluate a likelihood of a fraudulent transaction, wherein the expert rules may factor in the type of drug prescribed in calculating the likelihood, e.g. see Eidex [0042].). Furthermore, before the effective filing date, it would have been obvious to one ordinarily skilled in the art of regulating access to secure information to modify the combination of Bulleit and Allen to incorporate identifying the prescribed item that makes patients susceptible to requests for unapproved uses as taught by Eidex in order to prevent fraudulent activities, e.g. see Eidex [0007]. Regarding Claim 5, the combination of Bulleit, Allen, and Eidex teaches the limitations of Claim 4, and Eidex further teaches the following: The computer-implemented method of claim 1, further comprising: identifying, based on the knowledge graphs for the plurality of prescribed items, a pattern of an entity requesting health information for unapproved uses of health information (The system identifies short-term transaction patterns to determine a fraud score, e.g. see Eidex [0012] and [0015], wherein the fraud score is indicative of the likelihood of fraud, e.g. see Eidex [0042].). Furthermore, before the effective filing date, it would have been obvious to one ordinarily skilled in the art of regulating access to secure information to modify the combination of Bulleit and Allen to incorporate utilizing the patterns to determine the fraud score as taught by Eidex in order to prevent fraudulent activities, e.g. see Eidex [0007]. 
Regarding Claims 14-15, the limitations of Claims 14-15 are substantially similar to those claimed in Claims 4-5, with the sole difference being that Claims 14-15 recite a system including a memory device storing instructions and a processing device executing the stored instructions, whereas Claims 4-5 recite a computer-implemented method. Specifically pertaining to Claims 14-15, Examiner notes that Bulleit teaches computer-executable instructions loaded onto a computer and/or processor to perform the limitations of the invention, e.g. see Bulleit [0255], and hence the grounds of rejection provided above for Claims 4-5 are similarly applied to Claims 14-15.

Claims 6, 8, 17, and 18 are rejected under 35 U.S.C. 103 as being unpatentable over the combination of Bulleit and Allen in view of Subramanian (US 8,931,044).

Regarding Claim 6, the combination of Bulleit and Allen teaches the limitations of Claim 1, but does not teach and Subramanian teaches the following: The computer-implemented method of claim 1, further comprising: determining a motive for the request based on the knowledge graph and details associated with the request and the second participant (The system includes storing scanned patient documents (i.e. knowledge graphs), e.g. see Subramanian col. 5, line 16 through col. 6, line 18, and further stores details regarding individuals such as the address of an individual, e.g. see Subramanian col. 6, lines 44-61. Additionally, the system includes a security policy that determines that a user’s reason (i.e. motive) for accessing a patient record is illegitimate, for example based on the address of the requester (i.e. details associated with the second participant), e.g. see Subramanian col. 14, lines 4-33.). 
Furthermore, before the effective filing date, it would have been obvious to one ordinarily skilled in the art of regulating access to secure information to modify the combination of Bulleit and Allen to incorporate determining the motive for accessing the record as taught by Subramanian in order to comply with applicable privacy laws, e.g. see Subramanian col. 1, lines 23-38. Regarding Claim 8, the combination of Bulleit and Allen teaches the limitations of Claim 1, but does not teach and Subramanian teaches the following: The computer-implemented method of claim 1, further comprising: determining a distance between a location of the patient and a second location of the second participant (The system determines whether an address for a patient and a doctor who wishes to access the patient record are within the same neighborhood, e.g. see Subramanian col. 14, lines 14-17.); determining whether the distance satisfies a threshold distance (The addresses being in the same neighborhood may cause the system to determine that a threshold distance has been satisfied such that the system prevents the doctor from accessing the patient record, e.g. see Subramanian col. 14, lines 4-33.); and responsive to determining that the distance satisfies the threshold distance, denying access to the health information to the second participant (The system denies access to the patient record when it is determined that the doctor is within the same neighborhood (i.e. has satisfied the threshold distance), e.g. see Subramanian col. 14, lines 4-33.). Furthermore, before the effective filing date, it would have been obvious to one ordinarily skilled in the art of regulating access to secure information to modify the combination of Bulleit and Allen to incorporate determining access permissions based on a distance as taught by Subramanian in order to ensure that the access request is for medical reasons, e.g. see Subramanian col. 14, lines 22-25. 
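The Claim 8 distance limitation mapped to Subramanian, denying access when the requester is too close to the patient (the "same neighborhood" case), can be sketched as follows. The haversine formula and the 5 km threshold are assumptions for illustration; the direction of the comparison follows the rejection's reading that a distance "satisfying" the threshold triggers denial.

```python
import math

def distance_km(a, b):
    """Great-circle (haversine) distance between two (lat, lon) points, in km."""
    lat1, lon1, lat2, lon2 = map(math.radians, (*a, *b))
    h = (math.sin((lat2 - lat1) / 2) ** 2
         + math.cos(lat1) * math.cos(lat2) * math.sin((lon2 - lon1) / 2) ** 2)
    return 2 * 6371 * math.asin(math.sqrt(h))

def allow_access(patient_loc, requester_loc, threshold_km=5.0):
    """Deny access when the requester is within the threshold distance of the
    patient, mirroring Subramanian's same-neighborhood denial; the threshold
    value is hypothetical."""
    return distance_km(patient_loc, requester_loc) >= threshold_km
```

A requester at the patient's own location would be denied, while one a degree of latitude away (roughly 111 km) would pass the distance check.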
Regarding Claims 17 and 18, the limitations of Claims 17 and 18 are substantially similar to those claimed in Claims 6 and 8, with the sole difference being that Claims 17 and 18 recite a computer readable media storing instructions that are executable by a processor, whereas Claims 6 and 8 recite a computer-implemented method. Specifically pertaining to Claims 17 and 18, Examiner notes that Bulleit teaches computer-executable instructions loaded onto a computer and/or processor to perform the limitations of the invention, e.g. see Bulleit [0255], and hence the grounds of rejection provided above for Claims 6 and 8 are similarly applied to Claims 17 and 18.

Claims 7 and 19 are rejected under 35 U.S.C. 103 as being unpatentable over the combination of Bulleit and Allen in view of Shoenhair (US 2006/0026039).

Regarding Claim 7, the combination of Bulleit and Allen teaches the limitations of Claim 1, but does not teach and Shoenhair teaches the following: The computer-implemented method of claim 1, wherein the questions do not reveal protected health information (PHI) of the patient (The system requires a requester to satisfy a challenge response in order to verify the requester’s identity and to receive protected health information, wherein the challenge response pertains to one or more shared facts of federal government-recognized facilities that treat Medicare patients, all medical practitioners licensed to prescribe medications licensed by the FDA, and facsimile numbers, e.g. see Shoenhair [0038]-[0039] – that is, none of the aforementioned shared facts comprise PHI of the patient, and hence the questions do not reveal patient PHI.). 
Furthermore, before the effective filing date, it would have been obvious to one ordinarily skilled in the art of regulating access to secure information to modify the combination of Bulleit and Allen to incorporate the questions not revealing patient PHI as taught by Shoenhair in order to prevent compromising of the PHI and to maintain security of the PHI, e.g. see Shoenhair [0026] and [0033].

Regarding Claim 19, the limitations of Claim 19 are substantially similar to those claimed in Claim 7, with the sole difference being that Claim 19 recites a computer readable media storing instructions that are executable by a processor, whereas Claim 7 recites a computer-implemented method. Specifically pertaining to Claim 19, Examiner notes that Bulleit teaches computer-executable instructions loaded onto a computer and/or processor to perform the limitations of the invention, e.g. see Bulleit [0255], and hence the grounds of rejection provided above for Claim 7 are similarly applied to Claim 19.

Claims 9-10 and 20 are rejected under 35 U.S.C. 103 as being unpatentable over the combination of Bulleit and Allen in view of Neuweg (US 2021/0035121).

Regarding Claim 9, the combination of Bulleit and Allen teaches the limitations of Claim 1, but does not teach and Neuweg teaches the following: The computer-implemented method of claim 1, further comprising: determining a probability of unapproved use of health information based on a plurality of factors comprising receiving the correct responses to the questions (The system determines a likelihood of fraud (i.e. a probability of unapproved use) based on user responses to questions, e.g. 
see Neuweg [0026] and [0035].), determining requests are received for a cluster of patients prescribed a certain medication, determining a plurality of requests are received from the second participant having a common medical identity, determining a plurality of requests are received within a threshold time period for the cluster of patients from a plurality of second participants having different medical identities, or some combination thereof; and determining whether to provide access to the health information based on the probability of unapproved use (Initially, Examiner notes that this step is only required if the “some combination thereof” language recited in the previous step includes the specific step of determining of the probability of unapproved use, as opposed to some combination of determining steps not including the probability calculation. The system includes rules that dictate actions for the system to take based on the calculated likelihood of fraud, for example freezing an account (i.e. denying access), or permitting an action (i.e. providing access), e.g. see Neuweg [0034]-[0035], [0041]-[0042], [0051].). Furthermore, before the effective filing date, it would have been obvious to one ordinarily skilled in the art of regulating access to secure information to modify the combination of Bulleit and Allen to incorporate determining a probability of fraudulent activity based on the answers to questions as taught by Neuweg in order to greatly reduce costs to account holders and authorities, e.g. see Neuweg [0003]. 
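The Claim 9 concept mapped to Neuweg, scoring a probability of unapproved use from several factors and then applying a rule to permit or deny access, can be sketched as follows. The scoring formula, its weights, and the decision threshold are all invented for illustration and are not from the application or Neuweg.

```python
def unapproved_use_probability(correct_ratio, cluster_request_rate):
    """Toy fraud-likelihood score: weights a low fraction of correctly
    answered challenge questions against a burst of requests for a patient
    cluster. Weights are hypothetical."""
    score = 0.7 * (1 - correct_ratio) + 0.3 * cluster_request_rate
    return max(0.0, min(1.0, score))  # clamp to a probability-like range

def decide(probability, deny_above=0.5):
    """Rule mapping the score to an action, analogous to Neuweg's rules that
    freeze an account (deny) or permit an action (provide)."""
    return "deny" if probability > deny_above else "provide"
```

All-correct answers with no unusual cluster activity would yield a low score and "provide", while all-wrong answers amid heavy cluster activity would yield a high score and "deny".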
Regarding Claim 10, the combination of Bulleit and Allen teaches the limitations of Claim 1, but does not teach and Neuweg teaches the following: The computer-implemented method of claim 1, wherein a trained machine learning model provides, in real-time, access to the health information to the second participant based on the second participant providing correct responses to the questions (The system tracks data and performs actions, for example freezing an account and/or sending an alert, in real time, e.g. see Neuweg [0003], [0040], and [0045], wherein the actions are taken based on a likelihood of fraud that is calculated utilizing machine learning algorithms, e.g. see Neuweg [0042].). Furthermore, before the effective filing date, it would have been obvious to one ordinarily skilled in the art of regulating access to secure information to modify the combination of Bulleit and Allen to incorporate utilizing the machine learning algorithm to perform fraud prevention actions in real time as taught by Neuweg in order to greatly reduce costs to account holders and authorities, e.g. see Neuweg [0003]. Regarding Claim 20, the limitations of Claim 20 are substantially similar to those claimed in Claim 10, with the sole difference being that Claim 20 recites a computer readable media storing instructions that are executable by a processor, whereas Claim 10 recites a computer-implemented method. Specifically pertaining to Claim 20, Examiner notes that Bulleit teaches computer-executable instructions loaded onto a computer and/or processor to perform the limitations of the invention, e.g. see Bulleit [0255], and hence the grounds of rejection provided above for Claim 10 are similarly applied to Claim 20. 
Conclusion

The prior art made of record and not relied upon, which is considered pertinent to applicant's disclosure, is as follows: Thavasi (US 2013/0067547) teaches an information handling system (IHS) that regulates access to data, prevents unauthorized access, and permits authorized users to access the data by authenticating a user utilizing a security measure such as a challenge question seeking a particular answer from the user. Thavasi additionally teaches that the authentication may be performed continuously in real time.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to JOHN P GO whose telephone number is (703) 756-1965. The examiner can normally be reached Monday-Friday 9am-6pm PST.

Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, PETER H CHOI, can be reached at (469) 295-9171. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/JOHN P GO/Examiner, Art Unit 3681

Prosecution Timeline

Jan 16, 2025
Application Filed
Jan 22, 2026
Non-Final Rejection — §101, §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12597521
SURVEY-BASED DIAGNOSIS METHOD AND SYSTEM THEREFOR
Granted Apr 07, 2026 · 2y 5m to grant
Patent 12580078
METHOD, SERVER, AND SYSTEM INTELLIGENT VENTILATOR MONITORING USING NON-CONTACT AND NON-FACE-TO-FACE
Granted Mar 17, 2026 · 2y 5m to grant
Patent 12548079
SYSTEMS AND METHODS FOR DETERMINING AND COMMUNICATING PATIENT INCENTIVE INFORMATION TO A PRESCRIBER
Granted Feb 10, 2026 · 2y 5m to grant
Patent 12537108
APPARATUS AND METHOD FOR PROVIDING HEALTHCARE SERVICES REMOTELY OR VIRTUALLY WITH OR USING AN ELECTRONIC HEALTHCARE RECORD AND/OR A COMMUNICATION NETWORK
Granted Jan 27, 2026 · 2y 5m to grant
Patent 12537080
EHR SYSTEM WITH ALERT FOOTER AND RELATED METHODS
Granted Jan 27, 2026 · 2y 5m to grant
Based on this examiner's 5 most recent grants.


Prosecution Projections

1-2
Expected OA Rounds
35%
Grant Probability
80%
With Interview (+45.7%)
4y 0m
Median Time to Grant
Low
PTA Risk
Based on 290 resolved cases by this examiner. Grant probability derived from career allow rate.
