Prosecution Insights
Last updated: April 19, 2026
Application No. 18/366,445

SYSTEMS AND METHODS FOR IDENTIFYING A HIGH-PROFILE PATIENT

Non-Final OA: §101, §102, §112

Filed: Aug 07, 2023
Examiner: ELSHAER, ALAAELDIN M
Art Unit: 3687
Tech Center: 3600 (Transportation & Electronic Commerce)
Assignee: BLUESIGHT, INC.
OA Round: 7 (Non-Final)

Grant Probability: 36% (At Risk)
Projected OA Rounds: 7-8
Est. Time to Grant: 2y 10m
Grant Probability With Interview: 67%

Examiner Intelligence

Grants only 36% of cases.

Career Allow Rate: 36% (74 granted / 208 resolved; -16.4% vs TC avg)
Interview Lift: +31.3% in resolved cases with interview
Avg Prosecution: 2y 10m (37 applications currently pending)
Career History: 245 total applications across all art units

Statute-Specific Performance

§101: 37.4% (-2.6% vs TC avg)
§103: 36.7% (-3.3% vs TC avg)
§102: 5.3% (-34.7% vs TC avg)
§112: 14.3% (-25.7% vs TC avg)

Tech Center averages are estimates; based on career data from 208 resolved cases.
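The dashboard percentages above are internally consistent with the raw counts. A quick check (percentages are rounded for display; the 31.3-point lift is taken from the dashboard, not recomputed):

```python
# Reproduce the dashboard's headline examiner statistics from the raw counts shown above.
granted, resolved = 74, 208

career_allow = 100 * granted / resolved         # career allowance rate, in percent
interview_lift = 31.3                           # percentage points, per the dashboard
with_interview = career_allow + interview_lift  # projected rate with an interview

print(round(career_allow))    # 36  -> shown as "36% Career Allow Rate"
print(round(with_interview))  # 67  -> shown as "67% With Interview"
```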

Office Action

§101 §102 §112
DETAILED ACTION

This Office action is based on the amended claim set filed on 03/03/2026. Claims 1, 6, 9, 11, 13, 16, and 20-21 have been amended. Claims 12 and 17-18 have been canceled. Claims 1-11, 13-16, and 19-21 are currently pending and have been examined.

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Continued Examination Under 37 CFR 1.114

A request for continued examination under 37 CFR 1.114, including the fee set forth in 37 CFR 1.17(e), was filed in this application after final rejection. Since this application is eligible for continued examination under 37 CFR 1.114, and the fee set forth in 37 CFR 1.17(e) has been timely paid, the finality of the previous Office action has been withdrawn pursuant to 37 CFR 1.114. Applicant's submission filed on 03/03/2026 has been entered.

Information Disclosure Statement

The information disclosure statements (IDSs) submitted on 03/03/2026 are in accordance with the provisions of 37 CFR 1.97 and have been considered by the Examiner.

Claim Rejections - 35 USC § 112(a)

The following is a quotation of 35 U.S.C. 112(a):

(a) IN GENERAL.—The specification shall contain a written description of the invention, and of the manner and process of making and using it, in such full, clear, concise, and exact terms as to enable any person skilled in the art to which it pertains, or with which it is most nearly connected, to make and use the same, and shall set forth the best mode contemplated by the inventor or joint inventor of carrying out the invention.

The following is a quotation of the first paragraph of pre-AIA 35 U.S.C.
112:

The specification shall contain a written description of the invention, and of the manner and process of making and using it, in such full, clear, concise, and exact terms as to enable any person skilled in the art to which it pertains, or with which it is most nearly connected, to make and use the same, and shall set forth the best mode contemplated by the inventor of carrying out his invention.

Claims 1-11, 13-16, and 19-21 are rejected under 35 U.S.C. 112(a) or 35 U.S.C. 112 (pre-AIA), first paragraph, as failing to comply with the written description requirement. The claims contain subject matter which was not described in the specification in such a way as to reasonably convey to one skilled in the relevant art that the inventor or a joint inventor, or for pre-AIA the inventor(s), at the time the application was filed, had possession of the claimed invention.

In order to satisfy the written description requirement, the specification must describe the claimed invention in sufficient detail that one skilled in the art can reasonably conclude that the inventor had possession of the claimed invention. See MPEP 2161.01(I). However, generic claim language in the original disclosure does not satisfy the written description requirement if it fails to support the scope of the genus claimed, and even original claims may fail to satisfy the written description requirement when the invention is claimed and described in functional language but the specification does not sufficiently identify how the invention achieves the claimed function. See MPEP 2161.01(I), citing in part Ariad, 598 F.3d at 1349 ("[A]n adequate written description of a claimed genus requires more than a generic statement of an invention's boundaries.").
Specifically, with regard to computer-implemented functional claims, the specification must provide a disclosure of the computer and the algorithm in sufficient detail to demonstrate to one of ordinary skill in the art that the inventor possessed the invention, including how to program the disclosed computer to perform the claimed function. MPEP 2161.01(I).

Claims 1, 16, and 20 recite "automatically generating a first database entry that includes computer-readable links to the one or more web pages...", subject matter that was not described in the specification in such a way as to reasonably convey to one skilled in the relevant art that the inventor or a joint inventor, at the time the application was filed, had possession of the claimed invention. As best understood, there is no support for this recitation in the original disclosure of the present application. Applicant's specification at [0022] discloses "Each entry may include information from an end user identifying the patient as high-profile...", and [0064] states "The GUI may further enable a user to add a new entry to the high-profile database system...". There is no explicit disclosure in the application as filed describing automatically generating a database entry as claimed.

The examiner takes the position that, with respect to these limitations or features of the claims, the specification fails to provide an adequate written description of the invention to an extent that would sufficiently show that applicant was in possession of an invention that could operate as claimed. Simply providing a vague description, without actually explaining how to perform the claimed function(s), results in a written description problem under 112(a). It is unclear how applicant actually contemplated performing these steps, because nothing is disclosed beyond the broad statements in the specification noted above.
Therefore, applicant has failed to show the actual subject matter in their possession at the time of the invention in a way sufficient to reasonably convey to one skilled in the relevant art that applicant had possession of the claimed invention at the time the application was filed. These limitations of the claims are accordingly considered to be new matter. Appropriate correction is required.

Claim Rejections - 35 USC § 101

35 U.S.C. 101 reads as follows:

Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.

Claims 1-11, 13-16, and 19-21 are rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more. Claims 1-11, 13-16, 19, and 21 are drawn to a method, and Claim 20 is drawn to a system, which are within the four statutory categories (i.e., a process and a machine). Claims 1-11, 13-16, and 19-21 are further directed to an abstract idea on the grounds set out in detail below.

Under Step 2A, Prong 1, the steps of the claims represent an abstract idea: a series of steps reciting a process for identifying a high-profile patient and risk level and determining data access. Collecting patient data from different sources, analyzing it, and determining the patient's profile and risk are steps that could have been performed by a human arranging patient data, an abstract idea directed to organizing and managing patient information, but for the fact that the claims recite a general-purpose computer processor to implement the abstract idea. Both the instant claims and the abstract idea fall within Methods of Organizing Human Activity.
Independent Claim 1 recites the steps of: determining, by one or more processors, a patient entity; obtaining, by the one or more processors, a first set of data associated with the patient entity from one or more database systems, the first set of data comprising first entity identification data; obtaining, by the one or more processors and using the first set of data, a second set of data from one or more web pages, the second set of data comprising second entity identification data and risk level indication data; comparing, by the one or more processors, the first entity identification data with the second entity identification data; determining, by the one or more processors, that a threshold match exists between the first entity identification data and the second entity identification data based on the comparing the first entity identification data with the second entity identification data; generating a risk level associated with the patient entity and a confidence rate, using a trained machine learning model, the risk level indication data and risk criteria; the machine learning model trained using first risk level indication training data obtained from a second one or more web pages and the risk criteria and/or second risk level indication training data provided via user input to determine an association between the one or more risk levels and one or more factors identified in the first risk level indication training data and/or the second risk level indication training data based on weights or biases assigned to the one or more factors; determining, by the one or more processors, that a threshold risk exists with respect to the patient entity based on the generated risk level and the confidence rate; associating, by the one or more processors, the patient entity with a high-profile indicator based upon determining that the threshold risk exists, wherein associating the patient entity with the high-profile indicator comprises: automatically generating a 
first database entry that includes computer-readable links to the one or more web pages used to determine the threshold risk exist and one or more computer-readable pointers connecting the high-profile indicator to a patient entity database entry in the one or more database systems associated with the patient entity, and storing the first database entry in the one or more database systems; upon determining that the threshold risk exists with respect to the patient entity, generating a first anomalous activity threshold associated with entities associated with high-profile indicators and a second anomalous activity threshold associated with entities not associated with the high-profile indicators; monitoring access events to one or more records from the one or more database systems; generating suspicion scores associated with user access of the one or more records; comparing the suspicion scores associated with user access to records of entities associated with the high-profile indicators to the first anomalous activity threshold; comparing the suspicion scores associated with user access to records of entities not associated with the high-profile indicators to the second anomalous activity threshold; based on the comparing the suspicion scores to the first anomalous activity threshold and based on the comparing the suspicion scores to the second anomalous activity threshold, determining one or more potential breaches associated with the access events; and generating alerts based on the one or more potential breaches”. 
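As a reading aid only, and not the applicant's disclosed implementation, the dual-threshold monitoring recited at the end of claim 1 (stricter scrutiny for records of entities carrying high-profile indicators) can be sketched as follows. The threshold values, event format, and function name are hypothetical:

```python
# Hypothetical sketch of claim 1's dual-threshold monitoring: suspicion scores for
# accesses to high-profile records are compared against a stricter (lower) first
# threshold, while all other accesses are compared against a second threshold.
FIRST_THRESHOLD = 0.6   # applies to entities associated with high-profile indicators
SECOND_THRESHOLD = 0.8  # applies to all other entities

def potential_breaches(access_events):
    """access_events: iterable of (record_id, suspicion_score, is_high_profile)."""
    breaches = []
    for record_id, score, is_high_profile in access_events:
        threshold = FIRST_THRESHOLD if is_high_profile else SECOND_THRESHOLD
        if score > threshold:
            breaches.append(record_id)  # would generate an alert per the claim
    return breaches

events = [("rec-1", 0.7, True), ("rec-2", 0.7, False), ("rec-3", 0.9, False)]
print(potential_breaches(events))  # ['rec-1', 'rec-3']
```

Note that the same 0.7 score is flagged only for the high-profile record; that asymmetry is what the claim's two anomalous activity thresholds encode.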
Independent Claim 16 recites the steps of: “obtaining, by one or more processors, a first set of data associated with a patient entity from one or more database systems, the first set of data comprising first entity identification data; obtaining, by the one or more processors, a second set of data from one or more web pages, the second set of data comprising second entity identification data and risk level indication data; generating, by the one or more processors and using a machine learning model and the risk level indication data, one or more risk levels associated with a patient entity and a confidence rate, the machine learning model trained using first risk level indication training data obtained from a second one or more web pages and risk criteria and/or second risk level indication training data provided via user input to determine an association between the one or more risk levels and one or more factors identified in the first risk level indication training data and/or the second risk level indication training data based on weights or biases assigned to the one or more factors, wherein the one or more risk levels including at least one of i) a first risk level determined based on first risk level indication data or ii) a second risk level provided via user input, the user input further including second risk level indication data; displaying, by the one or more processors and via a graphical user interface (GUI), the one or more risk levels and corresponding of the first and/or second risk level indication data for validation; receiving, by the one or more processors and via the GUI, a feedback validating or invalidating each of the one or more risk levels; updating, by the one or more processors and using the feedback and a set of features derived from the first and/or second risk level indication data corresponding to the one or more risk levels, the machine learning model configured to classify the patient entity into an appropriate risk level; 
analyzing the risk level indication data for the patient entity against the risk criteria; determining that a threshold risk exists with respect to the patient entity based on the analyzing the risk level indication data and the confidence rate; associating the patient entity with an elevated risk level based upon determining that the threshold risk exists, wherein associated the patient entity with the elevated risk level comprises: automatically generating a first database entry associated with the patient entity that includes computer-readable links to the one or more web pages used to determine the threshold risk exists, one or more computer-readable pointers connecting the elevated risk level to a patient entity database entry in one or more database systems associated with the patient entity, and the feedback, and storing the first database entry in the one or more database systems; upon determining that the threshold risk exists, generating a first anomalous activity threshold associated with entities associated with the elevated risk level and a second anomalous activity threshold associated with entities not associated with the elevated risk level; monitoring access events to one or more records from the one or more database systems; generating suspicion scores associated with user access of the one or more records; comparing the suspicion scores associated with user access to records of entities associated with the elevated risk level to the first anomalous activity threshold; comparing the suspicion scores associated with user access to records of entities not associated with the elevated risk level to the second anomalous activity threshold; based on the comparing the suspicion scores to the first anomalous activity threshold and based on the comparing the suspicion scores to the second anomalous activity threshold, determining one or more potential breaches associated with the access events; and generating alerts based on the one or more potential 
breaches”. Independent Claim 20 recites the following steps: “a memory storing instructions and a trained machine learning model configured to classify patient entities into appropriate risk levels; and at least one processor operatively connected to the memory and configured to execute the instructions to perform operations including: determining a patient entity from a database system; obtaining at least one of first risk level indication data associated with the patient entity from one or more web pages and second risk level indication data associated with the patient entity from user input, the second risk level indication data including a risk level; deriving a set of features from the first risk level indication data and the second risk level indication data; providing the set of features to the trained machine learning model; generating, using the trained machine learning model, an updated risk level associated with the patient entity and a confidence rate, the machine learning model trained by using first risk level indication training data obtained from a second one or more web pages and risk criteria and/or second risk level indication training data provided via user input to determine an association between one or more of the risk levels and one or more factors identified in the first risk level indication training data and/or the second risk level indication training data based on weights or biases assigned to the one or more factors; and associating the patient entity with a risk level indicator corresponding to the determined updated risk level and the confidence rate, wherein associating the patient entity with the risk level indicator comprises: automatically generating a first database entry that includes computer- readable links to the one or more web pages used to generate the updated risk level and one or more computer-readable pointers connecting the risk level indicator to a patient entity database entry in the database system associated with 
the patient entity, and storing the first database entry in the database system; and upon determining that the updated risk level exceeds a threshold risk level, associating the patient entity with a high-profile status and generating a first anomalous activity threshold associated with entities associated with high-profile statuses and a second anomalous activity threshold associated with entities not associated with high-profile statuses; monitoring access events to one or more records from the database system; generating suspicion scores associated with user access of the one or more records; comparing the suspicion scores associated with user access to records of entities associated with the high-profile statuses to the first anomalous activity threshold; comparing the suspicion scores associated with user access to records of entities not associated with the high-profile statuses to the second anomalous activity threshold; based on the comparing the suspicion scores to the first anomalous activity threshold and based on the comparing the suspicion scores to the second anomalous activity threshold, determining one or more potential breaches associated with the access events; and generating alerts based on the one or more potential breaches”. The limitations, as drafted, given the broadest reasonable interpretation, cover performance of the limitations by a human user/actor interacting with a computing system that constitute certain methods of organizing human activity along with mental process, thus, an abstract idea, but for the recitation of generic computer components. 
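For orientation only: the "risk level and confidence rate" generation common to all three independent claims (a model associating risk levels with weighted factors) can be illustrated with a minimal sketch. The factor names, weights, and decision threshold below are invented for illustration and do not come from the application:

```python
# Minimal illustration of classifying a patient entity into a risk level with a
# confidence rate, using weights assigned to factors (all values hypothetical).
FACTOR_WEIGHTS = {"media_mentions": 0.5, "public_office": 0.3, "user_flag": 0.2}

def classify_risk(features):
    """features: dict mapping factor name -> value in [0, 1]."""
    score = sum(FACTOR_WEIGHTS.get(name, 0.0) * value
                for name, value in features.items())
    risk_level = "high" if score >= 0.5 else "low"
    confidence = abs(score - 0.5) * 2  # distance from the decision boundary
    return risk_level, round(confidence, 3)

print(classify_risk({"media_mentions": 1.0, "public_office": 1.0}))  # ('high', 0.6)
```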
The claimed concept encompasses a user performing the limitations: obtaining a patient identity or profile from a medical record, comparing the information, determining a risk level for the patient profile, associating the risk level with the patient profile, and monitoring access to the patient information. These are steps that could be performed by a human actor interacting with other user(s) and/or a machine, and as such they identify an abstract idea. This abstract idea could have been performed by a human actor but for the fact that the claims recite a general-purpose computer processor to implement the abstract idea of configuring and obtaining, comparing, associating, and monitoring patient information following instructions. If a claim limitation, under its broadest reasonable interpretation, covers performance of the limitation by a human actor but for the recitation of generic computer components, then it falls within the "Certain Methods of Organizing Human Activity" grouping of abstract ideas. Accordingly, the claim limitations identified above recite an abstract idea. Any limitations not identified above as part of methods of organizing human activity are deemed "additional elements" and will be discussed in further detail below.

Under Step 2A, Prong 2, this judicial exception is not integrated into a practical application because the remaining elements amount to no more than general-purpose computer components programmed to perform the abstract idea, linking the abstract idea to a particular technological environment.
In particular, the claims recite additional elements such as "processor, memory, web pages, machine learning model, database, graphical user interface, network" that are recited at a high level of generality (i.e., as generic processors and a generic machine learning model) to perform steps ("updating...", "stor[ing]...", "display[ing]...", "provid[ing]...") that iteratively take input data and analyze that data to determine an output, performing generic computer functions for determining risk level(s). This amounts to no more than adding the words "apply it" (or an equivalent) to the judicial exception, or mere instructions to implement an abstract idea on a computer, or merely using a computer as a tool to perform an abstract idea, see MPEP 2106.05(f) (e.g., "automatically generating a first database entry that includes computer-readable links to the one or more web pages used to determine the threshold and one or more computer-readable pointers connecting the high-profile indicator to a patient entity database entry in the one or more database systems associated with the patient entity..."); generally linking the use of the judicial exception to a particular technological environment or field of use, see MPEP 2106.05(h); and a mere data gathering process that does not add a meaningful limitation to the abstract idea, see MPEP 2106.05(g). As set forth in the 2019 Eligibility Guidance, 84 Fed. Reg. at 55, "merely include[s] instructions to implement an abstract idea on a computer" is an example of when an abstract idea has not been integrated into a practical application. Accordingly, looking at the claim as a whole, individually and in combination, these additional elements do not integrate the abstract idea into a practical application because they do not impose any meaningful limits on practicing the abstract idea. The claim is directed to an abstract idea.
Under Step 2B, the claims do not include additional elements that are sufficient to amount to significantly more than the judicial exception because they do not present improvements to another technology or technical field, and the additional elements amount to no more than generic computer components, recited at a high level of generality, that amount to no more than adding the words "apply it" (or an equivalent) to apply the exception using generic computer components, see MPEP 2106.05(f), and a mere data gathering process that does not add a meaningful limitation to the abstract idea, see MPEP 2106.05(g). There is no indication that the combination of elements improves the functioning of a computer or improves any other technology. Their collective functions merely provide conventional computer implementation and mere instructions to apply an exception using a generic computer component. For example, applying a machine-learning model to the abstract idea cannot provide an inventive concept, see Alice, 573 U.S. at 223 ("mere recitation of a generic computer cannot transform a patent-ineligible abstract idea into a patent-eligible invention"). Therefore, whether considered alone or in combination, the additional elements do not amount to significantly more than the abstract idea.

Dependent Claims 2-11, 13-15, 19, and 21 include all of the limitations of claims 1 and 16, and therefore likewise incorporate the above-described abstract idea.
As for claims 2-4, 11, 13-15, 19, and 21: while these dependent claims add additional limitations, under the broadest reasonable interpretation those limitations further define the abstract idea noted in the independent claims, covering performance of the steps by a human actor associating/comparing/organizing/monitoring patient data, which is a certain method of organizing human activity, but for the recitation of generic computer components. These claims are similarly rejected because they neither further limit the claims to a practical application nor provide an inventive concept such that the claims would be subject matter eligible. Mere instructions to apply an exception using a generic computer component cannot provide an inventive concept ("significantly more").

As for claims 5-10, these claims likewise recite limitations that, under the broadest reasonable interpretation, further define the abstract idea noted in the independent claims, covering performance of the steps by a human actor associating/comparing/organizing/monitoring patient data, and are similarly rejected for the same reasons. These claims recite the additional elements "graphical user interface, machine learning model, processors, database, web pages". The recitation of these additional elements merely implements the computer as a tool, such that it amounts to no more than adding the words "apply it" (or an equivalent) to the judicial exception, or mere instructions to implement an abstract idea on a computer, or merely using a computer as a tool to perform the steps (e.g.,
"train[ing]...", "display[ing]...") of an abstract idea, see MPEP 2106.05(f), and a mere data gathering process that does not add a meaningful limitation to the abstract idea, see MPEP 2106.05(g). Accordingly, these additional elements do not integrate the abstract idea into a practical application because they do not impose any meaningful limits on practicing the abstract idea. The claims do not include additional elements that are sufficient to amount to significantly more than the judicial exception. Mere instructions to apply an exception using a generic computer component cannot provide an inventive concept ("significantly more").

Response to Amendment

Applicant's arguments filed 03/03/2026 have been fully considered by the Examiner and are addressed as follows. In the remarks, Applicant argues in substance as set out below, with arguments directed to the 35 U.S.C. § 101 rejection at pages 10-15.

On page 12 of the remarks, Applicant argues that the claims are not directed to a judicial exception, asserting: "Amended Claim 1 is directed to a computer-implemented method that can automatically identify 'breaches in patient data'. The claimed method is 'necessarily rooted in computer technology' in order to overcome a problem specifically arising in medical data, such as identifying data breaches in medical data. See DDR Holdings, LLC v. Hotels.com, L.P., 773 F.3d 1245, 1256 (Fed. Cir. 2014)." Examiner respectfully disagrees. The claims are given their broadest reasonable interpretation for the purpose of determining whether they encompass a judicial exception.
The claim limitations, given their broadest reasonable interpretation, recite steps, i.e., analyzing risk level from patient information based on the received information about the patient and monitoring access to the patient information. These steps have been analyzed under Step 2A, Prong One as reciting a process for obtaining/collecting patient data from different sources, comparing and analyzing the risk level of the data, monitoring access events to the patient information, generating a suspicion score for access, and providing an alert. These are steps of associating/comparing/organizing/monitoring patient data, along with observation, evaluation, judgment, and opinion, that recite a process which can be performed by a human following instructions, but for the fact that the claims recite a general-purpose computer processor to implement the abstract idea; both the instant claims and the abstract idea fall within Certain Methods of Organizing Human Activity and Mental Processes.

Furthermore, as described in the response to remarks in the final OA mailed 09/03/2025, the claims in DDR Holdings did not disclose any non-conventional technology but were patent eligible under 101 because they used known information in combination with conventional technology to present the information in a new form, combining content on a website in a way that provides a solution rooted in computer technology in order to overcome a problem specifically arising in the realm of computer networks. In contrast, the instant claims do not recite any solution to a technical problem or provide a technological improvement similar to that in DDR Holdings.

On pages 12-13 of the remarks, Applicant argues: "As described in more detail below, Applicant respectfully submits that the claims are directed to improvements in computer technology under the same principles emphasized in Ex parte Desjardins.
Applicant respectfully submits that the human mind is not equipped to at least 'automatically generat[e], stor[e]..., monitor..., determin[e]..., generat[e] alerts'... amended Claim 1 recites meaningful limitations to the claimed method, directly applicable to the problem and advantages presented... are directly applicable [to] addressing the deficiencies in existing systems that detect breaches in patient data and contribute to the advantages described above and elsewhere in the application." Examiner respectfully disagrees.

As to the argument that the claims are directed to improvements in computer technology under the same principles emphasized in Ex parte Desjardins, Examiner asserts, first, that the Appeals Review Panel (ARP) found the Desjardins claims to be directed to methods for training artificial intelligence/machine learning (AI/ML) models and found that the claims improved the functioning of the computer itself, such that they were not "directed to" an abstract idea under Alice step one. Second, the improvement in Enfish, for example, provided an improvement to a computer function and/or technical field: a self-referential table for a computer database, a particular improvement in the computer's functionality that improves the way a computer stores and retrieves data in memory. The instant claims and specification do not recite an improvement to technology, as in Enfish or Desjardins, but rather the performance of an abstract idea, such as analyzing patient information and monitoring suspicious access to the patient information, while using well-known computing systems and components.
Moreover, while the claims, under BRI, recite steps such as monitoring, determining, and generating warnings that encompass a judicial exception defining the identified abstract idea, the claims recite additional elements that amount to no more than adding the words "apply it" (or an equivalent) to the judicial exception, e.g., generating an entry and storing it in a database. The claims, as a whole, recite an abstract idea for identifying a high-profile patient and associating risk, and nowhere do the claims or the specification recite a technical solution improving the functioning of a computer or improving another technology or technical field; rather, the claimed invention describes a solution addressing an administrative activity, identifying high-profile patient entities faster than a manual classification process (Specification, [0020]). In addition, specification paragraph [0025], cited in Applicant's remarks, does not indicate an improvement to a computer system or the addressing of a technical issue, but rather describes standard behavior of any computing system performing data processing, and will not cause the claim to be "directed to" something other than an abstract idea.

As mentioned above, Applicant's invention improves the abstract idea of a process to identify and monitor patient entities with high risk levels by leveraging computing technology, e.g., a processor and a machine learning model, in a well-understood manner. However, improving upon an abstract idea does not make the abstract idea any less abstract. As discussed in the rejection above, the components of the instant system, when taken alone, each execute in a manner conventionally expected of these components. At best, Applicant has claimed features that may improve an abstract idea. However, an improved abstract idea is still abstract. (SAP America v.
InvestPic at *2-3 ("We may assume that the techniques claimed are 'groundbreaking, innovative, or even brilliant,' but that is not enough for eligibility. Nor is it enough for subject-matter eligibility that claimed techniques be novel and nonobvious in light of prior art, passing muster under 35 U.S.C. §§ 102 and 103. See Mayo Collaborative Servs. v. Prometheus Labs., Inc., 566 U.S. 66, 89-90 (2012); Synopsys, Inc. v. Mentor Graphics Corp., 839 F.3d 1138, 1151 (Fed. Cir. 2016) ('[A] claim for a new abstract idea is still an abstract idea.')."). On pages 14-15 of the remarks, Applicant argues: "However, even if the claims were found to involve an abstract idea, which they should not be, the claims are directed to 'significantly more' than an abstract idea and thus comply with Section 101. Further, as noted above and emphasized in Ex parte Desjardins, improvements to technology, including to computer technology, can be accomplished through software improvements. Accordingly, improvements to a computer can be accomplished even when generic computer components are used... These limitations are not 'incidental to the primary process' of Claim 1, reciting more than extra-solution activity. MPEP 2106.05(g). Rather, they are significant to the claimed method 'for identifying breaches of patient data.'" Examiner respectfully disagrees. As described above, the instant claims are not analogous to Ex parte Desjardins or Enfish and do not describe any improvement to a computing system, its components, or a technological field. In light of the Alice decision and the guidance provided in the 2019 PEG, the features listed in the claims are not considered an improvement to another technology or technical field, or an improvement to the functioning of the computer itself. At best, these features may be considered to address an organizational problem of managing personal information using computers.
The alleged benefits that Applicant touts are due to administrative decisions implemented using computers, rather than to any improvement to another technology or technical field, or to the functioning of the computer itself. Relying on computing devices to perform routine tasks more quickly or more accurately is insufficient to render a claim patent eligible (see Alice, 134 S. Ct. at 2359: "use of a computer to create electronic records, track multiple transactions, and issue simultaneous instructions" is not an inventive concept). There is a fundamental difference between computer-functionality improvements, on the one hand, and uses of existing computers as tools to perform a particular task, on the other. Nothing in the pending claims suggests, for example, that the claimed "processor," the "machine-learning model" applied by said processor, or the "computing device" is somehow made more efficient, or that the manner in which these elements carry out their basic functions is otherwise improved in any way. The alleged advantages that Applicant touts do not concern an improvement in computer capabilities but instead relate to an alleged improvement in determining high-profile patient information risk, for which a computer is used as a tool in its ordinary capacity. Therefore, Applicant's argument is found unpersuasive, and Examiner maintains the § 101 rejections of the claims, which have been updated to address Applicant's arguments.

Conclusion

Any inquiry concerning this communication or earlier communications from the examiner should be directed to ALAAELDIN ELSHAER, whose telephone number is (571) 272-8284. The examiner can normally be reached M-Th, 8:30-5:30. Examiner interviews are available via telephone, in person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, MAMON OBEID, can be reached at Mamon.Obeid@USPTO.GOV. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300. Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000. /ALAAELDIN M. ELSHAER/ Primary Examiner, Art Unit 3687

Prosecution Timeline

Aug 07, 2023
Application Filed
Oct 13, 2023
Non-Final Rejection — §101, §102, §112
Dec 28, 2023
Interview Requested
Jan 05, 2024
Examiner Interview Summary
Jan 05, 2024
Applicant Interview (Telephonic)
Jan 17, 2024
Response Filed
Jan 24, 2024
Final Rejection — §101, §102, §112
May 15, 2024
Response after Non-Final Action
May 20, 2024
Response after Non-Final Action
May 20, 2024
Examiner Interview (Telephonic)
May 28, 2024
Request for Continued Examination
May 29, 2024
Response after Non-Final Action
Jun 18, 2024
Non-Final Rejection — §101, §102, §112
Sep 24, 2024
Response Filed
Oct 11, 2024
Final Rejection — §101, §102, §112
Dec 16, 2024
Request for Continued Examination
Dec 17, 2024
Response after Non-Final Action
Jan 28, 2025
Non-Final Rejection — §101, §102, §112
Jul 23, 2025
Examiner Interview Summary
Jul 23, 2025
Applicant Interview (Telephonic)
Jul 31, 2025
Response Filed
Aug 29, 2025
Final Rejection — §101, §102, §112
Mar 03, 2026
Request for Continued Examination
Mar 19, 2026
Response after Non-Final Action
Mar 26, 2026
Non-Final Rejection — §101, §102, §112 (current)
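
The timeline above can be reconciled with the "OA Round 7" counter shown in the header by counting rejection events. A minimal sketch (dates transcribed from this page; the counting rule is an assumption about how the dashboard tallies rounds, not its actual implementation):

```python
from datetime import date

# Rejection events transcribed from the prosecution timeline above.
events = [
    (date(2023, 10, 13), "Non-Final Rejection"),
    (date(2024, 1, 24),  "Final Rejection"),
    (date(2024, 6, 18),  "Non-Final Rejection"),
    (date(2024, 10, 11), "Final Rejection"),
    (date(2025, 1, 28),  "Non-Final Rejection"),
    (date(2025, 8, 29),  "Final Rejection"),
    (date(2026, 3, 26),  "Non-Final Rejection"),
]

# Assumed rule: each office action (non-final or final) counts as one round.
oa_rounds = sum(1 for _, name in events if "Rejection" in name)
print(oa_rounds)  # 7, matching "OA Round 7 (Non-Final)"
```

Note that this simple count treats non-final and final actions as separate rounds; a tool that groups each non-final/final pair would report roughly half as many.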

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12592315
APPARATUS, SYSTEM, METHOD, AND COMPUTER-READABLE RECORDING MEDIUM FOR DISPLAYING TRANSPORT INDICATORS ON A PHYSIOLOGICAL MONITORING DEVICE
2y 5m to grant Granted Mar 31, 2026
Patent 12537083
SYSTEMS AND METHODS FOR REGULATING PROVISION OF MESSAGES WITH CONTENT FROM DISPARATE SOURCES BASED ON RISK AND FEEDBACK DATA
2y 5m to grant Granted Jan 27, 2026
Patent 12525337
METHOD AND APPARATUS FOR SELECTING MEDICAL DATA FOR ANNOTATION
2y 5m to grant Granted Jan 13, 2026
Patent 12499999
SYSTEMS AND METHODS FOR TARGETED MEDICAL DOCUMENT REVIEW
2y 5m to grant Granted Dec 16, 2025
Patent 12424338
TRANSFER LEARNING TECHNIQUES FOR USING PREDICTIVE DIAGNOSIS MACHINE LEARNING MODELS TO GENERATE TELEHEALTH VISIT RECOMMENDATION SCORES
2y 5m to grant Granted Sep 23, 2025
Study what changed to get past this examiner. Based on 5 most recent grants.

Prosecution Projections

7-8
Expected OA Rounds
36%
Grant Probability
67%
With Interview (+31.3%)
2y 10m
Median Time to Grant
High
PTA Risk
Based on 208 resolved cases by this examiner. Grant probability derived from career allow rate.
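
The projection figures above follow from the examiner's career data by simple arithmetic. A hedged sketch of the derivation (variable names are illustrative, not the tool's actual model):

```python
# Career data shown on this page for the examiner.
granted = 74           # career grants
resolved = 208         # resolved cases (grants plus abandonments)

# Baseline grant probability is the career allow rate.
grant_probability = granted / resolved * 100          # ~35.6%, displayed as 36%

# The dashboard reports a +31.3 percentage-point lift among resolved
# cases that included an examiner interview.
interview_lift = 31.3
with_interview = grant_probability + interview_lift   # ~66.9%, displayed as 67%

print(f"Grant probability: {grant_probability:.0f}%")
print(f"With interview:   {with_interview:.0f}%")
```

This assumes the "with interview" figure is an additive adjustment to the baseline rate; a model conditioning on interview outcomes directly could differ.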
