Prosecution Insights
Last updated: April 19, 2026
Application No. 18/036,833

WORKLIST PRIORITIZATION USING NON-PATIENT DATA FOR URGENCY ESTIMATION

Non-Final OA (§101, §103)
Filed
May 12, 2023
Examiner
JACKSON, JORDAN L
Art Unit
2857
Tech Center
2800 — Semiconductors & Electrical Systems
Assignee
Koninklijke Philips N.V.
OA Round
3 (Non-Final)
Grant Probability: 40% (Moderate)
Expected OA Rounds: 3-4
Time to Grant: 3y 3m
With Interview: 79%

Examiner Intelligence

Career Allow Rate: 40% (72 granted / 179 resolved; -27.8% vs TC avg)
Interview Lift: +38.8% allowance for resolved cases with an interview (strong)
Typical Timeline: 3y 3m average prosecution; 37 applications currently pending
Career History: 216 total applications across all art units
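The card figures above are simple derived statistics. As a quick check (a sketch with our own helper names, using only the numbers reported in this section), the 40% career allow rate and the +38.8% interview lift fit together as follows:

```python
# Hypothetical helpers reproducing the examiner stats above.
# Inputs (72 granted / 179 resolved, 79% with-interview allowance)
# come from the report; the function names are our own.

def allow_rate(granted: int, resolved: int) -> float:
    """Career allowance rate as a percentage of resolved cases."""
    return 100.0 * granted / resolved

def interview_lift(rate_with: float, rate_without: float) -> float:
    """Percentage-point gain attributed to an examiner interview."""
    return rate_with - rate_without

career = allow_rate(72, 179)          # ≈ 40.2%, shown as "40%"
lift = interview_lift(79.0, career)   # ≈ +38.8 points
print(f"career allow rate: {career:.1f}%  interview lift: +{lift:.1f} pts")
```

So the headline "79% with interview" is just the baseline rate plus the lift; the dashboard rounds 40.2% down to 40%.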

Statute-Specific Performance

§101: 38.9% (-1.1% vs TC avg)
§103: 33.8% (-6.2% vs TC avg)
§102: 9.9% (-30.1% vs TC avg)
§112: 13.6% (-26.4% vs TC avg)
Black line = Tech Center average estimate • Based on career data from 179 resolved cases
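The per-statute deltas above imply a common baseline. Assuming each delta is simply the examiner's rate minus the Tech Center average (our reading of the chart, not a documented formula), the baseline can be recovered from the four pairs:

```python
# Recovering the Tech Center average implied by each statute's delta.
# (examiner_rate, delta_vs_tc) pairs are the figures from the chart above.
stats = {
    "§101": (38.9, -1.1),
    "§103": (33.8, -6.2),
    "§102": (9.9, -30.1),
    "§112": (13.6, -26.4),
}
for statute, (examiner_rate, delta) in stats.items():
    tc_avg = examiner_rate - delta  # assumed: delta = examiner - TC average
    print(f"{statute}: implied TC average = {tc_avg:.1f}%")
```

Each statute's implied TC average works out to 40.0%, consistent with a single ~40% "black line" estimate applied across the chart.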

Office Action

§101, §103
Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Continued Examination Under 37 CFR 1.114

A request for continued examination under 37 CFR 1.114, including the fee set forth in 37 CFR 1.17(e), was filed in this application after final rejection. Since this application is eligible for continued examination under 37 CFR 1.114, and the fee set forth in 37 CFR 1.17(e) has been timely paid, the finality of the previous Office action has been withdrawn pursuant to 37 CFR 1.114. Applicant's submission filed on 02 March 2026 has been entered.

Formal Matters

Applicant's response, filed 02 March 2026, has been fully considered. The following rejections and/or objections are either reiterated or newly applied. They constitute the complete set presently being applied to the instant application.

Status of Claims

Claims 1-2, 5, 8-12, 15, and 18-23 are currently pending and have been examined. Claims 1, 11, and 20 have been amended. Claims 21-23 have been added. Claims 1-2, 5, 8-12, 15, and 18-23 have been rejected.

Priority

The instant application claims the benefit of priority under 35 U.S.C. § 119(e) or under 35 U.S.C. § 120, 121, or 365(c). Accordingly, the effective filing date for the instant application is 17 November 2020, claiming benefit to Provisional Application 63/114,741.

Claim Rejections - 35 USC § 101

35 U.S.C. 101 reads as follows: Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.

Claims 1-2, 5, 8-12, 15, and 18-23 are rejected under 35 U.S.C. § 101 because the claimed invention is directed to a judicial exception (i.e., a law of nature, a natural phenomenon, or an abstract idea) without significantly more.
Step 1 – Statutory Categories of Invention: Claims 1-2, 5, 8-12, 15, and 18-23 are drawn to a method, system, or manufacture, which are statutory categories of invention.

Step 2A – Judicial Exception Analysis, Prong 1: Independent claim 1 recites a method for generating a prioritized worklist of unread image studies based on a predicted urgency for each of the plurality of unread image studies in part performing the steps of providing a [deep learning neural network] trained to predict a classification of findings for an unread image study to derive both an urgency score for reading of the unread image study therefrom and a radiological reading parameter for the unread image study comprising: (i) an estimated review time, wherein the [deep learning neural network] is trained using training data comprising a plurality of previously read image studies, the previously read image studies including a classification of findings, radiologist-specific data, and patient data, the patient data including one or more of a patient's age, gender, symptoms, and co-morbidities; and (ii) a plurality of read image studies comprising both a respective urgency score generated by an initially trained deep learning network and a respective user-generated urgency score; applying the [trained deep learning network] to each of the plurality of unread image studies to generate an urgency score predicting an urgency for the reading of the unread image study and a radiological reading parameter for the unread image study comprising an estimated review time; and generating a prioritized worklist for the plurality of unread image studies based on the generated urgency score for each of the plurality of unread image studies.

Independent claim 11 recites a system for generating a prioritized worklist of unread image studies based on a predicted urgency for each of the plurality of unread image studies in part performing the same abstract idea identified in independent claim 1.
Independent claim 20 recites a non-transitory computer-readable storage medium in part performing the steps of the same abstract idea identified in independent claim 1. These steps of collecting unread imaging data and triaging the images to distribute to different radiologists based on a mathematical algorithm amount to methods of organizing human activity, which includes functions relating to interpersonal and intrapersonal activities, such as managing relationships or transactions between people, social activities, and human behavior (MPEP § 2106.04(a)(2)(II)(C), citing the abstract idea grouping for methods of organizing human activity for managing personal behavior or relationships or interactions between people; also note MPEP § 2106.04(a)(2)(II) stating certain activity between a person and a computer may fall within the “certain methods of organizing human activity” grouping).

Examiner notes that while not positively claimed (here the trained model is merely provided after being previously trained), the steps of training a neural network to perform the task of the abstract idea would amount to a mathematical concept and are not subject matter eligible in light of the 2024 Guidance Update on Patent Subject Matter Eligibility, Including on Artificial Intelligence. The use of a computer to train a generic neural network utilizing the training embodiments offered in the instant specification (see at least ¶ 0020-23 describing the mathematical equation for training the neural network) amounts to applying data to an algorithm and reporting the results (MPEP § 2106.05(f)(2), see case involving a commonplace business method or mathematical algorithm being applied on a general purpose computer within the “Other examples… i.”), consistent with Example 47, claim 2.
The techniques outlined, and Examiner notes the known methods of training to one of ordinary skill in the art, are mathematical algorithms or mental processes of labeling and fitting data to a particular model representation. Examiner notes that the deep learning neural network indicated in brackets above may be replaced with a “model” or “algorithm” in Step 2A Prong 1, and the deep learning neural network is also analyzed under Step 2A and Step 2B as an additional element in pursuit of compact prosecution.

Dependent claims 2 and 12 recite, in part, wherein the radiologist-specific data includes urgency scores for the previously read image studies so that the deep learning neural network is trained to directly predict the urgency score for reading of the unread image study. Dependent claims 5 and 15 recite, in part, wherein the radiologist-specific data includes one of a duration of reading time of the previously read image study, a radiologist specialty, and whether a viewing tool was used via the radiologist during a reading of the previously read image study. Dependent claims 8 and 18 recite, in part, distributing each of the unread image studies to one of a plurality of users based on the predicted urgency. Dependent claims 9 and 19 recite, in part, wherein distributing each of the unread image studies is further based on one of a predicted classification of findings and a predicted radiological reading parameter. Dependent claims 21-23 recite, in part, wherein generating a prioritized worklist for the plurality of unread image studies is further based on the generated radiological reading parameter for the plurality of unread image studies.

Each of these steps of the preceding dependent claims only serves to further limit or specify the features of independent claims 1, 11, or 20 accordingly, and hence they are nonetheless directed towards fundamentally the same abstract idea as the independent claims and utilize the additional elements analyzed below in the expected manner.
Step 2A – Judicial Exception Analysis, Prong 2: This judicial exception is not integrated into a practical application because the additional elements within the claims only amount to instructions to implement the judicial exception using a computer [MPEP 2106.05(f)]. Claim 1 recites a computer. Claim 11 recites a non-transitory computer readable storage medium storing an executable program; and a processor executing the executable program. Claim 20 recites a non-transitory computer-readable storage medium including a set of instructions executable by a processor. The specification does not recite any specific structure for the computer, memory, processor, or storage medium. The computer and related hardware only serve as a tool to apply data to an algorithm and report the results (MPEP § 2106.05(f)(2), see case involving a commonplace business method or mathematical algorithm being applied on a general purpose computer within the “Other examples… i.”), amounting to instructions to implement the abstract idea using a general purpose computer. Alice Corp. Pty. Ltd. v. CLS Bank Int’l, 134 S. Ct. 2347 (2014).

Claims 1, 11, and 20 recite a deep learning neural network. The specification provides the known method for training such a model (see at least ¶ 0020-23 describing the mathematical equation for training the neural network). The deep learning neural network is likewise merely recited as a tool to apply data to an algorithm and report the results (MPEP § 2106.05(f)(2)), amounting to instructions to implement the abstract idea using a general purpose computer. Alice Corp. Pty. Ltd. v. CLS Bank Int’l, 134 S. Ct. 2347 (2014).

Claims 1, 11, and 20 recite receiving a plurality of unread image studies.
The limitations are only recited as a tool which only serves to input data for use by the abstract idea (MPEP § 2106.05(g), insignificant pre/post-solution activity that amounts to mere data gathering to obtain input) and are therefore not a practical application of the recited judicial exception. Claims 10 and 19 recite storing results of a reading of the unread image study to a training database for continued training of the deep learning neural network. Storing results in a database only serves as extra-solution activity incidental to the primary process that is merely a nominal or tangential addition to the claim (MPEP § 2106.05(g), insignificant pre/post-solution activity) and is therefore not a practical application of the recited judicial exception. The above claims, as a whole, are therefore directed to an abstract idea.

Step 2B – Additional Elements that Amount to Significantly More: The present claims do not include additional elements that are sufficient to amount to more than the abstract idea because the additional elements or combination of elements amount to no more than a recitation of instructions to implement the abstract idea on a computer. Claim 1 recites a computer. Claim 11 recites a non-transitory computer readable storage medium storing an executable program; and a processor executing the executable program. Claim 20 recites a non-transitory computer-readable storage medium including a set of instructions executable by a processor. Claims 1, 11, and 20 recite a deep learning neural network. Each of these elements is only recited as a tool for performing steps of the abstract idea, such as the use of the storage mediums to store data, the computer and data processing devices to apply the algorithm, and the display device to display selected results of the algorithm.
These additional elements therefore only amount to mere instructions to perform the abstract idea using a computer and are not sufficient to amount to significantly more than the abstract idea (MPEP § 2106.05(f), see for additional guidance on the “mere instructions to apply an exception”). Each additional element under Step 2A, Prong 2 is analyzed in light of the specification’s explanation of the additional element’s structure. The claimed invention’s additional elements do not have sufficient structure in the specification to be considered anything other than a well-understood, routine, and conventional use of generic computer components. Note that the specification can support the conventionality of generic computer components if “the additional elements are sufficiently well-known that the specification does not need to describe the particulars of such additional elements to satisfy 35 U.S.C. § 112(a)” (MPEP § 2106.07(a)(III)(A), integrating the evidentiary requirements in making a § 101 rejection as established in Berkheimer in III. Impact on Examination Procedure, A. Formulating Rejections, 1. on p. 3).

Claims 1, 11, and 20 recite receiving a plurality of unread image studies. The courts have recognized receiving or transmitting data over a network as well-understood, routine, conventional activity when claimed in a merely generic manner (e.g., at a high level of generality) or as insignificant extra-solution activity (MPEP § 2106.05(d)(II), other types of activities, example i., receiving or transmitting data over a network; OIP Techs., Inc. v. Amazon.com, Inc., 788 F.3d 1359, 1363, 115 USPQ2d 1090, 1093 (Fed. Cir. 2015) (sending messages over a network)). Claims 10 and 19 recite storing results of a reading of the unread image study to a training database for continued training of the deep learning neural network.
The courts have likewise recognized storing and retrieving information in memory as a well-understood, routine, conventional computer function when claimed in a merely generic manner (e.g., at a high level of generality) or as insignificant extra-solution activity (MPEP § 2106.05(d)(II)). Thus, taken alone, the additional elements do not amount to significantly more than the above-identified judicial exception. Looking at the limitations as an ordered combination adds nothing that is not already present when looking at the elements taken individually. Their collective functions merely provide conventional computer implementation. Claims 1-2, 5, 8-12, 15, and 18-23 are therefore rejected under 35 U.S.C. § 101 as being directed to non-statutory subject matter.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action: A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1-2, 5, 8-12, 15, and 18-23 are rejected under 35 U.S.C. 103 as being unpatentable over Annarumma et al., Automated Triaging of Adult Chest Radiographs with Deep Artificial Neural Networks, 291 Radiology 196-202 (2019) [hereinafter Annarumma] in view of Baltruschat et al., Smart Chest X-ray Worklist Prioritization using Artificial Intelligence: A Clinical Workflow Simulation, eprint arXiv:2001.08625 (June 18, 2020) [hereinafter Baltruschat].
As per claim 1, Annarumma teaches on the following limitations of the claim: a computer-implemented method for generating a prioritized worklist of unread image studies based on a predicted urgency for each of the plurality of unread image studies, the method comprising: is taught in the Automated Image Prioritization: Simulation Study on p. 197 and § NLP-generated Radiograph Annotation and Labeling on p. 197 (teaching on a computer implemented artificial intelligence system applying a deep convolutional neural network for radiology imaging workflow triage based on a priority level); providing a deep learning neural network trained to predict a classification of findings for an unread image study to derive both an urgency score for reading of the unread image study therefrom ..., wherein the deep learning neural network is trained using training data comprising is taught in the § Automated Image Prioritization: Simulation Study on p. 197, § NLP-generated Radiograph Annotation and Labeling on p. 197, Table 1 on p. 199, and § Deep Learning Architecture for Criticality Prediction from Image Data on p. 197 (teaching on training the deep convolutional neural network for determining the image priority level for new unreviewed images wherein the CNN processes and classifies unreviewed images to determine the predicted priority level wherein the processing includes utilizing computer vision to determine a radiologic label (treated as synonymous to a classification of findings and radiological reading parameters)); (i) a plurality of previously read image studies, the previously read image studies including a classification of findings,..., and patient data, the patient data including one or more of a patient's age, gender, symptoms, and co-morbidities; and is taught in the § NLP-generated Radiograph Annotation and Labeling on p. 197, Table 1 on p. 199, and § Deep Learning Architecture for Criticality Prediction from Image Data on p. 
197 (teaching on a training data set including labeled radiographs wherein the data set was based on patient age, image type, radiologic label (treated as synonymous to a classification of findings), and a priority level); (ii) a plurality of read image studies comprising both a respective urgency score generated by an initially trained deep learning network and a respective user-generated urgency score is taught in the § NLP-generated Radiograph Annotation and Labeling on p. 197, Table 1 on p. 199, and § Deep Learning Architecture for Criticality Prediction from Image Data on p. 197 (teaching on training the deep convolutional neural network for determining the image priority level for new unreviewed images wherein the CNN processes and classifies unreviewed images to determine the predicted priority level from training data set including labeled radiographs wherein the data set was based on patient age, image type, radiologic label (treated as synonymous to a classification of findings), and a priority level label); receiving a plurality of unread image studies; applying the trained deep learning network to each of the plurality of unread image studies to generate an urgency score predicting an urgency for the reading of the unread image study is taught in the § NLP-generated Radiograph Annotation and Labeling on p. 197, Table 1 on p. 199, and § Deep Learning Architecture for Criticality Prediction from Image Data on p. 197 (teaching on the CNN processing and classifying unreviewed images to determine the predicted priority level wherein the processing includes utilizing computer vision to determine a radiologic label (treated as synonymous to a classification of findings and radiological reading parameters)); -AND- generating a prioritized worklist for the plurality of unread image studies based on the generated urgency score for each of the plurality of unread image studies is taught in the Automated Image Prioritization: Simulation Study on p. 
197 and § NLP-generated Radiograph Annotation and Labeling on p. 197 (teaching on applying a deep convolutional neural network for radiology imaging workflow triage based on the determined priority level). Annarumma fails to teach the following limitation of claim 1. Baltruschat, however, does teach the following: and a radiology reading parameter for the unread image study comprising an estimated review time is taught in the § 3.2. CXR Generation and Reporting Time Analysis on p. 7 and § 3.5. Workflow Simulations and Figure 5 on p. 8-9 (teaching on considering a radiologist's reporting speed (RTAT) in a CNN model when creating an unread report distribution prioritization to reduce and report the average RTAT); radiologist-specific data is taught in the § 3.2. CXR Generation and Reporting Time Analysis on p. 7 (teaching on considering a radiologist's reporting speed when creating an unread report distribution prioritization); -AND- and a radiology reading parameter for the unread image study comprising an estimated review time is taught in the § 3.2. CXR Generation and Reporting Time Analysis on p. 7 and § 3.5. Workflow Simulations and Figure 5 on p. 8-9 (teaching on considering a radiologist's reporting speed (RTAT) in a CNN model when creating an unread report distribution prioritization to reduce and report the average RTAT).

One of ordinary skill in the art before the effective filing date would combine the workflow optimization algorithm of Annarumma to include considering historical radiologist examination time of Baltruschat with the motivation of “analyz[ing] the current workflow in a radiology department … including all effects, such as different patient frequency during day and night” (Baltruschat, § 2.3 Workflow Simulation on p. 3-5). Claims 11 and 20 are rejected under the same rationale.

As per claim 2, the combination of Annarumma and Baltruschat discloses all of the limitations of claim 1.
Annarumma also discloses the following: the method of claim 1, wherein the radiologist-specific data includes urgency scores for the previously read image studies so that the deep learning neural network is trained to directly predict the urgency score for reading of the unread image study is taught in the § NLP-generated Radiograph Annotation and Labeling on p. 197, Table 1 on p. 199, and § Deep Learning Architecture for Criticality Prediction from Image Data on p. 197 (teaching on the training data set including labeled radiographs including a determined priority level (treated as synonymous to radiologist-specific data)). Claim 12 is rejected under the same rationale.

As per claim 5, the combination of Annarumma and Baltruschat discloses all of the limitations of claim 1. Annarumma fails to teach the following; Baltruschat, however, does disclose: the method of claim 1, wherein the radiologist-specific data includes one of a duration of reading time of the previously read image study, a radiologist specialty, and whether a viewing tool was used via the radiologist during a reading of the previously read image study is taught in the § 3.2. CXR Generation and Reporting Time Analysis on p. 7 (teaching on considering a radiologist's reporting speed when creating an unread report distribution prioritization). One of ordinary skill in the art before the effective filing date would combine the workflow optimization algorithm of Annarumma to include considering historical radiologist examination time of Baltruschat with the motivation of “analyz[ing] the current workflow in a radiology department … including all effects, such as different patient frequency during day and night” (Baltruschat, § 2.3 Workflow Simulation on p. 3-5). Claim 15 is rejected under the same rationale.

As per claim 8, the combination of Annarumma and Baltruschat discloses all of the limitations of claim 1.
Annarumma also discloses the following: the method of claim 1, further comprising: distributing each of the unread image studies to one of a plurality of users based on the predicted urgency is taught in the § Automated Image Prioritization: Simulation Study on p. 197 (teaching on inserting the unreviewed image into a “dynamic reporting queue” on the basis of its predicted urgency and the waiting time of other already queued radiographs). Claim 18 is rejected under the same rationale.

As per claim 9, the combination of Annarumma and Baltruschat discloses all of the limitations of claim 8. Annarumma also discloses the following: the method of claim 8, wherein distributing each of the unread image studies is further based on one of predicted classification of findings and a predicted radiological reading parameters is taught in the § NLP-generated Radiograph Annotation and Labeling on p. 197, Table 1 on p. 199, and § Automated Image Prioritization: Simulation Study on p. 197-198 (teaching on the CNN processing and classifying unreviewed images to determine the predicted priority level wherein the processing includes utilizing computer vision to determine a radiologic label (treated as synonymous to a classification of findings and radiological reading parameters) and a predetermined clinical noise parameter (treated as synonymous to a predicted radiological reading parameter as the noise parameter represents an external clinical effect on the read order)). Claim 19 is rejected under the same rationale.

As per claim 10, the combination of Annarumma and Baltruschat discloses all of the limitations of claim 1. Annarumma also discloses the following: the method of claim 1, further comprising: storing results of a reading of the unread image study to a training database for continued training of the deep learning neural network is taught in the § Data Set on p.
197 (teaching on maintaining a set of labeled training data to further test the validity of the model in the future).

As per claim 21, the combination of Annarumma and Baltruschat discloses all of the limitations of claim 1. Annarumma fails to teach the following; Baltruschat, however, does disclose: the computer-implemented method of claim 1, wherein generating a prioritized worklist for the plurality of unread image studies is further based on the generated radiological reading parameter for the plurality of unread image studies is taught in the § 3.2. CXR Generation and Reporting Time Analysis on p. 7 and § 3.5. Workflow Simulations and Figure 5 on p. 8-9 (teaching on considering a radiologist's reporting speed (RTAT) in a CNN model when creating an unread report distribution prioritization to reduce and report the average RTAT). One of ordinary skill in the art before the effective filing date would combine the workflow optimization algorithm of Annarumma to include considering historical radiologist examination time of Baltruschat with the motivation of “analyz[ing] the current workflow in a radiology department … including all effects, such as different patient frequency during day and night” (Baltruschat, § 2.3 Workflow Simulation on p. 3-5). Claims 22 and 23 are rejected under the same rationale.

Response to Arguments

Applicant's arguments filed 02 March 2026 with respect to 35 USC § 101 have been fully considered but they are not persuasive. Applicant first asserts the claims as a whole are not directed towards certain methods of organizing human activity, citing the lack of similar examples as evidence and distinguishing the instant claims from the rule-based instructions of In re Marco Guldenaar Holding B.V. and other case examples. Examiner disagrees. The claims recite a method of instructions a radiology triage user would follow to create a workflow list for a radiologist, similar to iii.
a mental process that a neurologist should follow when testing a patient for nervous system malfunctions, In re Meyer, 688 F.2d 789, 791-93, 215 USPQ 193, 194-96 (CCPA 1982), i.e., following a mental process to determine the urgency of a medical image.

Next, Applicant asserts performing the claimed functions would be physically and mentally impossible for a human, stating “how, for example, can a human follow instructions to prioritize image studies based on their content, without reading those image studies?”. Examiner first notes there is no “practical performance” standard under the abstract idea methods of organizing human activity subgrouping as opposed to the mental process subgrouping. However, Examiner also notes a human would be perfectly capable of providing a brief preliminary reading of a medical image with corresponding data points to determine an urgency score. The “blind or blindfolded” assertion is nonsensical; the claimed invention does not require the computer to create an urgency score without “seeing” the image. Merely performing certain actions by a general purpose computer does not disqualify a claim from reciting an abstract idea: the automation of a manual process is not enough to overcome a subject matter eligibility rejection (MPEP § 2106.05(a)(I), examples that the courts have indicated may not be sufficient to show an improvement in computer functionality, no. (iii), mere automation of manual processes).

Applicant then asserts that the claim does not recite a mathematical concept. Examiner notes this entire line of argument is irrelevant to the cited rejection. Examiner has not rejected the claim under the mathematical concept (or mental process) subgrouping. Examiner's explanation that mere training of a neural network would not change the analysis was provided as clarity to Applicant in an effort to promote compact prosecution.
As the instant claims merely apply a trained algorithm, Examiner has in no way relied on said subgrouping in the rejection.

Applicant asserts that under Step 2A Prong 2, the application of a machine learning model requires more than a general purpose computer and thus is incorrectly characterized by Examiner as “apply it”. While the claims are read in light of the specification, Examiner notes there is no evidentiary support in the instant specification that a special purpose computer or corresponding hardware is necessary to perform the steps of the instant claims. Therefore, these arguments are incommensurate with the scope of the disclosure and claims.

Next, Applicant asserts that machine learning algorithms in general provide a practical application via increased precision, accuracy, consistency, and efficiency. Examiner is not persuaded; efficiency is not enough to amount to a practical application via an improvement to computer or technology under Step 2A Prong 2 (see MPEP § 2106.05(a)(I), examples that the courts have indicated may not be sufficient to show an improvement in computer functionality: ii. accelerating a process of analyzing audit log data when the increased speed comes solely from the capabilities of a general-purpose computer, FairWarning IP, LLC v. Iatric Sys., 839 F.3d 1089, 1095, 120 USPQ2d 1293, 1296 (Fed. Cir. 2016); also see MPEP § 2106.05(f)(2) stating “claiming the improved speed or efficiency inherent with applying the abstract idea on a computer” does not provide an inventive concept, Intellectual Ventures I LLC v. Capital One Bank (USA), 792 F.3d 1363, 1367 (Fed. Cir. 2015)). Thus, the combination of the generic computer components does not provide a non-conventional and non-generic arrangement of known, conventional pieces; note this applies to Step 2B as well as Step 2A Prong 2.

Applicant's arguments filed 02 March 2026 with respect to 35 USC § 103 have been fully considered but they are not persuasive.
Applicant asserts that Annarumma fails to teach on considering the radiologist read time when creating a priority worklist, but is silent regarding the teachings of Baltruschat. Therefore, as Examiner has relied on Baltruschat to teach on the amended attributes of the instant claims, Examiner has maintained the rejection.

Conclusion

Any inquiry concerning this communication or earlier communications from the examiner should be directed to JORDAN LYNN JACKSON whose telephone number is (571) 272-5389. The examiner can normally be reached Monday-Friday, 8:30 AM-4:30 PM ET. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Arleen M. Vazquez, can be reached at 571-272-2619. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/JORDAN L JACKSON/
Primary Examiner, Art Unit 2857
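For orientation, the workflow recited in claim 1 (apply a trained model to each unread study to obtain an urgency score and an estimated review time, then sort the worklist by urgency) can be sketched as follows. This is a minimal illustration with placeholder names and mock predictions, not the applicant's implementation; `predict` stands in for the trained deep learning neural network.

```python
# Minimal sketch of the claimed worklist-prioritization flow.
# `predict` is a placeholder for the trained deep learning network;
# the study IDs and scores below are mock data for illustration only.
from dataclasses import dataclass

@dataclass
class UnreadStudy:
    study_id: str
    urgency: float = 0.0          # model-predicted urgency score
    est_review_min: float = 0.0   # model-predicted review time (minutes)

def predict(study: UnreadStudy) -> tuple[float, float]:
    """Stand-in for the trained network: (urgency score, review time)."""
    mock = {
        "chest-001": (0.91, 12.0),
        "knee-002": (0.15, 6.0),
        "head-003": (0.67, 20.0),
    }
    return mock[study.study_id]

def prioritized_worklist(studies: list[UnreadStudy]) -> list[UnreadStudy]:
    """Score each unread study, then sort most-urgent first."""
    for s in studies:
        s.urgency, s.est_review_min = predict(s)
    return sorted(studies, key=lambda s: s.urgency, reverse=True)

worklist = prioritized_worklist(
    [UnreadStudy("chest-001"), UnreadStudy("knee-002"), UnreadStudy("head-003")]
)
print([s.study_id for s in worklist])  # → ['chest-001', 'head-003', 'knee-002']
```

The §103 combination maps onto this sketch directly: Annarumma is cited for the score-and-sort steps, Baltruschat for folding an estimated review time into the prioritization.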

Prosecution Timeline

May 12, 2023
Application Filed
Jun 13, 2025
Non-Final Rejection — §101, §103
Sep 18, 2025
Response Filed
Dec 04, 2025
Final Rejection — §101, §103
Jan 29, 2026
Interview Requested
Feb 05, 2026
Applicant Interview (Telephonic)
Feb 05, 2026
Examiner Interview Summary
Mar 02, 2026
Request for Continued Examination
Mar 04, 2026
Response after Non-Final Action
Mar 07, 2026
Non-Final Rejection — §101, §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12586685: MULTIMODAL MACHINE LEARNING BASED CLINICAL PREDICTOR
Granted Mar 24, 2026 (2y 5m to grant)

Patent 12562250: PHARMACY PREDICTIVE ANALYTICS
Granted Feb 24, 2026 (2y 5m to grant)

Patent 12469594: PREDICTIVE WORK ORDER DEVICES, SYSTEMS, AND METHODS
Granted Nov 11, 2025 (2y 5m to grant)

Patent 12456545: Systems and Methods for Providing Professional Treatment Guidance for Diabetes Patients
Granted Oct 28, 2025 (2y 5m to grant)

Patent 12456550: SYSTEMS AND METHODS FOR REMOTE PATIENT MONITORING
Granted Oct 28, 2025 (2y 5m to grant)
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 3-4
Grant Probability: 40%
With Interview: 79% (+38.8%)
Median Time to Grant: 3y 3m
PTA Risk: High
Based on 179 resolved cases by this examiner. Grant probability derived from career allow rate.
