Prosecution Insights
Last updated: April 19, 2026
Application No. 18/705,234

METHODS AND SYSTEMS FOR AUTOMATED ANALYSIS OF MEDICAL IMAGES WITH INJECTION OF CLINICAL RANKING

Non-Final OA: §101, §103
Filed
Apr 26, 2024
Examiner
WINSTON III, EDWARD B
Art Unit
3683
Tech Center
3600 — Transportation & Electronic Commerce
Assignee
Annalise-Ai Pty Ltd.
OA Round
1 (Non-Final)
20%
Grant Probability
At Risk
1-2
OA Rounds
4y 11m
To Grant
52%
With Interview

Examiner Intelligence

Grants only 20% of cases
20%
Career Allow Rate
74 granted / 370 resolved
-32.0% vs TC avg
Strong +32% interview lift
+31.5%
Interview Lift
across resolved cases with interview
Typical timeline
4y 11m
Avg Prosecution
35 currently pending
Career history
405
Total Applications
across all art units

Statute-Specific Performance

§101
37.1%
-2.9% vs TC avg
§103
39.2%
-0.8% vs TC avg
§102
7.2%
-32.8% vs TC avg
§112
15.9%
-24.1% vs TC avg
Deltas are measured against the Tech Center average estimate • Based on career data from 370 resolved cases
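As a quick sanity check on the table above, the per-statute deltas all back out to the same Tech Center baseline. This is a minimal sketch using only the figures listed above (variable names are illustrative):

```python
# Examiner's per-statute rates and deltas vs. the Tech Center average,
# as listed in the table above (values in percent).
examiner_rate = {"101": 37.1, "103": 39.2, "102": 7.2, "112": 15.9}
delta_vs_tc = {"101": -2.9, "103": -0.8, "102": -32.8, "112": -24.1}

# Implied Tech Center average for each statute: examiner rate minus delta.
tc_average = {s: round(examiner_rate[s] - delta_vs_tc[s], 1) for s in examiner_rate}
print(tc_average)  # every statute implies the same 40.0% baseline
```

Every entry resolves to 40.0%, consistent with a single Tech Center average estimate being used across statutes.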

Office Action

§101 §103
Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Claim Rejections - 35 USC § 101

35 U.S.C. 101 reads as follows: Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.

Claims 1-13 are rejected under 35 U.S.C. 101 because the claimed invention is directed to a judicial exception (i.e., a law of nature, a natural phenomenon, or an abstract idea) without significantly more. Based upon consideration of all of the relevant factors with respect to the claims as a whole, the claims are directed to non-statutory subject matter and do not include additional elements that are sufficient to amount to significantly more than the judicial exception, for the following reasons: Independent Claims 1 and 12-13 are directed to an abstract idea, consisting of method and system claims directed to collecting medical image information, categorizing it, assigning priority rankings, and using those rankings to reorder a worklist. This merely involves organizing, classifying, and prioritizing information and communicating the results.
Independent Claim 1 recites “providing a plurality of visual findings in one or more anatomical images of a subject, wherein a subset of the generated visual findings represents a set of priority findings; providing a first classification list for the plurality of visual findings, to associate a first clinical ranking to the set of priority findings, respectively; providing a second classification list wherein the second classification list is user configurable; assigning a clinical ranking to the set of priority findings, respectively, using at least one of the first and second classification lists; combining the set of priority findings and their respectively assigned clinical ranking to form triage data; and communicating the triage data to a user system configured to process the triage data and to generate, using the triage data, an output that represents a re-ordered worklist.”

Independent Claim 12 recites “transmitting triage data for a plurality of visual findings in one or more anatomical images of a subject, wherein the plurality of visual findings are generated using a convolutional neural network, CNN, component of a neural network.” Independent Claim 13 recites “instructions to execute the method.”

The limitations of Claims 1 and 12-13, as drafted, under their broadest reasonable interpretation, cover the performance of a mental process (concepts performed in the human mind, including an observation, evaluation, judgment, or opinion) but for the recitation of generic computer components. That is, other than reciting “user system, convolutional neural network, processor, computer readable storage medium,” nothing in the claim elements precludes the steps from practically being performed in the mind. For example, but for the “system” language, “providing” in the context of this claim encompasses the user manually retrieving a plurality of visual findings in one or more anatomical images of a subject.

Similarly, providing a first and/or second classification list for the plurality of visual findings covers performance of the limitation in the mind but for the recitation of generic computer components. If a claim limitation, under its broadest reasonable interpretation, covers performance of the limitation in the mind but for the recitation of generic computer components, then it falls within the “Mental Processes” grouping of abstract ideas. Accordingly, the claim recites an abstract idea.

This judicial exception is not integrated into a practical application. In particular, the claims recite the additional elements of using a “user system, convolutional neural network, processor, computer readable storage medium” to perform all of the “obtaining, transforming, parsing, determining, transforming, selecting and storing” steps. The “user system, convolutional neural network, processor, computer readable storage medium” is/are recited at a high level of generality (i.e., as a generic processor performing a generic computer function) of executing computer-executable instructions for implementing the specified logical function(s), such that they amount to no more than mere instructions to apply the exception using a generic computer component. Accordingly, these additional elements do not integrate the abstract idea into a practical application because they do not impose any meaningful limits on practicing the abstract idea. The claim is directed to an abstract idea. Claim 1 has the following additional elements: user system, convolutional neural network. Claim 12 has the following additional elements: processor, computer readable storage medium. Claim 13 has the following additional elements: processor, computer readable storage medium.
Looking to the specification, these components are described at a high level of generality (Page 18, ¶¶ 25-26; computing systems may include conventional personal computer architectures, or other general-purpose hardware platforms). The use of a general-purpose computer, taken alone, does not impose any meaningful limitation on the computer implementation of the abstract idea, so it does not amount to significantly more than the abstract idea. Also, although the claims add “[storage]” steps, these are considered only insignificant extra-solution activity. Looking at the limitations as an ordered combination adds nothing that is not already present when looking at the elements individually. The combination of elements does not indicate a significant improvement to the functioning of a computer or any other technology, and their collective functions merely provide a conventional computer implementation of the abstract idea. Furthermore, the additional elements or combination of elements in the claims, other than the abstract idea per se, amount to no more than a recitation of generally linking the abstract idea to a particular technological environment or field of use, as the courts have found in Parker v. Flook. Therefore, there are no limitations in the claims that transform the judicial exception into a patent-eligible application such that the claims amount to significantly more than the judicial exception.

It is worth noting that the above analysis already encompasses each of the current dependent claims (i.e., claims 2-11). In particular, each of the dependent claims also fails to amount to “significantly more” than the abstract idea, since each dependent claim is directed to a further abstract idea and/or a further conventional computer element/function utilized to facilitate the abstract idea.

Accordingly, none of the current claims implements an element, or a combination of elements, directed to an inventive concept (e.g., none of the current claims recites an element, or a combination of elements, that provides a technological improvement over the existing/conventional technology). These information characteristics do not change the fundamental analogy to the “Mental Processes” grouping of abstract ideas, and, whether viewed individually or as a whole, they do not add anything substantial beyond the abstract idea. Furthermore, the combination of elements does not indicate a significant improvement to the functioning of a computer or any other technology. Therefore, the claims taken as a whole are ineligible for the same reasons as the independent claims. Claims 1-13 are therefore not drawn to eligible subject matter, as they are directed to an abstract idea without significantly more.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103, which forms the basis for all obviousness rejections set forth in this Office action: A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1-6, 9 and 11-13 are rejected under 35 U.S.C. 103 as being unpatentable over WO 2021067624 A1 to PAIK in view of Pub. No. US 20190340753 A1 to Brestel et al.
As per Claim 1, PAIK teaches a method comprising the steps of:

-- providing a plurality of visual findings in one or more anatomical images of a subject, wherein the plurality of visual findings is generated using a convolutional neural network, CNN, component of a neural network, wherein a subset of the generated visual findings represents a set of priority findings (see PAIK paragraph 222; Described herein are systems, software, and methods for facilitating AI-assisted interpretation and reporting of medical images. In some cases, the systems, software, and methods comprise identifying/labeling and/or predicting a diagnosis or severity of medical disorders, conditions, or symptoms (e.g., a pathology). As used herein, pathology may be used to refer to a medical disorder, condition, or symptom, for example, a fracture or disc herniation. In some cases, provided herein is the identification and severity prediction of a handful of orthopedic related findings across different anatomies including, but not limited to, the spine, knee, shoulder, and hip. The relevant findings for the given anatomical area can be generated using a system of machine learning algorithms or models for carrying out image analysis (e.g., image segmentation and/or labeling) such as convolutional neural networks. A neural network system can perform image analysis using a process comprising two main phases. The first phase (optional) is a first machine learning algorithm or model such as a first neural network that is responsible for segmenting out the relevant anatomical structures for the given larger anatomy (see, e.g., FIG. 18), which can be referred to as the segmentation step. The second phase is a second machine learning algorithm or model such as a second neural network that is responsible for taking a region of interest in the given anatomy (e.g., a region identified by the first network) and predicting the presence of one or more findings and optionally their relative severity (see, e.g., FIG. 19), which can be referred to as the findings step);

-- providing a first classification list for the plurality of visual findings, to associate a first clinical ranking to the set of priority findings, respectively; providing a second classification list wherein the second classification list is user configurable; assigning a clinical ranking to the set of priority findings, respectively, using at least one of the first and second classification lists (PAIK paragraph 121; Accordingly, an AI vision system or module as disclosed herein can comprise one or more algorithms or models that provide for numerous possible findings that could be returned at any given point in the image. For example, a given point in the image would have a map of probabilities (or probability-like quantities) for each possible finding. Given the point specified by the user, the AI system can return a rank-ordered list of possible findings in decreasing order. The list may be truncated at a given probability level or length. With a verbal "yes" or "no" command or by clicking on the appropriate button, the user is able to choose whether or not the AI system should automatically generate the text for this finding and insert it into the report. In some cases, the systems, software, and methods disclosed herein are augmented with content-based image retrieval (CBIR) functionality such that the region being queried can be used to find similar images within a pre-populated database with corresponding findings or diagnoses. Then, the user would be able to determine by visual similarity which of the CBIR results matches the current query and, as above, the report text would be automatically generated and inserted into the report. For example, the retrieval of similar images labeled with the true finding may help the user decide what the finding under consideration actually is).

PAIK fails to explicitly teach:

-- combining the set of priority findings and their respectively assigned clinical ranking to form triage data; and

-- communicating the triage data to a user system configured to process the triage data and to generate, using the triage data, an output that represents a re-ordered worklist.

Brestel et al. teaches that anatomical images are arranged into a worklist 358 (corresponding to priority list 222B described with reference to FIG. 2) according to the computed likelihood of visual finding type, for example, ranked in decreasing order according to a ranking score computed based on the computed likelihood of visual finding. For example, ranked according to probability of the respective x-ray depicting the visual finding type indicative of pneumothorax. The anatomical images are accessed for manual review according to worklist 358 by a healthcare provider (e.g., hospital worker, radiologist, clinician) via a client terminal 308. The anatomical images are triaged for review by the healthcare provider according to the most urgent cases, most likely to include the visual finding, for example, pneumothorax, enabling rapid diagnosis and treatment of the acute cases (see Brestel et al. paragraph 84).

Therefore, it would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to include the systems/methods as taught by Brestel et al. within the systems/methods as taught by PAIK, with the motivation of the improvement provided by at least some of the systems, methods, apparatus, and/or code instructions, which may include a reduction in the amount of time for alerting a user (e.g., treating physician) to the presence of a visual finding type in an anatomical image for rapid diagnosis and/or treatment thereof (see Brestel et al. paragraph 47).

Examiner note: The claim recites identifying findings in anatomical images, assigning priority rankings using classification lists, and communicating triage information to a user system. These activities correspond to routine medical review and prioritization practices that can be performed mentally by a clinician. Implementing such known evaluation and triage procedures on a general-purpose computer represents no more than predictable automation using conventional computing functions (e.g., data input, ranking, and output/display) as taught by the prior art of PAIK and Brestel et al. Communicating the triage data to a user system is likewise equivalent to routine transmission or display of information. Accordingly, it is the Examiner's interpretation that PAIK and Brestel et al. teach all of the features as claimed, and the same obviousness reasoning applies to the dependent claims.

As per Claim 2, PAIK and Brestel et al. teach a method according to claim 1, wherein the second classification list takes precedence over the first classification list (see PAIK paragraphs 121 and 222).

As per Claim 3, PAIK and Brestel et al.
teach a method according to claim 1, wherein the clinical ranking is assigned using both the first and second classification lists, and assigning the clinical ranking comprises the steps of: obtaining a first ranking value for a priority finding using the first classification list, obtaining a second ranking value for said priority finding using the second classification list, and providing the second ranking value as the assigned clinical ranking only if the second ranking value is equal to or higher than the first ranking value (see PAIK paragraphs 121 and 222).

As per Claim 4, PAIK and Brestel et al. teach a method according to claim 1, wherein the triage data comprises an indication of priority findings from the plurality of priority findings that correspond to only one of the group of: said priority findings, priority findings that have been assigned a clinical value, a priority finding that has been assigned a highest clinical ranking (see PAIK paragraphs 121 and 222).

As per Claim 5, PAIK and Brestel et al. teach a method according to claim 1, wherein the method further comprises the step of: using the triage data to update a user's worklist corresponding to the plurality of visual findings (see PAIK paragraphs 121 and 222).

As per Claim 6, PAIK and Brestel et al. teach a method according to claim 5, wherein updating the user's worklist is configurable to use only one of the group of: said priority findings, priority findings that have been assigned a clinical value, a priority finding that has been assigned a highest clinical ranking (see PAIK paragraphs 121 and 222).

As per Claim 9, PAIK and Brestel et al. teach a method according to claim 1, wherein the plurality of visual findings and associated first clinical ranking are provided by a server module to an integration layer module (see PAIK paragraphs 9 and 167; This intelligent worklist may be a component of a larger web-based system composed of all the tools a user needs to do their job (e.g., intelligent worklist management can be part of an overall system integrating any combination of the systems/subsystems and modules disclosed herein for various functions relating to image review, analysis, report generation, and management)).

As per Claim 11, PAIK and Brestel et al. teach a method according to claim 1, wherein the integration layer module comprises a database for storing the triage data for a period of time which is configurable by the user (see PAIK paragraphs 121 and 222).

As per Claim 12, Claim 12 is directed to a system for transmitting triage data for a plurality of visual findings in one or more anatomical images of a subject, wherein the plurality of visual findings are generated using a convolutional neural network, CNN, component of a neural network, the system comprising: at least one processor; and at least one computer readable storage medium, accessible by the processor, comprising instructions that, when executed by the processor, cause the processor to execute a method according to any one of the preceding claim 1. Claim 12 recites the same or substantially similar limitations as those addressed above for Claim 1 as taught by PAIK and Brestel et al. Claim 12 is therefore rejected for the same reasons as set forth above for Claim 1.

As per Claim 13, Claim 13 is directed to a non-transitory computer readable storage medium comprising instructions that, when executed by at least one processor, cause the processor to execute a method according to claim 1.
Claim 13 recites the same or substantially similar limitations as those addressed above for Claim 1 as taught by PAIK and Brestel et al. Claim 13 is therefore rejected for the same reasons as set forth above for Claim 1.

Claims 7-8 and 10 are rejected under 35 U.S.C. 103 as being unpatentable over PAIK and Brestel et al. as applied to claims 1-6, 9 and 11-13 above, and further in view of Pub. No. US 20190372978 A1 to Vendrell et al.

As per Claim 7, PAIK and Brestel et al. fail to teach a method according to claim 1, wherein the triage data is provided in a JavaScript Object Notation, JSON, format. Vendrell et al. teaches that when the module receives such requests from the user device web module's JavaScript, it translates them into the hospital systems' native language (such as HL7) and relays them to the appropriate hospital system (see Vendrell et al. paragraph 328). Therefore, it would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to include the systems/methods as taught by Vendrell et al. with the systems/methods as taught by PAIK and Brestel et al., with the motivation that automatically requesting the presentation of secure data on a device that is able to present the secure data using the web-based interface on the first user device may improve productivity (e.g., by decreasing the number of steps utilized to access the secure data), increase accuracy, and/or reduce errors (see Vendrell et al. paragraph 66).

As per Claim 8, PAIK, Brestel et al. and Vendrell et al. teach a method according to claim 7, wherein communicating the triage data comprises converting the triage data to a Health Level 7, HL7, format. Vendrell et al. teaches that when the module receives such requests from the user device web module's JavaScript, it translates them into the hospital systems' native language (such as HL7) and relays them to the appropriate hospital system (see Vendrell et al. paragraph 328). Therefore, it would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to include the systems/methods as taught by Vendrell et al. with the systems/methods as taught by PAIK and Brestel et al., with the motivation that automatically requesting the presentation of secure data on a device that is able to present the secure data using the web-based interface on the first user device may improve productivity (e.g., by decreasing the number of steps utilized to access the secure data), increase accuracy, and/or reduce errors (see Vendrell et al. paragraph 66).

As per Claim 10, PAIK, Brestel et al. and Vendrell et al. teach a method according to claim 9, wherein the visual findings and associated first clinical ranking are provided using a WebSockets protocol. WebSockets technology is currently being developed to address this issue, but it is very immature and not universally supported (see Vendrell et al. paragraph 343). Therefore, it would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to include the systems/methods as taught by Vendrell et al. with the systems/methods as taught by PAIK and Brestel et al., with the motivation that automatically requesting the presentation of secure data on a device that is able to present the secure data using the web-based interface on the first user device may improve productivity (e.g., by decreasing the number of steps utilized to access the secure data), increase accuracy, and/or reduce errors (see Vendrell et al. paragraph 66).

Conclusion

The prior art made of record and not relied upon is considered pertinent to applicant's disclosure.

Pub. No. US 20150142475 A1: The combination of servers allows for modular construction of monitoring from widgets that allow arbitrary display and graphing of data from arbitrary data streams.
Modular construction of monitors is provided by using layouts and modular, re-usable widgets for physiological data and other metrics of health, which may or may not be calculated from other physiological data. The professional user, such as a physician, can define their own customized monitor using the modular construction of monitoring. This can be applied on a patient-by-patient basis. Embodiments provide for real-time transcoding of the CRF data format to formats such as JSON, HTML, XML, TXT, and HL7, as well as other data formats as desired, and provide the ability to decode and encode the CRF format into various other formats. This allows HL7 integration into an internal data stream. HL7 messages can be decoded into the CRF format and be processed like every other data frame in the system.

Pub. No. US 20220414865 A1: Systems and methods for determining a concordance between results of medical assessments are provided. Results of a medical assessment of a first type for an anatomical object of a patient and results of a medical assessment of a second type for the anatomical object are received. The results of the medical assessment of the first type are converted to a hemodynamic measure. A concordance analysis between the results of the medical assessment of the first type and the results of the medical assessment of the second type is performed based on the hemodynamic measure. Results of the concordance analysis are output.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to EDWARD B WINSTON III, whose telephone number is (571) 270-7780. The examiner can normally be reached M-F 1030 to 1830. Examiner interviews are available via telephone, in person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Robert Morgan, can be reached at (571) 272-6773. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/E.B.W/ Examiner, Art Unit 3683
/ROBERT W MORGAN/ Supervisory Patent Examiner, Art Unit 3683
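For readers mapping the claim language to behavior, the core logic discussed in the rejections (claim 3's two-list precedence rule and the claim 1 / Brestel-style re-ordered worklist) can be sketched as follows. This is a minimal illustration, not code from the application: the function names, data shapes, and the numeric higher-is-more-urgent ranking convention are all assumptions.

```python
from typing import Optional

def assign_clinical_ranking(first_rank: int, second_rank: Optional[int]) -> int:
    """Claim 3's rule as characterized in the Office Action: use the
    user-configurable second list's value only when it is equal to or
    higher than the first (default) list's value."""
    if second_rank is not None and second_rank >= first_rank:
        return second_rank
    return first_rank

def reorder_worklist(findings):
    """Claim 1 / Brestel paragraph 84: combine priority findings with their
    assigned clinical rankings to form triage data, then order the worklist
    in decreasing order of ranking (most urgent first)."""
    triage_data = [
        (name, assign_clinical_ranking(first, second))
        for name, first, second in findings
    ]
    return sorted(triage_data, key=lambda item: item[1], reverse=True)

# Hypothetical findings: (name, default-list rank, user-list rank or None).
worklist = reorder_worklist([
    ("pneumothorax", 3, 5),      # user list raises urgency, so 5 is used
    ("fracture", 4, 2),          # user value is lower, so default 4 is kept
    ("disc herniation", 1, None) # no user override
])
print(worklist)  # [('pneumothorax', 5), ('fracture', 4), ('disc herniation', 1)]
```

The sketch shows why the examiner characterizes the steps as a sort over ranked findings; whether that characterization fairly captures the claimed integration is, of course, the point in dispute.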

Prosecution Timeline

Apr 26, 2024
Application Filed
Dec 25, 2025
Non-Final Rejection — §101, §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12592309
AUTOMATED DETECTION OF LUNG CONDITIONS FOR MONITORING THORACIC PATIENTS UNDERGOING EXTERNAL BEAM RADIATION THERAPY
2y 5m to grant Granted Mar 31, 2026
Patent 12548648
A METHOD OF TREATMENT OR PROPHYLAXIS
2y 5m to grant Granted Feb 10, 2026
Patent 12488878
Aligning Image Data of a Patient with Actual Views of the Patient Using an Optical Code Affixed to the Patient
2y 5m to grant Granted Dec 02, 2025
Patent 12205698
ADVISING DIABETES MEDICATIONS
2y 5m to grant Granted Jan 21, 2025
Patent 12046350
METHODS AND SYSTEMS FOR CALCULATING AN EDIBLE SCORE IN A DISPLAY INTERFACE
2y 5m to grant Granted Jul 23, 2024
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

1-2
Expected OA Rounds
20%
Grant Probability
52%
With Interview (+31.5%)
4y 11m
Median Time to Grant
Low
PTA Risk
Based on 370 resolved cases by this examiner. Grant probability derived from career allow rate.
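The headline projections above follow from simple arithmetic on the examiner's career figures; a sketch of the derivation (variable names are illustrative):

```python
# Career figures listed above: 74 granted out of 370 resolved cases.
granted, resolved = 74, 370
career_allow_rate = 100 * granted / resolved     # 20.0 -> the 20% grant probability

# Interview lift is reported as +31.5 percentage points.
interview_lift = 31.5
with_interview = career_allow_rate + interview_lift  # 51.5, shown rounded as 52%
print(career_allow_rate, round(with_interview))
```

This matches the displayed 20% baseline and 52% with-interview figures, confirming the dashboard's stated method of deriving grant probability from the career allow rate.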
