Prosecution Insights
Last updated: April 19, 2026
Application No. 19/023,907

TECHNIQUE FOR GENERATING A MEDICAL REPORT

Non-Final OA: §101, §102, §103
Filed: Jan 16, 2025
Examiner: RASNIC, HUNTER J
Art Unit: 3684
Tech Center: 3600 — Transportation & Electronic Commerce
Assignee: Deepc GmbH
OA Round: 1 (Non-Final)

Grant Probability: 11% (At Risk)
OA Rounds: 1-2
To Grant: 4y 7m
With Interview: 32%

Examiner Intelligence

Career Allow Rate: 11% (grants only 11% of cases; 9 granted / 81 resolved; -40.9% vs TC avg)
Interview Lift: +20.5% for resolved cases with interview
Avg Prosecution: 4y 7m typical timeline; 41 applications currently pending
Career History: 122 total applications across all art units
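The headline figures above can be cross-checked from the raw counts. A small sketch; note the interview-adjusted number is an assumption here (simply adding the reported +20.5% lift to the baseline, which rounds to the 32% shown):

```python
# Reproduce the dashboard's headline examiner statistics from raw counts.
granted = 9       # granted cases (from "9 granted / 81 resolved")
resolved = 81     # resolved cases

career_allow_rate = granted / resolved        # baseline allow rate
interview_lift = 0.205                        # reported +20.5% lift
with_interview = career_allow_rate + interview_lift  # assumed simple addition

print(f"Career allow rate: {career_allow_rate:.1%}")  # 11.1%
print(f"With interview:    {with_interview:.1%}")     # 31.6%
```

The baseline works out to 11.1%, matching the 11% shown, and the interview-adjusted 31.6% rounds to the dashboard's 32%.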

Statute-Specific Performance

§101: 39.1% (-0.9% vs TC avg)
§103: 37.3% (-2.7% vs TC avg)
§102: 16.2% (-23.8% vs TC avg)
§112: 6.8% (-33.2% vs TC avg)

Deltas are vs. the Tech Center average estimate • Based on career data from 81 resolved cases
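Backing the Tech Center average out of each rate and its reported delta shows the deltas are all consistent with a single 40.0% TC baseline. A quick check (treating the figures as plain percentage rates is an assumption; the dashboard does not define them further):

```python
# Recover the implied Tech Center average from each statute-specific rate
# and its reported delta (both in percentage points).
rates = {
    "§101": (39.1, -0.9),
    "§103": (37.3, -2.7),
    "§102": (16.2, -23.8),
    "§112": (6.8, -33.2),
}
tc_avg = {s: round(rate - delta, 1) for s, (rate, delta) in rates.items()}
print(tc_avg)  # every statute implies the same 40.0 baseline
```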

Office Action

§101 §102 §103
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Priority

Receipt is acknowledged of certified copies of papers required by 37 CFR 1.55 in parent application 18/031,648. Acknowledgement is made of applicant's claim for foreign priority to 19 October 2020 under 35 U.S.C. 119(a)-(d).

Status of Claims

Claims 1-17 received on 16 January 2025 are currently pending and being considered by the Examiner in this Office Action.

Drawings

The drawings are objected to because Figs. 4 & 7 are illegible. Corrected drawing sheets in compliance with 37 CFR 1.121(d) are required in reply to the Office action to avoid abandonment of the application. Any amended replacement drawing sheet should include all of the figures appearing on the immediate prior version of the sheet, even if only one figure is being amended. The figure or figure number of an amended drawing should not be labeled as "amended." If a drawing figure is to be canceled, the appropriate figure must be removed from the replacement sheet, and where necessary, the remaining figures must be renumbered and appropriate changes made to the brief description of the several views of the drawings for consistency. Additional replacement sheets may be necessary to show the renumbering of the remaining figures. Each drawing sheet submitted after the filing date of an application must be labeled in the top margin as either "Replacement Sheet" or "New Sheet" pursuant to 37 CFR 1.121(d). If the changes are not accepted by the examiner, the applicant will be notified and informed of any required corrective action in the next Office action. The objection to the drawings will not be held in abeyance.

Claim Rejections - 35 USC § 101

35 U.S.C.
101 reads as follows:

Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.

Claims 1-17 are rejected under 35 U.S.C. 101 because the claimed invention is directed to a judicial exception (i.e., a law of nature, a natural phenomenon, or an abstract idea) without significantly more.

The claims recite subject matter within a statutory category as a process (claims 1-14), a machine (claim 15), and a manufacture (claims 16-17) which recite steps of: providing an interactive display of a medical image based on a medical report of a patient, comprising: displaying a medical image of the patient in a first portion of a display; and at least one of the following: in response to a user selecting a region in the displayed medical image, the region associated with a medical finding included in the medical report, displaying a textual representation of the medical finding in a second portion of the display; displaying a textual representation of a medical finding in a second portion of the display, the medical finding included in the medical report and associated with a region in the medical image, and in response to a user selecting the displayed textual representation, displaying, in the first portion of the display, an indicator of the region.

These steps of providing an interactive display of a medical image based on a medical report of a patient, displaying a medical image of the patient, receiving user selection of a region in the displayed medical image, displaying a textual representation of a medical finding, and displaying an indicator of the region selected by the user in response to a user selecting the textual representation, as drafted, under the broadest reasonable interpretation, include methods of organizing human activity.
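The claim-1 steps recited above describe a bidirectional link between image regions and report findings: selecting a region surfaces its finding text, and selecting a finding indicates its region. A minimal sketch of that interaction model; all names and data are hypothetical illustrations, not taken from the application's specification:

```python
from dataclasses import dataclass

@dataclass
class Finding:
    region_id: str  # region of the medical image the finding refers to
    text: str       # textual representation shown in the report panel

# Hypothetical report contents, for illustration only.
report = [
    Finding("region-1", "Nodule in the right upper lobe"),
    Finding("region-2", "No acute osseous abnormality"),
]

def on_region_selected(region_id: str) -> str:
    """User selects a region in the image panel: show the finding's text."""
    return next(f.text for f in report if f.region_id == region_id)

def on_text_selected(text: str) -> str:
    """User selects a finding's text: indicate the associated region."""
    return next(f.region_id for f in report if f.text == text)

print(on_region_selected("region-1"))                    # Nodule in the right upper lobe
print(on_text_selected("No acute osseous abnormality"))  # region-2
```

The same one-to-one mapping drives both directions of the claimed "at least one of the following" alternatives.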
Dependent claims recite additional subject matter which further narrows or defines the abstract idea embodied in the claims (such as claims 2-17, reciting particular aspects of how medical report generation and/or training of learning algorithms may be performed in the mind but for recitation of generic computer components).

This judicial exception is not integrated into a practical application. In particular, the additional elements do not integrate the abstract idea into a practical application, other than the abstract idea per se, because the additional elements amount to no more than limitations which: amount to mere instructions to apply an exception (such as recitation of an interactive display amounts to invoking computers as a tool to perform the abstract idea, see Applicant's specification [0008] for a display, see MPEP 2106.05(f)); add insignificant extra-solution activity to the abstract idea (such as recitation of receiving user selection of a region in the displayed medical image and receiving user selection of the displayed textual representation amounts to mere data gathering, recitation of displaying various content based on received data amounts to selecting a particular data source or type of data to be manipulated, recitation of displaying medical images, medical reports, and/or textual representations therein amounts to insignificant application, see MPEP 2106.05(g); displaying a medical image of the patient in a first portion of a display, displaying a textual representation of the medical finding in a second portion of the display, displaying, in the first portion of the display, an indicator of the region amounts to gathering and analyzing information using conventional techniques and displaying the result, see MPEP 2106.05(a)(II)(iii), i.e.
TLI Communications, 823 F.3d at 612-13, 118 USPQ2d at 1747-48); generally link the abstract idea to a particular technological environment or field of use (such as recitation of medical images, reports, and/or medical findings, relating the steps recited to the medical field, see MPEP 2106.05(h)).

Dependent claims recite additional subject matter which amount to limitations consistent with the additional elements in the independent claims (such as claims 2-17, which recite limitations such as a display, an artificial intelligence (AI) program module, a processor, a memory, a non-transitory computer program product, computer readable recording media, additional limitations which amount to invoking computers as a tool to perform the abstract idea, see Applicant's Specification [0008] for a display; see Spec [0096]-[0097] for an AI program module; see Spec [0072] for a processor; see Spec [0057] for a memory; see Spec [0031] for a non-transitory computer program product; see Spec [0031] for a computer readable recording media, see MPEP 2106.05(f); claims 2-5, 7, 13, which recite limitations relating to receiving user input, such as to define a region/regions in the medical image, define medical findings, designate one or more textual representations, additional limitations which add insignificant extra-solution activity to the abstract idea which amounts to mere data gathering; claims 2-4, 7-9, 13, which recite limitations relating to updating one or more medical reports, parameters therein, etc., based on received user inputs, data, findings, etc., additional limitations which add insignificant extra-solution activity to the abstract idea by selecting a particular data source or type of data to be manipulated; claims 2-17, which generally recite limitations therein for performance in the medical field or for medical report generation, additional limitations which generally link the abstract idea to a particular technological environment or field of use; claims 2-4, 6-7, which
recite limitations relating to displaying content and/or updating varying content based on received user input/data, and/or claims 8-12 which recite limitations relating to training and/or utilizing an AI program module for making determinations of medical findings, textual representations, etc., and/or claim 14 which recites limitations relating to storing a medical finding as a node of a graph in a knowledge graph representation/database, additional limitations which amount to insignificant application; claims 2-4, 6-7, & 13, which recite limitations relating to displaying regions, indicators, etc., in a user interface, additional limitations which amount to gathering and analyzing information using conventional techniques and displaying the result, see MPEP 2106.05(a)(II)(iii), i.e. TLI Communications, 823 F.3d at 612-13, 118 USPQ2d at 1747-48).

Looking at the limitations as an ordered combination adds nothing that is not already present when looking at the elements taken individually. There is no indication that the combination of elements improves the functioning of a computer or improves any other technology. Their collective functions merely provide conventional computer implementation and do not impose a meaningful limit to integrate the abstract idea into a practical application.

The claims do not include additional elements that are sufficient to amount to significantly more than the judicial exception. As discussed above with respect to integration of the abstract idea into a practical application, the additional elements amount to no more than mere instructions to apply an exception, add insignificant extra-solution activity to the abstract idea, and generally link the abstract idea to a particular technological environment or field of use.
Additionally, the additional limitations, other than the abstract idea per se, amount to no more than limitations which: amount to elements that have been recognized as well-understood, routine, and conventional activity in particular fields (such as receiving user selection of a region in the displayed medical image and receiving user selection of the displayed textual representation, e.g., receiving or transmitting data over a network, Symantec, MPEP 2106.05(d)(II)(i); determining various content to display based on received data/user input, e.g., performing repetitive calculations, Flook, MPEP 2106.05(d)(II)(ii); updating one or more program (e.g., display) instructions based on received user inputs, updating one or more medical images, findings, and/or textual representations based on received data/user inputs, e.g., electronic recordkeeping, Alice Corp., MPEP 2106.05(d)(II)(iii); storing computerized instructions in a memory for performance of the steps recited, storing one or more received user inputs, storing one or more medical images, storing one or more medical findings, storing one or more textual representations, e.g., storing and retrieving information in memory, Versata Dev. Group, MPEP 2106.05(d)(II)(iv); receiving user input, which under BRI, could include UI or GUI implementation which would include one or more buttons or interfaces for collecting user inputs, e.g., a web browser's back and forward button functionality, Internet Patent Corp., MPEP 2106.05(d)(II)(ii); displaying medical images, medical findings/indicators, medical reports, and/or textual representations, see Wu Par [0080], Reicher Par [0004] & [0035]-[0036], and Tao Par [0116] which all demonstrate the well-understood, routine, conventional nature of displaying said content for medical report generation/analysis).
Dependent claims recite additional subject matter which, as discussed above with respect to integration of the abstract idea into a practical application, amount to invoking computers as a tool to perform the abstract idea. Dependent claims recite additional subject matter which amount to limitations consistent with the additional elements in the independent claims (such as claims 2-17, additional limitations which amount to elements that have been recognized as well-understood, routine, and conventional activity in particular fields, claims 2-5, 7, 13, which recite limitations relating to receiving user input, such as to define a region/regions in the medical image, define medical findings, designate one or more textual representations, e.g., receiving or transmitting data over a network, Symantec, MPEP 2106.05(d)(II)(i); claims 2-4, 7-9, 13, which recite limitations relating to updating one or more medical reports, parameters therein, etc., based on received user inputs, data, findings, etc., e.g., performing repetitive calculations, Flook, MPEP 2106.05(d)(II)(ii); claims 2-4, 7-9, 13, which recite limitations relating to updating, i.e. upkeeping and/or maintaining one or more medical reports, parameters therein, etc., based on received user inputs, data, findings, etc., e.g., electronic recordkeeping, Alice Corp., MPEP 2106.05(d)(II)(iii); claims 2-17, which recite storing computerized instructions for performance of the steps recited throughout the claims, storing one or more received user inputs/data/medical findings, etc., storing a medical finding as a node of a graph in a graph database, e.g. knowledge graph, e.g., storing and retrieving information in memory, Versata Dev.
Group, MPEP 2106.05(d)(II)(iv); claims 2-5, 7, 13, which recite limitations relating to receiving user input, such as to define a region/regions in the medical image, define medical findings, designate one or more textual representations, which under BRI, could include UI or GUI implementation which would include one or more buttons or interfaces for collecting user inputs, e.g., a web browser's back and forward button functionality, Internet Patent Corp., MPEP 2106.05(d)(II)(ii); claims 2-4, 6-7, & 13, which recite limitations relating to displaying regions, indicators, medical images, medical findings/indicators, medical reports, and/or textual representations in a user interface, see Wu Par [0080], Reicher Par [0004] & [0035]-[0036], and Tao Par [0116] which all demonstrate the well-understood, routine, conventional nature of displaying said content for medical report generation/analysis; claims 8-12 which recite limitations relating to training and/or utilizing an AI module for making medical finding determinations and/or textual representation determinations, see Wu Par [0015]-[0018] which describes the generally well-understood, well-known nature of applying artificial intelligence to identify anomalies in medical images and/or medical records; claim 14 which recites limitations relating to storing a medical finding as a node of a graph in a knowledge graph representation/database, see Tao Par [0044] & [0055] which demonstrate the well-understood, routine, and/or conventional nature of generating an image semantic representation knowledge graph according to a standardized dictionary library in the field of images and historically accumulated medical image report analysis).

Looking at the limitations as an ordered combination adds nothing that is not already present when looking at the elements taken individually. There is no indication that the combination of elements improves the functioning of a computer or improves any other technology.
Their collective functions merely provide conventional computer implementation.

Claim Rejections – 35 USC § 102

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:

A person shall be entitled to a patent unless –

(a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.

(a)(2) the claimed invention was described in a patent issued under section 151, or in an application for patent published or deemed published under section 122(b), in which the patent or application, as the case may be, names another inventor and was effectively filed before the effective filing date of the claimed invention.

Claims 1, 8, 11, & 15-17 are rejected under 35 U.S.C. 102(a)(2) as being anticipated by Wu et al. (U.S. Patent Publication No. 2021/0313045), hereinafter "Wu".

Claim 1 – Regarding Claim 1, Wu discloses the method for providing an interactive display of a medical image based on a medical report of a patient, the method comprising: displaying a medical image of the patient in a first portion of a display (See Wu Par [0080] which discloses a system for generating annotated and labeled medical images for viewing via a medical image viewer computing system, so that a human user may view the annotated and labeled medical images that are rendered therein; See Wu Par [0092] and Fig.
1B, Step 190 which discloses generating a final set of medical images with the final bounding region annotations and corresponding anomaly labels that is output to the computer system); and at least one of the following: in response to a user selecting a region in the displayed medical image, the region associated with a medical finding included in the medical report, displaying a textual representation of the medical finding in a second portion of the display (See Wu Par [0034] which discloses the associated medical imaging report is used to label, i.e. define, each standardized anatomical region/zone identified by the bounding regions, in the bounding region annotated medical image, as positive or negative for the corresponding anomaly (finding), resulting in a set of labeled, i.e. defined, bounding regions per medical image in the medical imaging study; See Wu Par [0064] which discloses allowing subject matter experts to manually generate bounding regions on a selected subset of medical images, and further allows a radiologist or other user to annotate said bounding regions); displaying a textual representation of a medical finding in a second portion of the display (See Wu Par [0080] which discloses a system for generating annotated and labeled medical images for viewing via a medical image viewer computing system, so that a human user may view the annotated and labeled medical images that are rendered therein), the medical finding included in the medical report and associated with a region in the medical image (See Wu Par [0034] which discloses the associated medical imaging report is used to label, i.e. define, each standardized anatomical region/zone identified by the bounding regions, in the bounding region annotated medical image, as positive or negative for the corresponding anomaly (finding), resulting in a set of labeled, i.e. 
defined, bounding regions per medical image in the medical imaging study), and in response to a user selecting the displayed textual representation, displaying, in the first portion of the display, an indicator of the region (See MPEP 2111.04(II) which states that the broadest reasonable interpretation (BRI) of a method (or process) claim having contingent limitations requires only those steps that must be performed and does not include steps that are not required to be performed because the conditions precedent are not met under BRI, therefore it is understood by Examiner that this contingent limitation does not have to be disclosed by Wu to be fully met by Wu, because the contingency of "a user selecting the displayed textual representation" never has to occur under BRI; however, for the sake of advancing prosecution, see Wu Par [0082] which discloses the system automatically generating annotated and labeled medical images and correlating said labels with the automatically generated bounding region annotations to thereby automatically generate annotated and labeled medical images; and while not in response to a user selection per se, see Wu Par [0080] which discloses an outline of a bounding region having a corresponding label indicating, i.e. indicator, that the bounding region is positive for an anomaly, such as a conspicuous/red color, while other bounding regions that are not positive for anomalies may have a different, less conspicuous coloring; while not relied upon since it is understood by Examiner that this represents a contingent limitation, see Tao Par [0041] which discloses image semantic representation corresponding to a lesion region being able to be viewed simultaneously by clicking a hyperlink, i.e. textual representation).

Claim 8 – Regarding Claim 8, Wu discloses the method of claim 1 in its entirety.
Wu further discloses a method, wherein: updating the medical report by adding, in the medical report, a medical finding, the medical finding being determined by an artificial intelligence, AI, program module (See Wu Par [0035]-[0036] which discloses that after identifying the bounding regions of the standardized anatomical zones of the masked anatomical structures, the medical imaging report analysis mechanisms identify instances of references to anomalies in the text of the medical imaging report and identify the standardized anatomical zones corresponding to these identified instances of anomalies, i.e. a first and second since multiple identified instances of anomalies and references of anomalies are generated; See Wu Par [0081] which discloses training of a downstream AI and cognitive computing system, such that the AI can be designed to perform medical image analysis, such as anomaly detection and location in non-annotated and non-labeled medical images; See Wu Par [0082]-[0084] which further discloses that the AI/cognitive system can automatically annotate and label medical images based on standardized anatomical regions/zones and are able to correct the resulting bounding regions based on expected geometries and expected correspondence between medical image features and the standardized anatomical regions/zones; See Wu Par [0056]-[0057], [0076], & [0083] which discloses the automated medical image annotation and labeling (AMIAL) pipeline having a medical imaging report based anomaly labeling stage such that the system locally labels abnormalities but also actually labels whether each standardized anatomical zone is normal or not).

Claim 11 – Regarding Claim 11, Wu discloses the method of claim 8 in its entirety.
Wu further discloses a method, wherein: using at least one of the added medical finding and the textual representation of the added medical finding as training data for training the AI module (See Wu Par [0035]-[0036] which discloses that after identifying the bounding regions of the standardized anatomical zones of the masked anatomical structures, the medical imaging report analysis mechanisms identify instances of references to anomalies in the text of the medical imaging report and identify the standardized anatomical zones corresponding to these identified instances of anomalies, i.e. a first and second since multiple identified instances of anomalies and references of anomalies are generated; See Wu Par [0081] which discloses training of a downstream AI and cognitive computing system, such that the AI can be designed to perform medical image analysis, such as anomaly detection and location in non-annotated and non-labeled medical images; See Wu Par [0082]-[0084] which further discloses that the AI/cognitive system can automatically annotate and label medical images based on standardized anatomical regions/zones and are able to correct the resulting bounding regions based on expected geometries and expected correspondence between medical image features and the standardized anatomical regions/zones; See Wu Par [0056]-[0057], [0076], & [0083] which discloses the automated medical image annotation and labeling (AMIAL) pipeline having a medical imaging report based anomaly labeling stage such that the system locally labels abnormalities but also actually labels whether each standardized anatomical zone is normal or not).
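The claim-8/claim-11 loop discussed above (a finding added to the report is reused as training data for the AI module) can be sketched as follows. All names and structures are hypothetical illustrations; Wu's AMIAL pipeline is not implemented here:

```python
# Sketch: an added medical finding updates the report (claim 8) and is
# simultaneously captured as a (region, text) training pair (claim 11).
training_data: list[tuple[str, str]] = []

def add_finding(report: list[dict], region_id: str, text: str) -> None:
    """Add a finding to the report and reuse it as training data."""
    report.append({"region": region_id, "text": text})  # update the medical report
    training_data.append((region_id, text))             # finding + text as a training pair

medical_report: list[dict] = []
add_finding(medical_report, "left-lower-zone", "opacity consistent with effusion")
print(training_data)  # [('left-lower-zone', 'opacity consistent with effusion')]
```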
Claim 15 – Regarding Claim 15, Wu discloses an apparatus comprising: at least one processor (See Wu Par [0045]-[0048] which discloses a computer readable storage medium that can retain and store instructions for use by a processor/instruction execution device; See analysis of Claim 1 above); at least one memory, the at least one memory containing instructions executable by the at least one processor such that the apparatus is operable to perform the method of claim 1 (See Wu Par [0045]-[0048] which discloses a computer readable storage medium that can retain and store instructions for use by a processor/instruction execution device; See analysis of Claim 1 above).

Claim 16 – Regarding Claim 16, Wu discloses a non-transitory computer program product comprising: program code portions for performing the method of claim 1 when the computer program product is executed on one or more processors (See Wu Par [0045]-[0048] which discloses a computer readable storage medium that can retain and store instructions for use by a processor/instruction execution device; See analysis of Claim 1 above).

Claim 17 – Regarding Claim 17, Wu discloses the non-transitory computer program product of claim 16 in its entirety. Wu further discloses a product, wherein: the non-transitory computer program product is stored on one or more computer readable recording medium (See Wu Par [0045]-[0048] which discloses a computer readable storage medium that can retain and store instructions for use by a processor/instruction execution device; See analysis of Claim 1 above).

Claim Rejections – 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C.
102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:

1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.

Claims 2-7, 9-10, & 12-13 are rejected under 35 U.S.C. 103 as being unpatentable over Wu in view of Reicher et al. (U.S. Patent Publication No. 2018/0055468), hereinafter "Reicher".

Claim 2 – Regarding Claim 2, Wu discloses the method of claim 1 in its entirety. Wu further discloses a method, further comprising: updating the medical report by adding, in the medical report, a medical finding based on user input (See Wu Par [0034] which discloses the associated medical imaging report is used to label, i.e.
define, each standardized anatomical region/zone identified by the bounding regions, in the bounding region annotated medical image, as positive or negative for the corresponding anomaly (finding), resulting in a set of labeled, i.e. defined, bounding regions per medical image in the medical imaging study; See Wu Par [0081] which discloses training of a downstream AI and cognitive computing system, such that the AI can be designed to perform medical image analysis, such as anomaly detection and location in non-annotated and non-labeled medical images; See Wu Par [0082]-[0084] which further discloses that the AI/cognitive system can automatically annotate and label medical images based on standardized anatomical regions/zones and are able to correct the resulting bounding regions based on expected geometries and expected correspondence between medical image features and the standardized anatomical regions/zones).

While Wu generally discloses the automated streamlining of performing medical image analysis, such as anomaly detection and location in non-annotated and non-labeled medical images to automatically annotate and label medical images based on standardized anatomical regions/zones and correct the resulting bounding regions based on expected geometries and expected correspondence between medical image features and the standardized anatomical regions/zones, and these features being trained on previously generated medical reports that are generated based on user inputs, Wu does not explicitly mention updating, i.e. generating a new, medical report by adding a medical finding based on user input, such as after the medical image analysis steps are performed.
However, Reicher discloses updating the medical report by adding, in the medical report, a medical finding based on user input (See Reicher Par [0005] which discloses updating the first annotation displayed within the medical image to display the first annotation in a first manner different from a second manner used to display a second annotation within the medical image not mapped to any location within the electronic structured report, and specifically mentions at Reicher Par [0076]-[0077] that the reporting application is configured to automatically update a structured report based on modifications to existing annotations and/or manually modifying a structured report by a user, and these modifications occurring based on various user input such as clicking on, hovering over, or otherwise selecting an annotation within the image).

The disclosure of Reicher is directly applicable to the disclosure of Wu because the disclosures share limitations and capabilities, such as being directed to the management of medical images and medical report generation for said medical images. It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the disclosure of Wu, which already discloses performing medical image analysis, such as anomaly detection and location in non-annotated and non-labeled medical images to automatically annotate and label medical images based on standardized anatomical regions/zones, to further specifically include generating a new/updated medical report by adding a medical finding based on user input, such as after the medical image analysis steps are performed, as disclosed by Reicher, because as a reader/user generates, adjusts, and/or changes an annotation for a medical image, the medical report needs to be updated to reflect said new annotation (See Reicher Par [0076]-[0077]).

Claim 3 – Regarding Claim 3, Wu and Reicher disclose the method of claim 2 in its entirety.
Wu further discloses a method, further comprising: the added medical finding is associated with a region defined by the user in the displayed medical image (See Wu Par [0082] which discloses the system automatically generating annotated and labeled medical images and correlating said labels with the automatically generated bounding region annotations to thereby automatically generate annotated and labeled medical images). Claim 4 – Regarding Claim 4, Wu and Reicher disclose the method of claim 2 in its entirety. Wu further discloses a method, further comprising: displaying, in the first portion of the display, a plurality of indicators of different regions (See Wu Par [0022]-[0024] which discloses an automated computer tool that performs computerized annotations of medical images with bounding regions for the localization of anomalies in medical images; See Wu Fig. 2A-2B which discloses indicators of different regions of interest; See Wu Par [0034] which discloses the associated medical imaging report is used to label, i.e. define, each standardized anatomical region/zone identified by the bounding regions, in the bounding region annotated medical image, as positive or negative for the corresponding anomaly (finding), resulting in a set of labeled, i.e. defined, bounding regions per medical image in the medical imaging study), wherein the added medical finding is associated with a group of regions, the group of regions comprising a set of the different regions selected by the user (See Wu Par [0022]-[0024] which discloses an automated computer tool that performs computerized annotations of medical images with bounding regions for the localization of anomalies in medical images). Claim 5 – Regarding Claim 5, Wu and Reicher disclose the method of claim 2 in its entirety. Wu further discloses a method, further comprising: the added medical finding is defined by the user (See Wu Par [0034] which discloses the associated medical imaging report is used to label, i.e. 
define, each standardized anatomical region/zone identified by the bounding regions, in the bounding region annotated medical image, as positive or negative for the corresponding anomaly (finding), resulting in a set of labeled, i.e. defined, bounding regions per medical image in the medical imaging study). Claim 6 – Regarding Claim 6, Wu discloses the method of claim 1 in its entirety. Wu and Reicher further disclose a method, wherein: at least one additional textual representation of a different medical finding associated with the region is displayed along with the textual representation of the medical finding in the second portion of the display (See Wu Par [0034] which discloses the associated medical imaging report is used to label, i.e. define, each standardized anatomical region/zone identified by the bounding regions, in the bounding region annotated medical image, as positive or negative for the corresponding anomaly (finding), resulting in a set of labeled, i.e. defined, bounding regions per medical image in the medical imaging study; See Wu Par [0035]-[0036] which discloses bounding regions in the original image are generated based on modified masks to delineate the different standardized anatomical zones in the original image with annotations of the bounding regions, such that having identified the bounding regions of the standardized anatomical zones of the masked anatomical structures, the medical imaging report analysis mechanisms identify instances of references to anomalies in the text of the medical imaging report and identify the standardized anatomical zones corresponding to these identified instances of anomalies, i.e. 
a first and second since multiple identified instances of anomalies and references of anomalies are generated), wherein the different medical finding is included in the medical report (See Wu Par [0035]-[0036] which discloses that after identifying the bounding regions of the standardized anatomical zones of the masked anatomical structures, the medical imaging report analysis mechanisms identify instances of references to anomalies in the text of the medical imaging report and identify the standardized anatomical zones corresponding to these identified instances of anomalies, i.e. a first and second since multiple identified instances of anomalies and references of anomalies are generated; See Reicher Par [0005] which discloses updating the first annotation displayed within the medical image to display the first annotation in a first manner different from a second manner used to display a second annotation within the medical image not mapped to any location within the electronic structured report, and specifically mentions at Reicher Par [0076]-[0077] that the reporting application is configured to automatically update a structured report based on modifications to existing annotations and/or manually modifying a structured report by a user, and these modifications occurring based on various user input such as clicking on, hovering over, or otherwise selecting an annotation within the image). 
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the disclosure of Wu, which already discloses performing medical image analysis, such as anomaly detection and location in non-annotated and non-labeled medical images, to automatically annotate and label medical images based on standardized anatomical regions/zones, to further specifically include generating a new/updated medical report by adding a medical finding based on user input, such as after the medical image analysis steps are performed, as disclosed by Reicher, because as a reader/user generates, adjusts, and/or changes an annotation for a medical image, the medical report needs to be updated to reflect said new annotation (See Reicher Par [0076]-[0077]). Claim 7 – Regarding Claim 7, Wu and Reicher disclose the method of claim 6 in its entirety. Wu and Reicher further disclose a method, wherein: in response to the user designating one or more of the textual representation and the at least one additional textual representation, updating the medical report by removing, from the medical report, any of the medical findings represented by a textual representation not designated by the user (See MPEP 2111.04(II) which states that the broadest reasonable interpretation (BRI) of a method (or process) claim having contingent limitations requires only those steps that must be performed and does not include steps that are not required to be performed because the conditions precedent are not met under BRI, therefore it is understood by Examiner that this contingent limitation does not have to be disclosed by Wu and/or Reicher to be fully met, because the contingency of “the user designating one or more of the textual representation and the at least one additional textual representation” never actually has to occur under BRI; however, for purposes of advancing prosecution, see Wu Par [0034] which discloses the associated medical imaging report 
is used to label, i.e. define, each standardized anatomical region/zone identified by the bounding regions, in the bounding region annotated medical image, as positive or negative for the corresponding anomaly (finding), resulting in a set of labeled, i.e. defined, bounding regions per medical image in the medical imaging study; See Wu Par [0081] which discloses training of a downstream AI and cognitive computing system, such that the AI can be designed to perform medical image analysis, such as anomaly detection and location in non-annotated and non-labeled medical images; See Wu Par [0082]-[0084] which further discloses that the AI/cognitive system can automatically annotate and label medical images based on standardized anatomical regions/zones and is able to correct the resulting bounding regions based on expected geometries and expected correspondence between medical image features and the standardized anatomical regions/zones; See Reicher Par [0005] which discloses updating the first annotation displayed within the medical image to display the first annotation in a first manner different from a second manner used to display a second annotation within the medical image not mapped to any location within the electronic structured report, and specifically mentions at Reicher Par [0076]-[0077] that the reporting application is configured to automatically update a structured report based on modifications to existing annotations and/or manually modifying a structured report by a user, and these modifications occurring based on various user input such as clicking on, hovering over, or otherwise selecting an annotation within the image; See Reicher Par [0082] which discloses executing a stored rule/automated action based on whether the lesion is labeled one or more times in the other medical images, i.e. not designated by the current user or viewer of the medical image, i.e. such that at least one automatic action may include deleting the annotation). 
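To make the contingent behavior recited in Claim 7 concrete, the following is a minimal, hypothetical Python sketch (all names and the finding ids are illustrative inventions, not drawn from Wu or Reicher) of updating a report by removing any finding whose textual representation the user did not designate:

```python
def update_report(report_findings, designated):
    """Keep only findings whose textual representation the user designated.

    report_findings: dict mapping finding id -> textual representation
    designated: set of finding ids the user selected in the display
    """
    return {fid: text for fid, text in report_findings.items() if fid in designated}

report = {
    "f1": "Nodule in right upper zone",
    "f2": "Cardiomegaly",
    "f3": "Pleural effusion",
}
updated = update_report(report, designated={"f1", "f3"})
# → {'f1': 'Nodule in right upper zone', 'f3': 'Pleural effusion'}
```

Under BRI, of course, this step need only occur when the user actually designates one or more representations; the sketch simply illustrates the designate-then-filter flow the claim language describes.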
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the disclosure of Wu, which already discloses performing medical image analysis, such as anomaly detection and location in non-annotated and non-labeled medical images, to automatically annotate and label medical images based on standardized anatomical regions/zones, to further specifically include generating a new/updated medical report by adding a medical finding based on user input, such as after the medical image analysis steps are performed, as disclosed by Reicher, because as a reader/user generates, adjusts, and/or changes an annotation for a medical image, the medical report needs to be updated to reflect said new annotation (See Reicher Par [0076]-[0077]). Claim 9 – Regarding Claim 9, Wu and Reicher disclose the method of claim 7 in its entirety. Wu and Reicher further disclose a method, wherein: updating the medical report by adding, in the medical report, a medical finding, the medical finding being determined by an artificial intelligence, AI, program module (See Wu Par [0035]-[0036] which discloses that after identifying the bounding regions of the standardized anatomical zones of the masked anatomical structures, the medical imaging report analysis mechanisms identify instances of references to anomalies in the text of the medical imaging report and identify the standardized anatomical zones corresponding to these identified instances of anomalies, i.e. 
a first and second since multiple identified instances of anomalies and references of anomalies are generated; See Wu Par [0081] which discloses training of a downstream AI and cognitive computing system, such that the AI can be designed to perform medical image analysis, such as anomaly detection and location in non-annotated and non-labeled medical images; See Wu Par [0082]-[0084] which further discloses that the AI/cognitive system can automatically annotate and label medical images based on standardized anatomical regions/zones and is able to correct the resulting bounding regions based on expected geometries and expected correspondence between medical image features and the standardized anatomical regions/zones; See Wu Par [0056]-[0057], [0076], & [0083] which discloses the automated medical image annotation and labeling (AMIAL) pipeline having a medical imaging report based anomaly labeling stage such that the system locally labels abnormalities but also actually labels whether each standardized anatomical zone is normal or not; See Reicher Par [0005] which discloses updating the first annotation displayed within the medical image to display the first annotation in a first manner different from a second manner used to display a second annotation within the medical image not mapped to any location within the electronic structured report, and specifically mentions at Reicher Par [0076]-[0077] that the reporting application is configured to automatically update a structured report based on modifications to existing annotations and/or manually modifying a structured report by a user, and these modifications occurring based on various user input such as clicking on, hovering over, or otherwise selecting an annotation within the image). 
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the disclosure of Wu, which already discloses performing medical image analysis, such as anomaly detection and location in non-annotated and non-labeled medical images, to automatically annotate and label medical images based on standardized anatomical regions/zones, to further specifically include generating a new/updated medical report by adding a medical finding based on user input, such as after the medical image analysis steps are performed, as disclosed by Reicher, because as a reader/user generates, adjusts, and/or changes an annotation for a medical image, the medical report needs to be updated to reflect said new annotation (See Reicher Par [0076]-[0077]). Claim 10 – Regarding Claim 10, Wu and Reicher disclose the method of claim 9 in its entirety. Wu further discloses a method, wherein: using at least one of the designated textual representation and the medical finding represented by the designated textual representation as training data for training the AI module (See Wu Par [0035]-[0036] which discloses that after identifying the bounding regions of the standardized anatomical zones of the masked anatomical structures, the medical imaging report analysis mechanisms identify instances of references to anomalies in the text of the medical imaging report and identify the standardized anatomical zones corresponding to these identified instances of anomalies, i.e. 
a first and second since multiple identified instances of anomalies and references of anomalies are generated; See Wu Par [0081] which discloses training of a downstream AI and cognitive computing system, such that the AI can be designed to perform medical image analysis, such as anomaly detection and location in non-annotated and non-labeled medical images; See Wu Par [0082]-[0084] which further discloses that the AI/cognitive system can automatically annotate and label medical images based on standardized anatomical regions/zones and is able to correct the resulting bounding regions based on expected geometries and expected correspondence between medical image features and the standardized anatomical regions/zones; See Wu Par [0056]-[0057], [0076], & [0083] which discloses the automated medical image annotation and labeling (AMIAL) pipeline having a medical imaging report based anomaly labeling stage such that the system locally labels abnormalities but also actually labels whether each standardized anatomical zone is normal or not). Claim 12 – Regarding Claim 12, Wu and Reicher disclose the method of claim 10 in its entirety. Wu further discloses a method, wherein: using at least one of the added medical finding and the textual representation of the added medical finding as training data for training the AI module (See Wu Par [0081] which discloses training of a downstream AI and cognitive computing system, such that the AI can be designed to perform medical image analysis, such as anomaly detection and location in non-annotated and non-labeled medical images; See Wu Par [0082]-[0084] which further discloses that the AI/cognitive system can automatically annotate and label medical images based on standardized anatomical regions/zones and is able to correct the resulting bounding regions based on expected geometries and expected correspondence between medical image features and the standardized anatomical regions/zones). 
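The zone-labeling mechanism the Examiner repeatedly cites from Wu Par [0034] and [0056]-[0057], in which each standardized anatomical zone identified by a bounding region is marked positive or negative for an anomaly based on the report text, can be pictured with a short, hypothetical Python sketch (all names, zones, and the naive text-matching rule are illustrative assumptions, not taken from Wu):

```python
from dataclasses import dataclass

@dataclass
class BoundingRegion:
    zone: str               # standardized anatomical zone, e.g. "left upper lobe"
    box: tuple              # (x, y, width, height) in image coordinates
    positive: bool = False  # positive/negative for the corresponding anomaly

def label_regions(regions, report_text):
    """Label each zone positive if the report mentions that zone."""
    for region in regions:
        region.positive = region.zone.lower() in report_text.lower()
    return regions

regions = [BoundingRegion("left upper lobe", (10, 10, 40, 40)),
           BoundingRegion("right lower lobe", (60, 60, 40, 40))]
report = "Opacity noted in the left upper lobe; remainder clear."
labeled = label_regions(regions, report)
# → left upper lobe labeled positive, right lower lobe labeled negative
```

The result is the "set of labeled, i.e. defined, bounding regions per medical image" that the citation describes; a real pipeline would use NLP rather than substring matching, but the positive/negative-per-zone data shape is the same.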
Claim 13 – Regarding Claim 13, Wu and Reicher disclose the method of claim 2 in its entirety. Wu and Reicher further disclose a method, wherein: at least one of steps (b) and (c) is repeated after updating the medical report (See Wu Par [0034] which discloses the associated medical imaging report is used to label, i.e. define, each standardized anatomical region/zone identified by the bounding regions, in the bounding region annotated medical image, as positive or negative for the corresponding anomaly (finding), resulting in a set of labeled, i.e. defined, bounding regions per medical image in the medical imaging study; See Wu Par [0081] which discloses training of a downstream AI and cognitive computing system, such that the AI can be designed to perform medical image analysis, such as anomaly detection and location in non-annotated and non-labeled medical images; See Wu Par [0082]-[0084] which further discloses that the AI/cognitive system can automatically annotate and label medical images based on standardized anatomical regions/zones and is able to correct the resulting bounding regions based on expected geometries and expected correspondence between medical image features and the standardized anatomical regions/zones; See Reicher Par [0005] which discloses updating the first annotation displayed within the medical image to display the first annotation in a first manner different from a second manner used to display a second annotation within the medical image not mapped to any location within the electronic structured report, and specifically mentions at Reicher Par [0076]-[0077] that the reporting application is configured to automatically update a structured report based on modifications to existing annotations and/or manually modifying a structured report by a user, and these modifications occurring based on various user input such as clicking on, hovering over, or otherwise selecting an annotation within the image). 
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the disclosure of Wu, which already discloses performing medical image analysis, such as anomaly detection and location in non-annotated and non-labeled medical images, to automatically annotate and label medical images based on standardized anatomical regions/zones, to further specifically include generating a new/updated medical report by adding a medical finding based on user input, such as after the medical image analysis steps are performed, as disclosed by Reicher, because as a reader/user generates, adjusts, and/or changes an annotation for a medical image, the medical report needs to be updated to reflect said new annotation (See Reicher Par [0076]-[0077]). Claim 14 is rejected under 35 U.S.C. 103 as being unpatentable over Wu in view of Tao et al. (U.S. Patent Publication No. 2020/0303062), hereinafter “Tao”. Claim 14 – Regarding Claim 14, Wu discloses the method of claim 1 in its entirety. Wu does not further disclose a method, wherein: the medical finding is stored as a node of a graph in a graph database. That is, Wu generally discloses receiving and/or updating medical findings associated with one or more medical images/regions of interest, but Wu does not explicitly disclose the medical finding is stored as a node of a graph in a graph database. 
However, Tao discloses that the medical finding is stored as a node of a graph in a graph database (it is understood by Examiner that in light of Applicant’s Specification, a graph database and/or graph is most likely referring to a knowledge graph, therefore see Tao Par [0044] & [0055] which discloses a knowledge graph establishment module configured to establish an image semantic representation knowledge graph according to a standardized dictionary library in the field of images and historically accumulated medical image report analysis, such that an image semantic representation knowledge graph and a variety of machine learning are combined to perform medical image recognition, sample images can be systematically and deeply accumulated, and the image semantic representation knowledge graph can be continuously improved, so that labeled focuses of many images can be continuously collected under the same sub-label; See Tao Par [0160] which discloses once a new discovery, i.e. medical finding, is confirmed, the new knowledge is added to the image semantic representation knowledge graph, such that it is understood that this would constitute being stored in a node of a knowledge graph). The disclosure of Tao is directly applicable to the disclosure of Wu because both disclosures share limitations and capabilities, such as being directed towards medical image processing and analysis for generation of one or more medical reports. 
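As an illustration of the Examiner's reading of Tao Par [0160], a confirmed medical finding stored as a node of a knowledge graph might look like the following hypothetical Python sketch; a minimal in-memory structure stands in for a real graph database, and all identifiers and property names are invented for illustration:

```python
class KnowledgeGraph:
    """Minimal in-memory stand-in for a graph database."""

    def __init__(self):
        self.nodes = {}   # node id -> property dict
        self.edges = []   # (source id, relation, target id) triples

    def add_node(self, node_id, **props):
        self.nodes[node_id] = props

    def add_edge(self, src, relation, dst):
        self.edges.append((src, relation, dst))

graph = KnowledgeGraph()
# Once a new finding is confirmed, store it as a node with its properties...
graph.add_node("finding:42", label="nodule", zone="right upper lobe", confirmed=True)
graph.add_node("image:7", modality="chest x-ray")
# ...and link it to the image in which it was observed.
graph.add_edge("finding:42", "OBSERVED_IN", "image:7")
```

In a production system this role would typically be filled by a dedicated graph database, but the sketch captures the claimed data shape: the finding itself is a node, and its relationships to images and zones are edges.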
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the disclosure of Wu, which already discloses receiving and/or updating medical findings associated with one or more medical images/regions of interest, to further include the medical finding being stored as a node of a graph in a graph database, as disclosed by Tao, because by storing updated medical findings in a node of a knowledge graph, an image semantic representation knowledge graph and a variety of machine learning can be combined to perform medical image recognition, sample images can be systematically and deeply accumulated, and the image semantic representation knowledge graph can be continuously improved, so that labeled focuses of many images can be continuously collected under the same sub-label (See Tao Par [0044] & [0055]). Conclusion The prior art made of record and not relied upon is considered pertinent to applicant's disclosure: Gallix et al. (U.S. Patent Publication No. 2017/0262584) discloses a system for automatically generating imaging reports including analyzing contextual information related to examination, analyzing data contained in the examination report, eliciting and producing relevant information from and within the collected and produced data, based on results of the contextual information analysis and of the report data analysis, and displaying the relevant information, in a simplified multi-dimensional manner as an interactive visual imaging report; Sorenson et al. (U.S. Patent Publication No. 2019/0392942) discloses a system including a findings engine that receives medical image data and generates findings based on the medical image data and image interpretation algorithms, and an adjustment engine allows the user to adjust the findings to produce a report; Glottmann et al. (U.S. Patent Publication No. 
2020/0321100) discloses a system for analysis and generation of medical imaging reports, including automated analysis of radiological information, such as medical images and related text statements for discrepancy analysis, accuracy analysis and quality assurance. Any inquiry concerning this communication or earlier communications from the examiner should be directed to HUNTER J RASNIC whose telephone number is (571)270-5801. The examiner can normally be reached M-F 8am-5:30pm. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Shahid Merchant can be reached at (571) 270-1360. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300. Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000. /H.R./Examiner, Art Unit 3684 /Shahid Merchant/Supervisory Patent Examiner, Art Unit 3684

Prosecution Timeline

Jan 16, 2025
Application Filed
Feb 04, 2026
Non-Final Rejection — §101, §102, §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12142364
SYSTEMS AND METHODS THAT PROVIDE A POSITIVE EXPERIENCE DURING WEIGHT MANAGEMENT
2y 5m to grant Granted Nov 12, 2024
Patent 11961606
Systems and Methods for Processing Medical Images For In-Progress Studies
2y 5m to grant Granted Apr 16, 2024
Patent 11908558
PROSPECTIVE MEDICATION FILLINGS MANAGEMENT
2y 5m to grant Granted Feb 20, 2024
Patent 11875904
IDENTIFICATION OF EPIDEMIOLOGY TRANSMISSION HOT SPOTS IN A MEDICAL FACILITY
2y 5m to grant Granted Jan 16, 2024
Patent 11862314
METHODS AND SYSTEMS FOR PATIENT CONTROL OF AN ELECTRONIC PRESCRIPTION
2y 5m to grant Granted Jan 02, 2024
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

1-2
Expected OA Rounds
11%
Grant Probability
32%
With Interview (+20.5%)
4y 7m
Median Time to Grant
Low
PTA Risk
Based on 81 resolved cases by this examiner. Grant probability derived from career allow rate.
