DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Receipt of Applicant’s Amendment filed November 20, 2025, is acknowledged.
Response to Amendment
Claims 89, 93, 97, 98, 100-102, and 104-106 have been amended. Claims 1-88, 91, 92, and 95 have been canceled. Claims 110 and 111 are new. Claims 89, 90, 93, 94, and 96-111 are pending and have been examined on their merits.
Response to Arguments
Applicant’s arguments with respect to claims 89, 90, 93, 94, 96-111 have been considered but are moot because the new ground of rejection does not rely on any reference applied in the prior rejection of record for any teaching or matter specifically challenged in the argument. A response is provided below in bold where appropriate.
Applicant argues 35 USC §101 Rejection, starting pg. 6 of Remarks:
35 U.S.C. § 101
Claims 89-109 stand rejected under 35 U.S.C. § 101. Applicant respectfully disagrees.
However, without conceding the basis of the rejections and solely to expedite prosecution of this Application, claim 89 is amended, as provided above.
As amended, the claims do not recite a method of organizing human activity or a mental process. Rather, they are directed to a specific, computer-implemented solution to a technical problem in medical image analysis and report generation.
Respectfully, providing patient information such as findings and medical reports is teaching, which is abstract as managing personal behavior or interactions between people.
The claims recite the use of a tag associated with an ontology and further leverage that ontology to incorporate a link into a medical report. The ontology is not a generic labeling scheme or static lookup table; it represents a structured, machine-readable framework that encodes a system of relationships between anatomic terms. These tags are not capable of being applied by a human or mentally determined; rather, they are retrieved and applied using digital memory operations and hierarchical logic.
As claimed, a person can write tags on anatomic structures (paper, x-ray, etc.) with a pen.
The ontology is also used downstream to associate a generated link with a term of anatomic terms related to a computer-generated finding. This ontology-based tagging provides consistency across variations in terminology, which improves how image data and associated findings are indexed, queried, re-accessed, and displayed. This process involves accessing ontology data stored in memory, retrieving the corresponding identifier for the detected anatomic structure, and storing that identifier as a tag in association with the image data and the AI-generated finding. These are computer-implemented data retrieval and storage operations, not acts that can be practically performed in the human mind or by using pen and paper.
A person with a pen can generate links (draw lines/arrows) corresponding to an anatomic structure and associate the links (the drawn lines/arrows) with a term.
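For illustration of the competing characterizations above, the tagging workflow the Remarks describe reduces to a lookup of a unique identifier in a stored ontology. The following is a minimal hypothetical sketch only; the terms, identifiers, and function names are not taken from the claims, the specification, or the cited references.

    # Minimal hypothetical sketch of an ontology-backed tag lookup.
    # All terms and identifiers are illustrative, not from the record.
    ONTOLOGY = {
        # anatomic term -> (unique identifier, parent term)
        "left lung": ("ANAT-0102", "lung"),
        "lung":      ("ANAT-0100", "thorax"),
        "thorax":    ("ANAT-0010", None),
    }

    def assign_tag(structure: str) -> dict:
        """Retrieve the structure's unique identifier from the stored
        ontology and return a tag recording its place in the hierarchy."""
        identifier, parent = ONTOLOGY[structure]
        return {"term": structure, "id": identifier, "parent": parent}

    print(assign_tag("left lung"))
    # {'term': 'left lung', 'id': 'ANAT-0102', 'parent': 'lung'}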
Additionally, claim 89 recites that the computer-generated finding is produced by an AI-based image analysis algorithm, an operation fundamentally distinct from any mental process. Humans do not replicate artificial intelligence-based image analysis algorithms, nor do they map ontological tags to computer-readable identifiers in memory. The use of AI, in combination with structured tagging and digital linking, defines a machine-executed workflow that lies outside of a mental process.
Respectfully, mapping is not claimed. The AI appears to be generic artificial intelligence claimed at a high level of generality. Further, a person can generate a finding associated with an anatomic structure in their mind or with pen and paper.
Furthermore, the elements of the claims are integrated into a practical application that improves the functioning of computer-based medical reporting systems. For example, the system assigns a tag to an anatomic structure using an ontology that allows for consistent, machine-readable mapping of anatomic structures to terminology. This function improves how anatomic structures in medical images are indexed, stored, and re-accessed.
Computer technology itself is not improved. Generating a link, even if not abstract, is recited at a high level of generality, and there is no teaching of an improvement to the technology that performs it.
Additionally, the ontology is used to associate a link corresponding to the anatomic structure in a medical image to a word or phrase related to a computer-generated finding in a medical report. This linkage helps ensure that the report language is linked to relevant text within the medical report.
Furthermore, the claim recites that selection of the link in said medical report causes retrieval and display of said medical image for viewing. This feature improves the usability and functionality of the medical report, enabling users, such as physicians, to navigate directly to relevant image features from within the report.
With all due respect, if the link is a hyperlink, this is not an improvement to technology but a use of existing software capabilities. There is no indication that Applicant invented or improved hyperlinks.
As such, the amended claims are directed to a specific, computer-implemented solution to a technical problem in radiology reporting and image analysis. They are not directed to an abstract idea and are useful for practical applications.
Accordingly, Applicant respectfully requests that the § 101 rejection of claim 89 be withdrawn.
Claims 90, 93-94, and 96-109 depend from and include all of the elements of claim 89, and recite additional elements of particular advantage and utility. The 35 U.S.C. § 101 rejections of claims 91, 92, and 95 are moot in view of their cancellation. Applicant respectfully requests that the 35 U.S.C. § 101 rejections of claims 89-90, 93-94, and 96-109 be withdrawn.
Using computers to perform a judicial exception, as claimed, is not enough to make abstract claims statutory. Using artificial intelligence at a high level of generality is likewise not enough, and no claim is directed to improving AI technology itself. Based on the above response, the rejection is respectfully maintained but modified for the claim amendments.
Applicant argues 35 USC §112 Rejection, pg. 8 of Remarks:
35 U.S.C. §112
Claims 89-109 stand rejected under 35 U.S.C. §112(b). Applicant respectfully disagrees.
However, without conceding the basis of the rejections and solely to expedite prosecution of this Application, Applicant has amended claim 89 to clarify the claimed subject matter. As amended, claim 89 specifies that a medical report is generated, and that a link is incorporated into said medical report, "wherein [said] selection of said link in said medical report causes retrieval and display of said medical image for viewing." The display of the link in the medical report could be used to retrieve the medical image at a different time from the initial display in step (i). The claim does not recite simultaneous display of the medical image and the medical report; rather, it recites two distinct and temporally separate display events.
Furthermore, the "said" in "said selection" has been removed.
Claims 90, 93-94, and 96-109 depend from and include all of the elements of claim 89, and recite additional elements of particular advantage and utility. Applicant respectfully submits that the 35 U.S.C. § 112 rejections of claims 91, 92, and 95 are moot in view of the cancellation of these claims. Applicant respectfully requests the 35 U.S.C. § 112 rejections of claims 89-90, 93-94, and 96-109 be withdrawn.
The rejections are withdrawn based on the claim amendments.
Applicant argues 35 USC §103 Rejection, starting pg. 9 of Remarks:
New prior art is cited to teach the amended claims, rendering the arguments moot.
Claim Rejections - 35 USC § 101
35 U.S.C. 101 reads as follows:
Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.
Claims 89, 90, 93, 94, 96-111 are rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more.
Claims 89, 90, 93, 94, 96-111 are directed to a system, which is a statutory category of invention. (Step 1: YES).
The Examiner has identified system Claim 89 as the claim that represents the claimed invention for analysis.
Claim 89 recites the limitations of:
A computer-based system for generating or processing a medical report, comprising:
(a) a processor;
(b) a display configured to show a graphical user interface (GUI);
(c) a non-transitory computer readable storage medium encoded with a computer program that causes said processor to:
(i) display a medical image on said GUI of said display, wherein said medical image comprises an anatomic structure;
(ii) assign a tag to said anatomic structure, wherein said tag comprises a unique identifier associated with said anatomic structure, said tag is associated with an ontology, and said ontology comprises a system of relationships between anatomic terms;
(iii) generate a medical report comprising a computer-generated finding associated with said anatomic structure, wherein said computer-generated finding is produced by an artificial intelligence-based algorithm analyzing said medical image;
(iv) generate a link corresponding to said anatomic structure; and
(v) incorporate said link into said medical report, wherein said ontology is used to associate said link with a term of said anatomic terms related to said computer-generated finding, and selection of said link in said medical report causes retrieval and display of said medical image for viewing.
These above limitations, under their broadest reasonable interpretation, cover performance of the limitation as certain methods of organizing human activity. The claim recites elements, in non-bold above, which cover performance of the limitation as managing personal behavior. Displaying a medical image of an anatomic structure, assigning a tag to said anatomic structure (following rules or instructions), generating a medical report comprising a finding associated with said anatomic structure (teaching), generating a link corresponding to said anatomic structure, and incorporating said link into said medical report such that selection of said link causes retrieval and display of the medical image (following rules or instructions) for viewing (teaching) are steps managing personal behavior. Retrieving a medical image for viewing is a user retrieving a patient’s image for viewing, which is accessing a patient’s image and is, therefore, also managing interactions between people. If a claim limitation, under its broadest reasonable interpretation, covers performance of the limitation as managing personal behavior or interactions between people, then it falls within the “Certain Methods of Organizing Human Activity” grouping of abstract ideas. Accordingly, the claim recites an abstract idea. Claim 89 is abstract. (Step 2A-Prong 1: YES. The claims are abstract)
Inasmuch as the claim displays an image of an anatomic structure, assigns a tag to said anatomic structure, generates a medical report finding associated with the anatomic structure, generates a link corresponding to said anatomic structure, and incorporates said link into said medical report and causes retrieval and viewing of the medical image, the claims are also abstract as a mental process. A person can display a medical image of an anatomic structure, such as by holding an x-ray in front of a light box, write a tag on an anatomic structure with a pen, generate a medical report finding with pen and paper, generate a link (draw a line/arrow) with a pen on an image and mark it with a code, use a pen to mark (link) a medical report with the code, and use the code to find the image associated with the medical report. See also paragraphs [0002] and [0004] of the specification, where interpretation of images is a manual process. See also MPEP 2106.04(a)(2) III C, where using a generic computer for a judicial exception was shown to fall under Mental Processes.
This judicial exception is not integrated into a practical application. In particular, the claims only recite: processor, display, computer readable storage medium, GUI, artificial intelligence. The computer hardware is recited at a high level of generality (i.e., as a generic processor performing a generic computer function) such that it amounts to no more than mere instructions to apply the exception using a generic computer component. The "generate a link" step is recited at a high level of generality and is itself abstract, as it further limits displaying images to a user. The GUI and artificial intelligence are recited at a high level of generality. Accordingly, these additional elements, when considered separately and as an ordered combination, do not integrate the abstract idea into a practical application because they do not impose any meaningful limits on practicing the abstract idea. Therefore, claim 89 is directed to an abstract idea without a practical application. (Step 2A-Prong 2: NO. The additional claimed elements are not integrated into a practical application)
The claims do not include additional elements that are sufficient to amount to significantly more than the judicial exception because, when considered separately and as an ordered combination, they do not add significantly more (also known as an “inventive concept”) to the exception. As discussed above with respect to integration of the abstract idea into a practical application, the additional element of using computer hardware amounts to no more than mere instructions to apply the exception using a generic computer component. Mere instructions to apply an exception using a generic computer component cannot provide an inventive concept. See Applicant’s specification para. [00315] about implementation using general purpose computing devices and MPEP 2106.05(f), where applying a computer as a tool is not indicative of significantly more. Steps such as retrieving (receiving) are considered insignificant extra-solution activity and mere instructions to apply the exception using general computer components (see MPEP 2106.05(d), II). Accordingly, these additional elements, when considered separately and as an ordered combination, do not amount to significantly more than the judicial exception because they do not impose any meaningful limits on practicing the abstract idea. Thus, claim 89 is not patent eligible. (Step 2B: NO. The claims do not provide significantly more)
Dependent claims 90, 93, 94, 96-111 further define the abstract idea that is present in their independent claim 89 and thus correspond to Certain Methods of Organizing Human Activity and Mental Processes, and hence are abstract for the reasons presented above. The dependent claims do not include any additional elements that integrate the abstract idea into a practical application or are sufficient to amount to significantly more than the judicial exception when considered both individually and as an ordered combination. Claim 94 recites an audio detection component, which appears to be a generic device applied at a high level of generality. Claims 98, 102, 104, 106, 107, and 109 recite a processor applied at a high level of generality. Claim 106 recites an eye-tracking component, which appears to be a generic device applied at a high level of generality. Claim 110 recites said GUI at a high level of generality. Therefore, claims 90, 93, 94, 96-111 are directed to an abstract idea. Thus, claims 89, 90, 93, 94, 96-111 are not patent-eligible.
Examiner Request
The Applicant is requested to indicate where in the specification there is support for any amendments to the claims, should Applicant amend. The purpose of this is to reduce potential 35 U.S.C. §112(a) or §112, 1st paragraph, issues that can arise when claims are amended without support in the specification. The Examiner thanks the Applicant in advance.
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.
This application currently names joint inventors. In considering patentability of the claims the examiner presumes that the subject matter of the various claims was commonly owned as of the effective filing date of the claimed invention(s) absent any evidence to the contrary. Applicant is advised of the obligation under 37 CFR 1.56 to point out the inventor and effective filing dates of each claim that was not commonly owned as of the effective filing date of the later invention in order for the examiner to consider the applicability of 35 U.S.C. 102(b)(2)(C) for any potential 35 U.S.C. 102(a)(2) prior art against the later invention.
Claims 89, 90, 93, 94, 96, and 105 are rejected under 35 U.S.C. 103 as being unpatentable over Pub. No. US 2020/0395119 to Lyman et al. in view of Pub. No. US 2020/0126648 to Schadewaldt et al.
Regarding claim 89
A computer-based system for generating or processing a medical report, comprising:
(a) a processor;
Lyman et al. teaches:
Processor of a processing system…
“…The medical scan image analysis system can include a processing system that includes a processor and a memory that stores executable instructions that, when executed by the processing system, facilitate performance of operations.” [0049]
Generate medical data and text (medical report)…
“The medical scan assisted review system 102 can be operable to receive, via a network, a medical scan for review. Abnormality annotation data can be generated by identifying one or more of abnormalities in the medical scan by utilizing a computer vision model that is trained on a plurality of training medical scans. The abnormality annotation data can include location data and classification data for each of the plurality of abnormalities and/or data that facilitates the visualization of the abnormalities in the scan image data. Report data including text describing each of the plurality of abnormalities is generated based on the abnormality data. The visualization and the report data, which can collectively be displayed annotation data, can be transmitted to a client device. A display device associated with the client device can display the visualization in conjunction with the medical scan via an interactive interface, and the display device can further display the report data via the interactive interface.” [0038]
(b) a display configured to show a graphical user interface (GUI);
Example of display…
“The medical scan assisted review system 102 can be used to aid medical professionals or other users in diagnosing, triaging, classifying, ranking, and/or otherwise reviewing medical scans by presenting a medical scan for review by a user by transmitting medical scan data of a selected medical scan and/or interface feature data of selected interface features of to a client device 120 corresponding to a user of the medical scan assisted review system for display via a display device of the client device. The medical scan assisted review system 102 can generate scan review data for a medical scan based on user input to the interactive interface displayed by the display device in response to prompts to provide the scan review data, for example, where the prompts correspond to one or more interface features.” [0037]
Interactive interface with menu data and touchscreen display (graphical user interface)…
“The one or more processing devices 230 can display interactive interface 275 on the one or more client display devices 270 in accordance with one or more of the client applications 202, 204, 206, 208, 210, 212, 214, and/or 216, for example, where a different interactive interface 275 is displayed for some or all of the client applications in accordance with the website presented by the corresponding subsystem 102, 104, 106, 108, 110, 112, 114 and/or 116. The user can provide input in response to menu data or other prompts presented by the interactive interface via the one or more client input devices 250, which can include a microphone, mouse, keyboard, touchscreen of display device 270 itself or other touchscreen, and/or other device allowing the user to interact with the interactive interface. The one or more processing devices 230 can process the input data and/or send raw or processed input data to the corresponding subsystem, and/or can receive and/or generate new data in response for presentation via the interactive interface 275 accordingly, by utilizing network interface 260 to communicate bidirectionally with one or more subsystems and/or databases of the medical scan processing system via network 150.” [0054]
“FIGS. 15C-15V are graphical illustrations of an example interactive interface displayed on a client device in conjunction with various embodiments;”
(c) a non-transitory computer readable storage medium encoded with a computer program that causes said processor to:
Memory…
“…The medical scan image analysis system can include a processing system that includes a processor and a memory that stores executable instructions that, when executed by the processing system, facilitate performance of operations.” [0049]
(i)display a medical image on said GUI of said display, wherein said medical image comprises an anatomic structure;
[No Patentable Weight is given to the non-functional descriptive claim language that the displayed medical image comprises an anatomic structure.]
Fig. 15P is an example of a display of a medical image of lungs (anatomic structure) with an interactive user interface (GUI)…
[media_image1.png: greyscale screenshot of the interactive interface]
“FIGS. 15C-15V are graphical illustrations of an example interactive interface displayed on a client device in conjunction with various embodiments;”
(ii) assign a tag to said anatomic structure, wherein said tag comprises a unique identifier associated with said anatomic structure, said tag is associated with an ontology, and said ontology comprises a system of relationships between anatomic terms;
Scan classifier data include region data and scan of anatomical region (chest, head, etc.)…
“Scan classifier data 420 can indicate classifying data of the medical scan. Scan classifier data can include scan type data 421, for example, indicating the modality of the scan. The scan classifier data can indicate that the scan is a CT scan, x-ray, MRI, PET scan, Ultrasound, EEG, mammogram, or other type of scan. Scan classifier data 420 can also include anatomical region data 422, indicating for example, the scan is a scan of the chest, head, right knee, or other anatomical region…” [0063]
Medical scan identifiers or labels (tag) for anatomical region…
“The medical scan set request can indicate particular identifiers or criteria that the set of medical scans should meet. In some embodiments, a set of medical scans that meet the criteria are randomly or pseudo-randomly selected. In some embodiments, the set of medical scans do not have any diagnosis data 440 or other labeling data associated with them in the medical scan database and/or are otherwise selected in response to determining they need to be labeled. The criteria used to select the set of medical scans can include a number of medical scans to be selected, at least one desired modality, at least one desired anatomical region, other features of scan classifier data 420, an urgency level, a recency that the medical scans were added to the system, or other criteria. This criteria can be determined via user input by an administrator and/or can be determined automatically by the medical scan labeling quality assurance system 3004, for example, based on one or more scan categories determined to best assess the set of labelers. For example, in response to determining previous sets of medical scans did not include a threshold number of medical scans corresponding to a scan category, at least one medical scan, or at least a threshold number of medical scans, of that scan category can be included in the set of medical scans.” [0309]
See Tag and Ontology below.
(iii) generate a medical report comprising a computer-generated finding associated with said anatomic structure, wherein said computer-generated finding is produced by an artificial intelligence-based algorithm analyzing said medical image;
Use anatomical region-specific scan category and generate diagnosis data (finding)…
“Training on these categorized sets separately can ensure each medical scan inference function 1105 is calibrated according to its scan category 1120, for example, allowing different inference functions to be calibrated on type specific, anatomical region specific, hospital specific, machine model specific, and/or region-specific tendencies and/or discrepancies. Some or all of the medical scan inference functions 1105 can be trained by the medical scan image analysis system and/or the medical scan natural language processing system, and/or some medical scan inference functions 1105 can utilize both image analysis and natural language analysis techniques to generate inference data 1110. For example, some or all of the inference functions can utilize image analysis of the medical scan image data 410 and/or natural language data extracted from abnormality annotation data 442 and/or report data 449 as input, and generate diagnosis data 440 such as medical codes 447 as output. Each medical scan inference function can utilize the same or different learning models to train on the same or different features of the medical scan data, with the same or different model parameters, for example indicated in the model type data 622 and model parameter data 623. Model type and/or parameters can be selected for a particular medical scan inference function based on particular characteristics of the one or more corresponding scan categories 1120, and some or all of the indicated in the model type data 622 and model parameter data 623 can be selected automatically by a subsystem during the training process based on the particular learned and/or otherwise determined characteristics of the one or more corresponding scan categories 1120.” [0109]
Artificial intelligence…
“Having determined the subregion training set 1315 of three-dimensional subregions 1310 corresponding to the set of full medical scans in the training set, the medical scan image analysis system can complete a training step 1352 by performing a learning algorithm on the plurality of three-dimensional subregions to generate model parameter data 1355 of a corresponding learning model. The learning model can include one or more of a neural network, an artificial neural network, a convolutional neural network, a Bayesian model, a support vector machine model, a cluster analysis model, or other supervised or unsupervised learning model. The model parameter data 1355 can generated by performing the learning algorithm 1350, and the model parameter data 1355 can be utilized to determine the corresponding medical scan image analysis functions. For example, some or all of the model parameter data 1355 can be mapped to the medical scan analysis function in the model parameter data 623 or can otherwise define the medical scan analysis function.” [0132]
(iv) generate a link corresponding to said anatomic structure; and
An identifier to link (therefore, generating a link) to a DICOM (Digital Imaging and Communications in Medicine format) image…
“Once the annotation data is generated by performing the selected inference function, the annotating system 2612 can generate an annotated DICOM file for transmission to the medical image picture system 2620 for storage. The annotated DICOM file can include some or all of the fields of the diagnosis data 440 and/or abnormality annotation data 442 of FIGS. 4A and 4B. The annotated DICOM file can include scan overlay data, providing location data of an identified abnormality and/or display data that can be used in conjunction with the original DICOM image to indicate the abnormality visually in the DICOM image and/or to otherwise visually present the annotation data, for example, for use with the medical scan assisted review system 102. For example, a DICOM presentation state file can be generated to indicate the location of an abnormality identified in the de-identified medical scan. The DICOM presentation state file can include an identifier of the original DICOM image, for example, in metadata of the DICOM presentation state file, to link the annotation data to the original DICOM image. In other embodiments, a full, duplicate DICOM image is generated that includes the annotation data with an identifier linking this duplicate annotated DICOM image to the original DICOM image.” [0159]
Links with scan images…
“The diagnosis data can include report data 449 that includes at least one medical report, which can be formatted to include some or all of the medical codes 447, some or all of the natural language text data 448, other diagnosis data 440, full or cropped images slices formatted based on the display parameter data 470 and/or links thereto, full or cropped images slices or other data based on similar scans of the similar scan data 480 and/or links thereto, full or cropped images or other data based on patient history data 430 such as longitudinal data 433 and/or links thereto, and/or other data or links to data describing the medical scan and associated abnormalities. The diagnosis data 440 can also include finalized diagnosis data corresponding to future scans and/or future diagnosis for the patient, for example, biopsy data or other longitudinal data 433 determined subsequently after the scan. The medical report of report data 449 can be formatted based on specified formatting parameters such as font, text size, header data, bulleting or numbering type, margins, file type, preferences for including one or more full or cropped image slices 412, preferences for including similar medical scans, preferences for including additional medical scans, or other formatting to list natural language text data and/or image data, for example, based on preferences of a user indicated in the originating entity data 423 or other responsible user in the corresponding report formatting data.” [0075]
Where scan represents anatomical region…
“The medical scan image analysis system 112 can be operable to receive a plurality of medical scans that represent a three-dimensional anatomical region and include a plurality of cross-sectional image slices. A plurality of three-dimensional subregions corresponding to each of the plurality of medical scans can be generated by selecting a proper subset of the plurality of cross-sectional image slices from each medical scan, and by further selecting a two-dimensional subregion from each proper subset of cross-sectional image slices. A learning algorithm can be performed on the plurality of three-dimensional subregions to generate a neural network. Inference data corresponding to a new medical scan received via the network can be generated by performing an inference algorithm on the new medical scan by utilizing the neural network. An inferred abnormality can be identified in the new medical scan based on the inference data.” [0050]
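Before turning to element (v), the identifier-based linkage Lyman describes in paragraph [0159] can be pictured as storing, with the annotation data, an identifier that resolves back to the original image. The sketch below is hypothetical; the field names and data layout are illustrative assumptions, not Lyman’s disclosed data model.

    # Hypothetical sketch of linking annotation data to an original image
    # by a stored identifier, in the manner Lyman [0159] describes for
    # DICOM presentation state files. Field names are illustrative only.
    images = {"IMG-001": {"pixels": "...", "region": "chest"}}

    annotation = {
        "source_image_id": "IMG-001",  # identifier linking to the original image
        "abnormality": "nodule",
        "location": (120, 84),
    }

    def resolve_image(annotation: dict) -> dict:
        """Follow the stored identifier back to the original image."""
        return images[annotation["source_image_id"]]

    print(resolve_image(annotation)["region"])  # chest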
(v) incorporate said link into said medical report, wherein said ontology is used to associate said link with a term of said anatomic terms related to said computer-generated finding, and selection of said link in said medical report causes retrieval and display of said medical image for viewing.
[No Patentable Weight is given to intended use language of “for viewing” as viewing may never happen.]
Medical report to include images with links thereto…
“The diagnosis data can include report data 449 that includes at least one medical report, which can be formatted to include some or all of the medical codes 447, some or all of the natural language text data 448, other diagnosis data 440, full or cropped images slices formatted based on the display parameter data 470 and/or links thereto, full or cropped images slices or other data based on similar scans of the similar scan data 480 and/or links thereto, full or cropped images or other data based on patient history data 430 such as longitudinal data 433 and/or links thereto, and/or other data or links to data describing the medical scan and associated abnormalities. The diagnosis data 440 can also include finalized diagnosis data corresponding to future scans and/or future diagnosis for the patient, for example, biopsy data or other longitudinal data 433 determined subsequently after the scan. The medical report of report data 449 can be formatted based on specified formatting parameters such as font, text size, header data, bulleting or numbering type, margins, file type, preferences for including one or more full or cropped image slices 412, preferences for including similar medical scans, preferences for including additional medical scans, or other formatting to list natural language text data and/or image data, for example, based on preferences of a user indicated in the originating entity data 423 or other responsible user in the corresponding report formatting data.” [0075]
Example of human viewing the medical report and/or image…
“The global text fiducials and/or global image fiducials can be recognizable by inference functions and/or training functions, for example, where the global text fiducials and global image fiducials are ignored when processed in a training step to train an inference function and/or are ignored in an inference step when processed by an inference function. Furthermore, the global text fiducials and/or global image fiducials can be recognizable by a human viewing the header, medical report, and/or image data. For example, a radiologist or other medical professional, upon viewing a header, medical report, and/or image data, can clearly identify the location of a patient identifier that was replaced by the fiducial and/or can identify the type of patient identifier that was replaced by the fiducial.” [0227]
See Link below.
Tag and Ontology
Lyman et al. teaches identifiers or labels (tags). They do not literally teach tags associated with an ontology.
Schadewaldt et al., also in the business of tags, teaches:
Generating (assign) image tags to anatomical features, where tag is associated with medical ontology…
“To provide these features, a report-images linkage component 50 is provided. The illustrative linkage component 50 is implemented on the server computer 20, which may be the same server computer 20 that implements the PACS 24 (as shown) or may be a different computer server in communication with the PACS. The linkage component includes an anatomical features tagger 52 for generating a set of image tags identifying anatomical features in the at least one radiology image 30, a clinical concepts tagger 54 for generating a set of report tags identifying clinical concepts in passages of the radiology report 32, and a medical ontology 56 for linking the clinical concepts and the anatomical features.” [0025]
Where image tags with metadata including patient identifier (PID), date of acquisition, etc. (therefore, unique identifier)…
“The radiology viewer workstation 10 retrieves a radiology examination 22 from a radiology examinations data storage, such as an illustrative Picture Archiving and Communication System (PACS) 24. Diagrammatic FIG. 1 illustrates a single illustrative radiology examination 22; however, it will be understood that that PACS 24 typically stores all radiology reports for a given patient, and for all patients who have been imaged by the radiology department or other radiology imaging service, suitably indexes by parameters such as patient identifier (PID), date of examination, date of radiology reading, imaging modality, imaged anatomical region, and/or so forth. The illustrative radiology examination 22 includes a set of radiology images 30 and a radiology report 32. The radiology images 30 could be as few as a single image, though in most cases the radiology examination 22 will, as shown in FIG. 1, include a plurality of images. Each image typically has metadata stored with the images, for example as image tags in a standard DICOM format. These tags may, for example, identify PID, date of acquisition, imaging acquisition parameters, and so forth…” [0021]
Ontology consulted to identify anatomical features (plural) related to clinical concept, therefore ontology has relationships between anatomical features…
“With reference to FIGS. 3 and 4, illustrative processing for executing the step S3 of FIG. 2 are described. FIG. 3 depicts the process for highlighting relevant anatomical feature in the image in response to user selection of a passage of the radiology report. In an operation S10, the user selection of the report passage at the workstation 10 using one of the user interface devices 14, 16, 18 is detected. For example, the user may click on a word or sentence of the report. In an operation S12, the clinical concepts described or mentioned in the selected passage are identified by referencing the contextual tags of the radiology report 32. In an operation S14, the ontology 56 is consulted to identify corresponding anatomical feature(s) that are related to the identified clinical concept. In an operation S16, the image tags are consulted to identify the corresponding anatomical feature(s) in the radiology image. In an operation S18, the anatomical feature(s) are highlighted in the image (portion) displayed in the image window 40, and optionally the selected passage of the report is also highlighted in the report window 42.” [0034]
Example of augment (relationship) domain-specific ontology content (cardiac) with lay terms such as heart, therefore relationships between anatomical terms…
“The medical ontology 56 employed for mining clinical concepts is generally a domain-specific ontology, specifically a highly specialized medical or radiology ontology. However, for assisting the lay patient in understanding his or her radiology examination, it is contemplated to augment the domain-specific ontology content with lay terms that may be more comprehensible to the patient. For example, terms such as “cardiac” may be augmented by “heart”, or so forth.” [0038]
It would have been obvious to one of ordinary skill in the art before the effective filing date to include in the method and system of Lyman et al. the ability to tag anatomical features and use ontology as taught by Schadewaldt et al. since the claimed invention is merely a combination of old elements and in the combination each element merely would have performed the same function as it did separately, and one of ordinary skill in the art would have recognized that the results of the combination were predictable. Further motivation is provided by Schadewaldt et al. who teaches the advantages of tagging features and using ontology for finding information. Lyman benefits as they also need to locate information for viewing.
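The flow Schadewaldt describes in paragraph [0034] (selected passage, to clinical concept, to ontology, to anatomical features, to image tags) can be summarized in a short sketch. The mappings below are hypothetical stand-ins for the reference’s taggers 52, 54 and ontology 56, shown only to make the combination concrete.

    # Hypothetical sketch of Schadewaldt's steps S12-S16: a selected
    # report passage maps to a clinical concept, the ontology relates
    # the concept to anatomical features, and image tags locate them.
    REPORT_TAGS = {"...cirrhosis noted...": "cirrhosis"}                      # S12
    ONTOLOGY = {"cirrhosis": ["liver"]}                                       # S14
    IMAGE_TAGS = {"liver": {"image": "IMG-001", "bbox": (40, 60, 200, 220)}}  # S16

    def features_for_passage(passage: str) -> list:
        concept = REPORT_TAGS[passage]           # S12: concept in passage
        anatomy = ONTOLOGY[concept]              # S14: consult ontology
        return [IMAGE_TAGS[a] for a in anatomy]  # S16: locate in image

    print(features_for_passage("...cirrhosis noted..."))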
Link
The combined references teach a link. They do not specifically teach selection of a link and the ontology.
Schadewaldt et al., also in the business of tags, teaches:
Selected features and linkages in viewer (medical report) associated with an anatomic term…
“The radiology viewer leverages the thusly generated image tags and report tags to enable the display of automated linkages 70 between user-selected anatomical features of the images 30 and corresponding passages of the radiology report 32; or, conversely, enables automated display of linkages 70 between user-selected passages of the radiology report 32 and corresponding anatomical features of the images 30. For example, if the user selects the liver in a radiology image then the image tags are consulted to determine that the point selected in the image is the liver, then the report tags are searched to identify clinical concepts (if any) relating to the liver by searching those clinical concepts in the ontology 56 to detect references of the concepts to the liver, and finally the corresponding passages of the radiology report 32 are highlighted in the report window 42. Conversely, if the user selects a passage containing the keyword “cirrhosis” in the radiology report 32, then the report tags are consulted to determine that the selected passage pertains to the clinical concept of cirrhosis of the liver, then the image tags are searched to identify the liver in the radiology image(s) 30, and finally the identified liver anatomical feature is highlighted in the image window 40.” [0028]
Example of augment (relationship) domain-specific ontology content (cardiac) with lay terms such as heart, therefore relationships between anatomical terms…
“The medical ontology 56 employed for mining clinical concepts is generally a domain-specific ontology, specifically a highly specialized medical or radiology ontology. However, for assisting the lay patient in understanding his or her radiology examination, it is contemplated to augment the domain-specific ontology content with lay terms that may be more comprehensible to the patient. For example, terms such as “cardiac” may be augmented by “heart”, or so forth.” [0038]
“In improved radiology viewer embodiments disclosed herein, linkages are determined between clinical concepts presented in the radiology report 32 of the radiology examination 22 and related anatomical features in the underlying medical images 30 of the radiology examination 22, and the radiology viewer graphically presents these linkages to the patient or other user in an intuitive fashion. This promotes synthesis of contents of the radiology report with features shown in the underlying medical images. While such assistance may be of value to a radiologist, this assistance is of particular value for lay patient consumption of the radiology examination 22, as the lay patient is generally unfamiliar with clinical terminology, anatomical terminology, and the ways in which various imaging modalities capture anatomical features.” [0024]
Another example of ontology with anatomical features linked…
“The steps S1 and S2 may be performed as pre-processing, e.g. at the time the radiology report 32 is filed by the radiologist. Thereafter, the generated anatomical feature tags may be stored as DICOM tags with the images 30, and the generated clinical concept tags are suitably stored with the radiology report 32. Thereafter, when the patient or other user views the radiology examination 22 using the radiology viewer workstation 10, in a step S3 when the user selects an image location or a report passage, the anatomy corresponding to the image location or the clinical concept contained in the passage are determined by referencing the image tags or report tags, respectively, and the ontology 56 is referenced to identify the corresponding report passage(s) or image anatomical feature(s). Thus, via the common ontology 56, clinical concepts and anatomical features are linked. In some embodiments, the linkage step S3 is extended over multiple radiology examinations to identify relations of different time-points in the different examinations. In this way, due to the link with the images, a patient can follow the genesis and/or evolution of an anatomical feature over multiple time points represented by different radiology examinations, even if the structure is not remarked upon in one or more of the radiology reports. For example, if a tumor appears in the kidney, the patient may look at the changes in the kidney across successive radiology examinations, without having to know how to find the kidney in the images of each examination, via the anatomical feature tags.” [0033]
Example of click (select) a word or sentence (link)…
“With reference to FIGS. 3 and 4, illustrative processing for executing the step S3 of FIG. 2 are described. FIG. 3 depicts the process for highlighting relevant anatomical feature in the image in response to user selection of a passage of the radiology report. In an operation S10, the user selection of the report passage at the workstation 10 using one of the user interface devices 14, 16, 18 is detected. For example, the user may click on a word or sentence of the report. In an operation S12, the clinical concepts described or mentioned in the selected passage are identified by referencing the contextual tags of the radiology report 32. In an operation S14, the ontology 56 is consulted to identify corresponding anatomical feature(s) that are related to the identified clinical concept. In an operation S16, the image tags are consulted to identify the corresponding anatomical feature(s) in the radiology image. In an operation S18, the anatomical feature(s) are highlighted in the image (portion) displayed in the image window 40, and optionally the selected passage of the report is also highlighted in the report window 42.” [0034]
It would have been obvious to one of ordinary skill in the art before the effective filing date to include in the method and system of the combined references the ability to select a link as taught by Schadewaldt et al. since the claimed invention is merely a combination of old elements and in the combination each element merely would have performed the same function as it did separately, and one of ordinary skill in the art would have recognized that the results of the combination were predictable. Further motivation is provided by Schadewaldt et al. who teaches the advantages of selecting a link to view an image. Lyman benefits as they also need to view an image in a report.
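Read together, the combined teaching amounts to a report link that resolves, through the ontology tag, to retrieval of the stored image. A minimal hypothetical sketch follows, with invented identifiers and an HTML-style anchor standing in for any link format:

    # Hypothetical sketch: incorporate a selectable link into report text
    # and resolve selection of that link to retrieval of the tagged image.
    IMAGE_STORE = {"ANAT-0102": "scan_0042.dcm"}  # tag id -> stored image

    def incorporate_link(report: str, term: str, tag_id: str) -> str:
        """Wrap the ontology term in the report with a selectable link."""
        return report.replace(term, '<a href="tag:%s">%s</a>' % (tag_id, term))

    def on_select(href: str) -> str:
        """Selection handler: retrieve the image named by the tag id."""
        return IMAGE_STORE[href.split(":", 1)[1]]

    report = incorporate_link("Opacity in the left lung.", "left lung", "ANAT-0102")
    print(report)                      # ...<a href="tag:ANAT-0102">left lung</a>
    print(on_select("tag:ANAT-0102"))  # scan_0042.dcm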
Regarding claim 90
The system of claim 89, wherein said medical report comprises a lab value associated with said anatomic structure.
Lyman et al. teaches:
Medical report with biopsy (lab) data…
“The diagnosis data can include report data 449 that includes at least one medical report, which can be formatted to include some or all of the medical codes 447, some or all of the natural language text data 448, other diagnosis data 440, full or cropped images slices formatted based on the display parameter data 470 and/or links thereto, full or cropped images slices or other data based on similar scans of the similar scan data 480 and/or links thereto, full or cropped images or other data based on patient history data 430 such as longitudinal data 433 and/or links thereto, and/or other data or links to data describing the medical scan and associated abnormalities. The diagnosis data 440 can also include finalized diagnosis data corresponding to future scans and/or future diagnosis for the patient, for example, biopsy data or other longitudinal data 433 determined subsequently after the scan. The medical report of report data 449 can be formatted based on specified formatting parameters such as font, text size, header data, bulleting or numbering type, margins, file type, preferences for including one or more full or cropped image slices 412, preferences for including similar medical scans, preferences for including additional medical scans, or other formatting to list natural language text data and/or image data, for example, based on preferences of a user indicated in the originating entity data 423 or other responsible user in the corresponding report formatting data.” [0075]
Where values are assigned to diagnosis data…
“…Alternatively or in addition, some or all of the abnormality classification data 445 or other diagnosis data 440 for the previous scan can be assigned values determined based on the abnormality classification data or other diagnosis data determined for the current scan. Such abnormality classification data 445 or other diagnosis data 440 determined for the previous scan can be mapped to the previous scan, and or mapped to the longitudinal data 433, in the database and/or transmitted to a responsible entity via the network.” [0041]
Regarding claim 93
The system of claim 89, wherein said computer-generated finding comprises a diagnosis.
Lyman et al. teaches:
Computer vision for diagnosing…
“A medical scan diagnosing system 108 can be used by hospitals, medical professionals, or other medical entities to automatically produce inference data for given medical scans by utilizing computer vision techniques and/or natural language processing techniques. This automatically generated inference data can be used to generate and/or update diagnosis data or other corresponding data of corresponding medical scan entries in a medical scan database. The medical scan diagnosing system can utilize a medical scan database, user database, and/or a medical scan analysis function database by communicating with the database storage system 140 via the network 150, and/or can utilize another medical scan database, user database, and/or function database stored in local memory.” [0046]
Regarding claim 94
The system of claim 89, wherein said system further comprises an audio detection component configured to detect or record an input.
[No Patentable Weight is given to intended use language of “configured to detect or record an input” as detection or recording may never happen.]
Lyman et al. teaches:
A microphone (audio detection component) for input…
“The user can provide input in response to menu data or other prompts presented by the interactive interface via the one or more client input devices 250, which can include a microphone, mouse, keyboard, touchscreen of display device 270 itself or other touchscreen, and/or other device allowing the user to interact with the interactive interface. The one or more processing devices 230 can process the input data and/or send raw or processed input data to the corresponding subsystem, and/or can receive and/or generate new data in response for presentation via the interactive interface 275 accordingly, by utilizing network interface 260 to communicate bidirectionally with one or more subsystems and/or databases of the medical scan processing system via network 150.” [0054]
Regarding claim 96
The system of claim 89, wherein said computer program causes said processor to analyze said medical image using an image segmentation algorithm.
Lyman et al. teaches:
Example of machine learning algorithm to segregate scans (images)…
“…Each of the plurality of neural network models can be generated based on the same or different learning algorithm that utilizes the same or different features of the medical scans in the corresponding one of the plurality of training sets. The medical scan classifications selected to segregate the medical scans into multiple training sets can be received via the network, for example based on input to an administrator client device from an administrator. The medical scan classifications selected to segregate the medical scans can be automatically determined by the medical scan image analysis system, for example, where an unsupervised clustering algorithm is applied to the original training set to determine appropriate medical scan classifications based on the output of the unsupervised clustering algorithm.” [0128]
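Lyman’s unsupervised segregation of scans into training sets, quoted above, can be illustrated with a generic clustering call. The feature vectors and the choice of k-means below are assumptions made for illustration; Lyman [0128] does not specify this implementation.

    # Hypothetical sketch of segregating scans into training sets with an
    # unsupervised clustering algorithm, per the general description in
    # Lyman [0128]. Features and the k-means choice are assumptions.
    import numpy as np
    from sklearn.cluster import KMeans

    scan_features = np.array([
        [0.9, 0.1],  # e.g., chest-like feature vector
        [0.8, 0.2],
        [0.1, 0.9],  # e.g., head-like feature vector
        [0.2, 0.8],
    ])

    labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(scan_features)
    training_sets = {int(k): scan_features[labels == k] for k in set(labels)}
    print(labels)  # two segregated training sets, e.g., [1 1 0 0]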
Regarding claim 105
The system of claim 89, wherein said GUI shows said medical image and said medical report simultaneously.
[No Patentable Weight is given to non-functional descriptive claim language of showing said medical image and said medical report simultaneously.]
Lyman et al. teaches:
Interactive interface (GUI) and display of medical scan and report data…
“…Report data including text describing each of the plurality of abnormalities is generated based on the abnormality data. The visualization and the report data, which can collectively be displayed annotation data, can be transmitted to a client device. A display device associated with the client device can display the visualization in conjunction with the medical scan via an interactive interface, and the display device can further display the report data via the interactive interface.” [0038]
Claims 97-104 and 107-109 are rejected under 35 U.S.C. 103 as being unpatentable over the combined references in section (6) above in further view of Pub. No. US 2019/0333217 to Bronkalla et al.
Regarding claim 97
The system of claim 96, wherein said image segmentation algorithm detects a boundary of said anatomic structure and delineates said boundary of said anatomic structure.
Lyman et al. teaches:
Example of partition scan (image) with anatomical regions (boundary) for head, chest, etc. (delineate boundary)…
“In some embodiments, a medical scan can be automatically pre-processed to partition the medical scan in accordance with multiple anatomical regions included within the medical scan. For example, a full body scan can be partitioned into a set of medical scan portions, where each medical scan portion corresponds to each of a set of anatomical regions. For example, the full body scan can be partitioned into medical scan portions corresponding to the head, chest, arm, leg, etc. for individual labeling. These partitions can each be labeled by the same user or by different users. For example, the medical scan hierarchical labeling system 3002 can perform this pre-processing step prior to transmission of the medical scan to a client device. The different medical scan portions can be sent to different users for labeling based on determining each user has favorable qualification data and/or performance score data for the corresponding anatomical region. The labeling data can be retrieved from all of the users and can be compiled for the original medical scan to be mapped to the medical scan database. In some embodiments, the pre-processing step is performed after a medical scan is retrieved by a client device as part of execution of the labeling application. Each partition can be presented in conjunction with prompt decision trees corresponding to the anatomical region of each partition and/or in conjunction with starting nodes of the prompt decision trees determined based on the anatomical region of each partition.” [0288]
The combined references teach a medical image. They do not teach the segmentation algorithm and boundary features.
Bronkalla et al., also in the business of medical images, teaches:
Image segmentation process (algorithm) to locate (detect) boundaries in images…
“Similarly, the image study source 104 includes (stored within the memory module 115b) a feature extraction unit 116. In one embodiment, the feature extraction unit 116 is a software program including instructions identifying features in an image, such as anatomical structures or views, such as Parasternal Long Axis that contains several different anatomical structures. In some embodiments, the feature extraction unit 116 identifies image feature using image segmentation. Image segmentation is a process of partitioning a digital image into multiple segments (sets of pixels) to locate objects and boundaries in images. For example, the feature extraction unit 116 may be configured to process an image and determine anatomical localization information using one or more anatomical atlases or from a different localization or recognition approach, such as, for example, a neural network recognition approach. For example, in a chest computed tomography (“CT”) image, anatomical regions, such as the ascending aorta, left ventricle, T3 vertebra, and other regions, can be identified and labeled, such as by using multi-atlas segmentation. Each atlas includes a medical image obtained with a known imaging modality and using a particular imaging procedure and technique wherein each pixel in the medical image is labeled as a particular anatomical structure. Accordingly, an atlas can be used to label anatomical structures in an image.” [0024]
It would have been obvious to one of ordinary skill in the art before the effective filing date to include in the method and system of the combined references the ability to segment an image as taught by Bronkalla et al. since the claimed invention is merely a combination of old elements and in the combination each element merely would have performed the same function as it did separately, and one of ordinary skill in the art would have recognized that the results of the combination were predictable. Further motivation is provided by Bronkalla et al. who teaches the advantages of segmenting images to detect boundaries.
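Neither reference provides source code; purely as an illustrative sketch of the principle at issue, the following Python fragment (all names and data hypothetical) segments a two-dimensional image by thresholding, detects the boundary of the segmented structure, and delineates it as a pixel mask:

    import numpy as np

    def delineate_boundary(image: np.ndarray, threshold: float) -> np.ndarray:
        """Return a boolean mask marking the boundary pixels of the segmented structure."""
        mask = image > threshold  # segmentation: pixels belonging to the structure
        # A pixel is interior if all four of its neighbors are also inside the mask.
        interior = (mask
                    & np.roll(mask, 1, axis=0) & np.roll(mask, -1, axis=0)
                    & np.roll(mask, 1, axis=1) & np.roll(mask, -1, axis=1))
        return mask & ~interior  # boundary = inside the structure but not interior

    scan = np.zeros((8, 8))
    scan[2:6, 2:6] = 1.0  # toy stand-in for an anatomic structure
    print(delineate_boundary(scan, 0.5).astype(int))  # 1s trace the structure's boundary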
Regarding claim 98
The system of claim 97, wherein when a user selects said link in said medical report, said computer program causes said processor to display said delineated boundary of said anatomic structure in said medical image.
Lyman et al. teaches:
“…Conversely, in response to receiving the medical report from the report database 2625, the de-identification system can extract the linking identifier from a header, metadata, and/or text body of the medical report, and can query the medical picture archive system 2620 for the corresponding medical scan by indicating the linking identifier in the query. In some embodiments, a mapping of de-identified medical scans to original medical scans, and/or a mapping of de-identified medical reports to original medical reports can be stored in memory 2806. In some embodiments, linking identifiers such as patient ID numbers can be utilized to fetch additional medical scans, additional medical reports, or other longitudinal data corresponding to the same patient.” [0239]
The combined references teach a link and a boundary. They do not teach selecting the link.
Bronkalla et al., also in the field of linking medical reports and images, teaches:
Selecting a hyperlink causes the corresponding image to be displayed…
“… When the inserted hyperlink is selected, the corresponding image including the matching image feature can be displayed. A similar hyperlink can be inserted (visibly or invisibly overlaid) on the image, so that a user can select the hyperlink while viewing the image to view the corresponding report and, in particular, the corresponding portion, finding, or statement included in the report or additionally to the “prior” comparison studies with or without the item of interest such as an implanted device or lesion. In some embodiments, link insertion may be performed by the viewing system using an already distributed report, which allows the report links to be relative within the viewing system.” [0003]
It would have been obvious to one of ordinary skill in the art before the effective filing date to include in the method and system of the combined references the ability to select a link as taught by Bronkalla et al. since the claimed invention is merely a combination of old elements and in the combination each element merely would have performed the same function as it did separately, and one of ordinary skill in the art would have recognized that the results of the combination were predictable. Further motivation is provided by Bronkalla et al. who teaches the advantages of selecting a link to view an image. Lyman benefits because it likewise needs to view an image referenced in a report.
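As a further illustration only, a minimal sketch (hypothetical names throughout, not drawn from any cited reference) of the claimed behavior in which selecting a report link causes display of the stored delineation:

    from dataclasses import dataclass

    @dataclass
    class ReportLink:
        structure: str  # the anatomic structure named in the report
        image_id: str   # identifier of the linked medical image

    # Toy store of previously delineated boundaries, keyed by image and structure.
    boundaries = {("img-001", "liver"): [(2, 2), (2, 3), (3, 2)]}

    def on_link_selected(link: ReportLink) -> None:
        # Look up the stored delineation and hand it to a (stubbed) display routine.
        pixels = boundaries[(link.image_id, link.structure)]
        print(f"Displaying {link.structure} boundary on {link.image_id}: {pixels}")

    on_link_selected(ReportLink(structure="liver", image_id="img-001"))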
Regarding claim 99
The system of claim 97, wherein said anatomic structure comprises an organ.
Lyman et al. teaches:
The brain (an organ) identified in a head CT scan; the brain is therefore an anatomic structure comprising an organ…
“In some embodiments, the DICOM header is modified based on the annotation data generated in performing the inference function. In particular, a DICOM priority header field can be generated and/or modified automatically based on the severity and/or time-sensitivity of the abnormalities detected in performing the inference function. For example, a DICOM priority header field can be changed from a low priority to a high priority in response to annotation data indicating a brain bleed in the de-identified medical scan of a DICOM image corresponding to a head CT scan, and a new DICOM header that includes the high priority DICOM priority header field can be sent back to the medical picture archive system 2620 to replace or otherwise be mapped to the original DICOM image of the head CT scan.” [0164]
Regarding claim 100
The system of claim 97, wherein said anatomic structure comprises a vertebra.
The combined references teach an anatomic structure. They do not teach a vertebra.
Bronkalla et al., also in the field of anatomic structures, teaches:
Vertebra…
“… For example, the feature extraction unit 116 may be configured to process an image and determine anatomical localization information using one or more anatomical atlases or from a different localization or recognition approach, such as, for example, a neural network recognition approach. For example, in a chest computed tomography (“CT”) image, anatomical regions, such as the ascending aorta, left ventricle, T3 vertebra, and other regions, can be identified and labeled, such as by using multi-atlas segmentation. Each atlas includes a medical image obtained with a known imaging modality and using a particular imaging procedure and technique wherein each pixel in the medical image is labeled as a particular anatomical structure. Accordingly, an atlas can be used to label anatomical structures in an image.” [0024]
It would have been obvious to one of ordinary skill in the art before the effective filing date to include in the method and system of the combined references the ability to have an anatomic structure comprising a vertebra as taught by Bronkalla et al. since the claimed invention is merely a combination of old elements and in the combination each element merely would have performed the same function as it did separately, and one of ordinary skill in the art would have recognized that the results of the combination were predictable. Further motivation is provided by Bronkalla et al. who teaches the advantages of identifying features such as vertebrae in imaging.
Regarding claim 101
The system of claim 96, wherein said image segmentation algorithm detects a boundary of a feature of said anatomic structure and delineates said boundary of said feature.
Lyman et al. teaches:
A scan (image) with anatomical region data (boundary) indicating (delineating) the chest, head, etc.…
“Scan classifier data 420 can indicate classifying data of the medical scan. Scan classifier data can include scan type data 421, for example, indicating the modality of the scan. The scan classifier data can indicate that the scan is a CT scan, x-ray, MRI, PET scan, Ultrasound, EEG, mammogram, or other type of scan. Scan classifier data 420 can also include anatomical region data 422, indicating for example, the scan is a scan of the chest, head, right knee, or other anatomical region…” [0063]
The combined references teach a medical image. They do not teach a segmentation algorithm that detects the boundary of a feature.
Bronkalla et al., also in the field of medical imaging, teaches:
An image segmentation process (algorithm) that locates (detects) boundaries in images…
“Similarly, the image study source 104 includes (stored within the memory module 115b) a feature extraction unit 116. In one embodiment, the feature extraction unit 116 is a software program including instructions identifying features in an image, such as anatomical structures or views, such as Parasternal Long Axis that contains several different anatomical structures. In some embodiments, the feature extraction unit 116 identifies image feature using image segmentation. Image segmentation is a process of partitioning a digital image into multiple segments (sets of pixels) to locate objects and boundaries in images. For example, the feature extraction unit 116 may be configured to process an image and determine anatomical localization information using one or more anatomical atlases or from a different localization or recognition approach, such as, for example, a neural network recognition approach. For example, in a chest computed tomography (“CT”) image, anatomical regions, such as the ascending aorta, left ventricle, T3 vertebra, and other regions, can be identified and labeled, such as by using multi-atlas segmentation. Each atlas includes a medical image obtained with a known imaging modality and using a particular imaging procedure and technique wherein each pixel in the medical image is labeled as a particular anatomical structure. Accordingly, an atlas can be used to label anatomical structures in an image.” [0024]
It would have been obvious to one of ordinary skill in the art before the effective filing date to include in the method and system of the combined references the ability to segment an image as taught by Bronkalla et al. since the claimed invention is merely a combination of old elements and in the combination each element merely would have performed the same function as it did separately, and one of ordinary skill in the art would have recognized that the results of the combination were predictable. Further motivation is provided by Bronkalla et al. who teaches the advantages of segmenting images to detect boundaries.
Regarding claim 102
The system of claim 101, wherein when said user selects said feature in said medical image, said computer program causes said processor to display a field of said medical report corresponding to said feature.
Lyman et al. teaches:
“FIG. 15P presents a view of a chest x-ray presented via the interface before a user identifies regions of interest, and FIG. 15Q presents a view of the chest x-ray via the interface after the user identifies regions of interest of multiple abnormalities, indicated by seven polygons 1022….” [0430]
The combined references teach a link. They do not teach a report field corresponding to a feature.
Bronkalla et al., also in the field of linking medical reports and images, teaches:
An example of selecting a hyperlink on an image to view the corresponding portion (field) of the report…
“… When the inserted hyperlink is selected, the corresponding image including the matching image feature can be displayed. A similar hyperlink can be inserted (visibly or invisibly overlaid) on the image, so that a user can select the hyperlink while viewing the image to view the corresponding report and, in particular, the corresponding portion, finding, or statement included in the report or additionally to the “prior” comparison studies with or without the item of interest such as an implanted device or lesion. In some embodiments, link insertion may be performed by the viewing system using an already distributed report, which allows the report links to be relative within the viewing system.” [0003]
It would have been obvious to one of ordinary skill in the art before the effective filing date to include in the method and system of the combined references the ability to display a field as taught by Bronkalla et al. since the claimed invention is merely a combination of old elements and in the combination each element merely would have performed the same function as it did separately, and one of ordinary skill in the art would have recognized that the results of the combination were predictable. Further motivation is provided by Bronkalla et al. who teaches the advantages of a field for providing information.
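For illustration of the reverse mapping recited in claim 102 (selecting a feature in the image displays the corresponding report field), a toy sketch with invented report text and names:

    # Toy mapping from image features to the report fields that describe them.
    report_fields = {
        "liver": "FINDINGS/LIVER: No focal lesion identified.",
        "T3 vertebra": "FINDINGS/SPINE: Mild degenerative change at T3.",
    }

    def on_feature_selected(feature: str) -> str:
        # Retrieve the report field corresponding to the selected feature.
        return report_fields.get(feature, "No corresponding report field.")

    print(on_feature_selected("T3 vertebra"))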
Regarding claim 103
The system of claim 89, wherein said medical image is tagged with said link such that selection of said link retrieves said medical report for viewing.
The combined references teach a link. They do not teach tagging the image with the link such that selection of the link retrieves the report.
Bronkalla et al., also in the field of linking medical reports and images, teaches:
An example of a hyperlink (tag) inserted on an image that the user can select…
“… When the inserted hyperlink is selected, the corresponding image including the matching image feature can be displayed. A similar hyperlink can be inserted (visibly or invisibly overlaid) on the image, so that a user can select the hyperlink while viewing the image to view the corresponding report and, in particular, the corresponding portion, finding, or statement included in the report or additionally to the “prior” comparison studies with or without the item of interest such as an implanted device or lesion. In some embodiments, link insertion may be performed by the viewing system using an already distributed report, which allows the report links to be relative within the viewing system.” [0003]
It would have been obvious to one of ordinary skill in the art before the effective filing date to include in the method and system of the combined references the ability to tag an image with a selectable link as taught by Bronkalla et al. since the claimed invention is merely a combination of old elements and in the combination each element merely would have performed the same function as it did separately, and one of ordinary skill in the art would have recognized that the results of the combination were predictable. Further motivation is provided by Bronkalla et al. who teaches the advantages of selecting a hyperlink to view an image. Lyman benefits because it likewise uses links to access images.
Regarding claim 104
The system of claim 89, wherein when a user selects said anatomic structure in said medical image, said computer program causes said processor to display a field of said medical report corresponding to said anatomic structure.
The combined references teach a link. They do not teach a report field corresponding to an anatomic structure.
Bronkalla et al., also in the field of linking medical reports and images, teaches:
An example of selecting a hyperlink on an image to view the corresponding portion (field) of the report…
“… When the inserted hyperlink is selected, the corresponding image including the matching image feature can be displayed. A similar hyperlink can be inserted (visibly or invisibly overlaid) on the image, so that a user can select the hyperlink while viewing the image to view the corresponding report and, in particular, the corresponding portion, finding, or statement included in the report or additionally to the “prior” comparison studies with or without the item of interest such as an implanted device or lesion. In some embodiments, link insertion may be performed by the viewing system using an already distributed report, which allows the report links to be relative within the viewing system.” [0003]
It would have been obvious to one of ordinary skill in the art before the effective filing date to include in the method and system of the combined references the ability to display a field as taught by Bronkalla et al. since the claimed invention is merely a combination of old elements and in the combination each element merely would have performed the same function as it did separately, and one of ordinary skill in the art would have recognized that the results of the combination were predictable. Further motivation is provided by Bronkalla et al. who teaches the advantages of a field for viewing an image and the benefit of providing additional information.
Regarding claim 107
The system of claim 89, wherein said computer program causes said processor to determine a position of a user-controlled indicator at a coordinate of said medical image, and generate said link corresponding to said coordinate of said medical image.
The combined references teach an image and a link. They do not teach generating a link corresponding to a coordinate of the image.
Bronkalla et al., also in the field of medical images and links, teaches:
An example of inserting (generating) a data link tied to the coordinates of the liver in an image (medical image)…
“Alternatively, in response to the first plurality of image features and the second plurality of image features including a matching feature (“Yes” at block 220), a data link is created between the medical image report and at least one image (at block 225) and is inserted into the medical image report (at block 230). The data link may be inserted at a location of the matching image feature, such as a matching medical structure. For example, if both the report and an image in the associated image study reference a “liver,” a data link to an image and the structure or coordinates illustrating a liver may be inserted into the report, such as in a subsection of the report relating to the liver. The insertion of these data links may be especially useful to a lay person or referring physician who may not be adept at discerning the various structures especially in cross-sectional anatomy, such as, for example, CT or MR. For example, in the above example of a “liver,” the right image or image view may be selected but optionally a highlight of the liver, such as an outline around the liver (displayed statically or momentary, such as displayed for three seconds and then disappeared). In some embodiments, the anatomy may also be shown in multiple views whether these views were present in the original study. For example, with reference to L3 a user may be interested in viewing both the axial view and the sagittal view simultaneously.” [0035]
It would have been obvious to one of ordinary skill in the art before the effective filing date to include in the method and system of the combined references the ability to use coordinates with an image as taught by Bronkalla et al. since the claimed invention is merely a combination of old elements and in the combination each element merely would have performed the same function as it did separately, and one of ordinary skill in the art would have recognized that the results of the combination were predictable. Further motivation is provided by Bronkalla et al. who teaches the advantages of using coordinates with images.
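As a purely hypothetical sketch of what claim 107 recites (determining the position of a user-controlled indicator at an image coordinate and generating a link corresponding to that coordinate); the viewer:// scheme and every name below are invented for illustration:

    from dataclasses import dataclass

    @dataclass(frozen=True)
    class CoordinateLink:
        image_id: str
        x: int
        y: int

        def href(self) -> str:
            # Encode the image identifier and coordinate into a link target.
            return f"viewer://{self.image_id}?x={self.x}&y={self.y}"

    def generate_link(image_id: str, cursor_pos: tuple) -> CoordinateLink:
        x, y = cursor_pos  # position reported by the GUI toolkit for the indicator
        return CoordinateLink(image_id, x, y)

    link = generate_link("img-001", (120, 84))
    print(link.href())  # viewer://img-001?x=120&y=84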
Regarding claim 108
The system of claim 89, wherein said medical report comprises one or more sentences or phrases describing or assessing said anatomic structure.
The combined references teach medical images. They do not teach a sentence or phrase describing or assessing an anatomic structure.
Bronkalla et al., also in the field of medical images, teaches:
A phrase used to display views of the brain (assessing an anatomic structure)…
“…For example, the rules may define which images or portions of images (or which portions of the medical image report) are more or less relevant to an identified image feature. For example, when a medical image report contains the phrase “tumor shown in sagittal view of brain,” the rules may specify that sagittal views of the brain may be displayed before other views of the brain. The rules may be set by a user or may be automatically learned using one or more machine learning techniques. For example, if a reading physician frequently uses a certain view of an organ when reviewing medical image reports, a rule may be generated that prioritizes this view when creating a data link.” [0042]
It would have been obvious to one of ordinary skill in the art before the effective filing date to include in the method and system of the combined references the ability to use phrases as taught by Bronkalla et al. since the claimed invention is merely a combination of old elements and in the combination each element merely would have performed the same function as it did separately, and one of ordinary skill in the art would have recognized that the results of the combination were predictable. Further motivation is provided by Bronkalla et al. who teaches the advantages of phrases for viewing images.
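Solely to illustrate the rule-based relevance Bronkalla describes in [0042] (a view named in a report phrase is shown before other views), a toy sketch with invented data:

    def prioritize_views(report_text: str, views: list) -> list:
        # Views mentioned in the report text sort ahead of views that are not.
        return sorted(views, key=lambda v: v.lower() not in report_text.lower())

    text = "Tumor shown in sagittal view of brain."
    print(prioritize_views(text, ["axial view", "sagittal view", "coronal view"]))
    # -> ['sagittal view', 'axial view', 'coronal view']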
Regarding claim 109
The system of claim 108, wherein said computer program causes said processor to generate said one or more sentences or phrases describing or assessing said anatomic structure.
The combined references teach medical images. They do not teach generating sentences or phrases.
Bronkalla et al., also in the field of medical images, teaches:
An NLP unit (computer program) processes text in image reports to identify keywords for medical features such as organs, etc. (anatomic structures), generating phrases…
“As illustrated in FIG. 1, the report source 102 includes (stored within the memory module 113b) a natural language processing (“NLP”) unit 114. In one embodiment, the NLP unit 114 is a software program including instructions for processing natural language (textual language) included in a medical image report. For example, the NLP unit 114 processes text included in medical image reports to identify particular keywords, including, for example, image features such as medical features (for example, organs, organ function or other descriptions of pathology, medical devices or implants, and the like) and user artifacts (for example, measurements or observations). The extracted features may be mapped to a general ontology such as, for example, UMLS (Unified Medical Language System—Metathesaurus by US National Library of Medicine), SNOMED-CT (Systematized Nomenclature of Medicine-Clinical Terms), as well as common hierarchies and synonyms for various features of the extracted features. For example, a hip implant may be referred to as the hip prosthesis with sub-components, such as acetabular cup, femoral head or ball, and stem and any of these subcomponents could be key words for the detected image components.” [0023]
It would have been obvious to one of ordinary skill in the art before the effective filing date to include in the method and system of the combined references the ability to generate phrases as taught by Bronkalla et al. since the claimed invention is merely a combination of old elements and in the combination each element merely would have performed the same function as it did separately, and one of ordinary skill in the art would have recognized that the results of the combination were predictable. Further motivation is provided by Bronkalla et al. who teaches the advantages of phrases for describing structures.
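As an illustrative sketch of the kind of keyword extraction and ontology mapping Bronkalla describes in [0023]; the ontology entries and identifiers below are invented placeholders, not actual UMLS or SNOMED-CT codes:

    # Toy term-to-concept table standing in for a medical ontology.
    TOY_ONTOLOGY = {
        "liver": "ONT:0001",
        "hip prosthesis": "ONT:0002",
        "acetabular cup": "ONT:0003",
    }

    def extract_concepts(report_text: str) -> dict:
        # Identify ontology terms appearing in the report and return their identifiers.
        lowered = report_text.lower()
        return {term: cid for term, cid in TOY_ONTOLOGY.items() if term in lowered}

    print(extract_concepts("Hip prosthesis in place; liver unremarkable."))
    # -> {'liver': 'ONT:0001', 'hip prosthesis': 'ONT:0002'}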
Claim 106 is rejected under 35 U.S.C. 103 as being unpatentable over the combined references in section (6) above, and further in view of Pub. No. US 2019/0139642 to Roberge et al.
Regarding claim 106
The system of claim 89, further comprising an eye-tracking component coupled to said processor and configured to track a position or movement of an eye of a user viewing said image.
The combined references teach medical images. They do not teach an eye-tracking component.
Roberge et al., also in the field of medical images, teaches:
Gaze data and movement recorded by an eye-tracking device…
“FIG. 8 illustrates the use of image and gaze data recorded by an eye tracking device to serve as an input into a matching algorithm to create a report. The process of reviewing images typically means that the reader is looking at them on a display device and, more particularly, focusing on a specific region of an image as one or more conclusions are reached. Eye tracking technology can identify the specific region by collecting gaze data using either a remote or head mounted eye tracking device. Specifically, eye tracking technology can record a user's point of gaze and movement on a 2D screen or in 3D environments based on corneal reflections.” [0111]
It would have been obvious to one of ordinary skill in the art before the effective filing date to include in the method and system of the combined references the ability to use eye-tracking as taught by Roberge et al. since the claimed invention is merely a combination of old elements and in the combination each element merely would have performed the same function as it did separately, and one of ordinary skill in the art would have recognized that the results of the combination were predictable. Further motivation is provided by Roberge et al. who teaches the advantages of eye-tracking for viewing images.
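A toy sketch, in the spirit of Roberge [0111], of resolving recorded gaze coordinates to the image region being viewed; the region table and coordinates are invented:

    # Toy table of named image regions as (x0, y0, x1, y1) screen rectangles.
    regions = {
        "left lung": (0, 0, 250, 400),
        "right lung": (250, 0, 500, 400),
    }

    def region_of_gaze(gaze: tuple):
        # Return the name of the region containing the gaze point, if any.
        gx, gy = gaze
        for name, (x0, y0, x1, y1) in regions.items():
            if x0 <= gx < x1 and y0 <= gy < y1:
                return name
        return None

    print(region_of_gaze((310, 120)))  # -> 'right lung'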
Claims 110 and 111 are rejected under 35 U.S.C. 103 as being unpatentable over the combined references in section (6) above, and further in view of Pub. No. US 2016/0246946 to Haley.
Regarding claim 110
The system of claim 89, wherein, prior to (v), said GUI displays a pull-down menu comprising a word or phrase related to said computer-generated finding that a user can select from to describe said anatomic structure.
The combined references teach an anatomic structure. They do not teach a pull-down menu.
Haley, also in the field of anatomic structures, teaches:
Ontologies with similar meaning or concept…
“Ontologies may provide packaging of knowledge and formal definitions of the knowledge for specific contexts (e.g., orthopedic surgery model). The ontologies provide the definitions at different levels of granularity. Medical ontologies provide information associated with one or more diseases and numerous medically relevant concepts (e.g., laboratory and diagnostic procedures; physiologic, biologic, genetic, molecular functions; organs and body parts; diseases, symptoms, and medical findings). Different relationships between concepts are reflected by the medical ontology. Concepts are organized within hierarchies that provide “IS A” type relationships (e.g., specific types of a disease are organized under more general types of a disease). Related morphologies (e.g., inflammation) and body location are other types of relationships in the medical ontology. Medical ontologies may also contain various terms associated with a medical concept representing the same (or similar) meaning for the concept.”[0015]
Site, body location and morphology relationships are linked…
“Each base model is an ontology. Terms with corresponding concepts and relationships are logically connected. Any ontology and corresponding format may be used. Medical ontologies are provided in a structured format, with different links between different terms. Direct or “IS A” type relationships are linked. Site or body location relationships are linked. Morphology relationships are linked. Other relationships may include a cause, an effect, a symptom, a sign, drugs, tests, or a related disease. For example, diabetes may be shown as related to or connected with heart failure, but is not the same or an “IS A” relation. Diabetes may be related since a person with diabetes is more likely to have heart failure than a person without diabetes.” [0029]
Receiving input of structured text (a word) using a drop-down list (pull-down menu)…
“The processor 20 operates to receive input information from a user. A writer, such as a physician, nurse, patient or other person, inputs an expression into the computer 14. The expression may be free text or selection of structured text (e.g., selecting an input from a drop down list). Using a user input device, such as a keyboard and/or mouse, the processor receives input information. The input information is entered by a user known to the processor 20 or database 18, such as based on log-in. The input information is specific to a patient, such as for a total knee replacement patient. The input information is for a communication, such as for a message from a physician to a patient or a data entry into the medical record for the patient to be viewed at a later time.” [0036] Inherent in the selection of structured text is that the text has been generated.
Anatomical terms…
“Each person has different models of the world based on life experience. The personalized medical model is created to approximate a person's level of knowledge for a relevant context/subject matter (e.g., total knee replacement). By classifying and linking information about a person (patient or provider) to relevant general role-based knowledge models, the processor automatically builds a more personalized model. The mined information for a user is used to create the personalized model or ontology. The user is profiled as having different levels of knowledge for different areas of a given field or subject. For example, a patient may have layman's knowledge for surgery, but greater knowledge regarding knee anatomy based on a previous diagnosis of a knee condition and therapy without having had surgery. The processor 20 uses this profile to link surgical terms to a layman's surgical ontology but knee anatomical terms to a therapist's total knee replacement ontology.” [0052]
“The linked definitions from various base ontologies indicate likely or profiled knowledge and corresponding definitions of terms in a common terminology (ontology). The base ontologies provide concepts and relationships appropriate for each linked definition or term. Where a subset linked from one base ontology includes a same term or definition as a subset linked from another base ontology, a ranking system may be used. For example, rules weight or define which ontology to use for that term. To increase understanding, the least granular or simpler ontology may be used. Alternatively, an education level or identified role may dictate using a specialized ontology instead of a more generalized ontology definition. This may result in more granular definitions appropriate for expertise of the user, allowing for more exact communication.” [0053]
It would have been obvious to one of ordinary skill in the art before the effective filing date to include in the method and system of the combined references the ability to use a drop-down list (menu) as taught by Haley since the claimed invention is merely a combination of old elements and in the combination each element merely would have performed the same function as it did separately, and one of ordinary skill in the art would have recognized that the results of the combination were predictable. Further motivation is provided by Haley who teaches the advantages of using a drop-down list (menu) for inputting information.
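To illustrate structured-text input by selection from a list, as Haley describes in [0036], a console stand-in for a GUI pull-down menu (all descriptors invented):

    def select_descriptor(options: list, choice_index: int) -> str:
        # Print the options as a stand-in for rendering a pull-down menu,
        # then return the user's selection.
        for i, opt in enumerate(options):
            print(f"[{i}] {opt}")
        return options[choice_index]

    menu = ["unremarkable liver", "hepatic steatosis", "focal liver lesion"]
    print("Selected:", select_descriptor(menu, 1))  # Selected: hepatic steatosis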
Regarding claim 111
The system of claim 110, wherein said ontology is used to generate said word or phrase in said pull-down menu.
The combined references teach an anatomic structure. They do not teach an ontology used to generate the word or phrase.
Haley, also in the field of anatomic structures, teaches:
Ontologies with similar meaning or concept…
“Ontologies may provide packaging of knowledge and formal definitions of the knowledge for specific contexts (e.g., orthopedic surgery model). The ontologies provide the definitions at different levels of granularity. Medical ontologies provide information associated with one or more diseases and numerous medically relevant concepts (e.g., laboratory and diagnostic procedures; physiologic, biologic, genetic, molecular functions; organs and body parts; diseases, symptoms, and medical findings). Different relationships between concepts are reflected by the medical ontology. Concepts are organized within hierarchies that provide “IS A” type relationships (e.g., specific types of a disease are organized under more general types of a disease). Related morphologies (e.g., inflammation) and body location are other types of relationships in the medical ontology. Medical ontologies may also contain various terms associated with a medical concept representing the same (or similar) meaning for the concept.”[0015]
Site, body location and morphology relationships are linked…
“Each base model is an ontology. Terms with corresponding concepts and relationships are logically connected. Any ontology and corresponding format may be used. Medical ontologies are provided in a structured format, with different links between different terms. Direct or “IS A” type relationships are linked. Site or body location relationships are linked. Morphology relationships are linked. Other relationships may include a cause, an effect, a symptom, a sign, drugs, tests, or a related disease. For example, diabetes may be shown as related to or connected with heart failure, but is not the same or an “IS A” relation. Diabetes may be related since a person with diabetes is more likely to have heart failure than a person without diabetes.” [0029]
Receiving input of structured text (a word) using a drop-down list…
“The processor 20 operates to receive input information from a user. A writer, such as a physician, nurse, patient or other person, inputs an expression into the computer 14. The expression may be free text or selection of structured text (e.g., selecting an input from a drop down list). Using a user input device, such as a keyboard and/or mouse, the processor receives input information. The input information is entered by a user known to the processor 20 or database 18, such as based on log-in. The input information is specific to a patient, such as for a total knee replacement patient. The input information is for a communication, such as for a message from a physician to a patient or a data entry into the medical record for the patient to be viewed at a later time.” [0036] Inherent in the selection of structured text is that the text has been generated.
Anatomical terms…
“Each person has different models of the world based on life experience. The personalized medical model is created to approximate a person's level of knowledge for a relevant context/subject matter (e.g., total knee replacement). By classifying and linking information about a person (patient or provider) to relevant general role-based knowledge models, the processor automatically builds a more personalized model. The mined information for a user is used to create the personalized model or ontology. The user is profiled as having different levels of knowledge for different areas of a given field or subject. For example, a patient may have layman's knowledge for surgery, but greater knowledge regarding knee anatomy based on a previous diagnosis of a knee condition and therapy without having had surgery. The processor 20 uses this profile to link surgical terms to a layman's surgical ontology but knee anatomical terms to a therapist's total knee replacement ontology.” [0052]
“The linked definitions from various base ontologies indicate likely or profiled knowledge and corresponding definitions of terms in a common terminology (ontology). The base ontologies provide concepts and relationships appropriate for each linked definition or term. Where a subset linked from one base ontology includes a same term or definition as a subset linked from another base ontology, a ranking system may be used. For example, rules weight or define which ontology to use for that term. To increase understanding, the least granular or simpler ontology may be used. Alternatively, an education level or identified role may dictate using a specialized ontology instead of a more generalized ontology definition. This may result in more granular definitions appropriate for expertise of the user, allowing for more exact communication.” [0053]
It would have been obvious to one of ordinary skill in the art before the effective filing date to include in the method and system of the combined references the ability to use a drop-down list (menu) and an ontology as taught by Haley since the claimed invention is merely a combination of old elements and in the combination each element merely would have performed the same function as it did separately, and one of ordinary skill in the art would have recognized that the results of the combination were predictable. Further motivation is provided by Haley who teaches the advantages of using a drop-down list (menu) for inputting information and an ontology for similar terms.
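Finally, a toy sketch of how "IS A" and synonym relations of the kind Haley describes in [0015] and [0029] could generate the words offered in a pull-down menu; the ontology content is invented for illustration:

    # Toy ontology: child term -> parent concept, plus synonym lists.
    IS_A = {"hepatic steatosis": "liver disease", "hepatitis": "liver disease"}
    SYNONYMS = {"hepatic steatosis": ["fatty liver"], "hepatitis": []}

    def menu_terms(concept: str) -> list:
        # Offer every term whose parent concept matches, plus its synonyms.
        terms = []
        for term, parent in IS_A.items():
            if parent == concept:
                terms.append(term)
                terms.extend(SYNONYMS.get(term, []))
        return sorted(terms)

    print(menu_terms("liver disease"))
    # -> ['fatty liver', 'hepatic steatosis', 'hepatitis']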
Conclusion
Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to KENNETH BARTLEY whose telephone number is (571)272-5230. The examiner can normally be reached Mon-Fri: 7:30 - 4:00 EST.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, SHAHID MERCHANT can be reached at (571) 270-1360. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/KENNETH BARTLEY/Primary Examiner, Art Unit 3684