DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Notice to Applicant
This communication is in response to the amendment filed 9/14/25. Claim 1 is canceled. Claims 2, 3, 9, 11, 12, and 16-18 have been amended. Claims 2-21 are pending.
Claim Rejections - 35 USC § 101
35 U.S.C. 101 reads as follows:
Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.
Claims 2-21 are rejected under 35 U.S.C. §101 because the claimed invention is directed to an abstract idea without significantly more.
Step 1:
Claims 2-21 are directed to a method (i.e., a process). Accordingly, claims 2-21 are all within at least one of the four statutory categories.
Step 2A - Prong One:
Regarding Prong One of Step 2A, the claim limitations are to be analyzed to determine whether, under their broadest reasonable interpretation, they “recite” a judicial exception or in other words whether a judicial exception is “set forth” or “described” in the claims. An “abstract idea” judicial exception is subject matter that falls within at least one of the following groupings: a) certain methods of organizing human activity, b) mental processes, and/or c) mathematical concepts.
Independent claim 2 includes limitations that recite at least one abstract idea. Specifically, independent claim 2 recites:
2. A method, comprising:
receiving, at a computing system, a set of radiology images;
using a set of models, determining a set of findings associated with the set of radiology images;
determining a set of annotations based on the set of findings;
displaying, at a display of Picture Archiving and Communication System (PACS) viewer of a radiology workstation, the set of radiology images;
in response to an action comprising hovering a cursor proximal to a location in the PACS viewer, on a radiology image of the set of radiology images, that is associated with an annotation of the set of annotations, displaying, at the PACS viewer of the radiology workstation, the annotation for a duration of time;
transmitting the set of findings to a platform comprising a voice recognition system;
transforming the set of findings into text;
integrating the text into a radiologist report; and
within the platform, automatically filling in a section of the radiologist report with data associated with the set of findings.
Independent claim 12 includes limitations that recite at least one abstract idea. Specifically, independent claim 12 recites:
12. A method, comprising:
receiving, at a computing system, a set of radiology images;
with a set of models, determining a set of findings associated with the set of radiology images;
determining a set of annotations based on the set of findings;
displaying, at a display of a radiology workstation, the set of radiology images;
in response to detecting a first action from a user, the first action associated with an annotation of the set of annotations, displaying the annotation at the display of the radiology workstation for a duration of time;
in response to detecting a second action from the user while the annotation of the set of annotations is displayed, increasing the duration of time;
transmitting the set of findings to a platform comprising a voice recognition system;
transforming the set of findings into text;
integrating the text into a radiologist report in conjunction with information received by dictation using the voice recognition system, with automatic insertion of the text into the radiologist report initiated with a hotkey; and
within the platform, automatically filling in a section of the radiologist report with data associated with the set of findings.
The Examiner submits that the foregoing limitations constitute “certain methods of organizing human activity” because receiving a set of radiology images; using a set of models, determining a set of findings associated with the set of radiology images; determining a set of annotations based on the set of findings; displaying the set of radiology images; displaying the annotation for a duration of time; with a set of models, determining a set of findings associated with the set of radiology images; in response to detecting a first action from a user, the first action associated with an annotation of the set of annotations, displaying the annotation for a duration of time; in response to detecting a second action from the user while the annotation of the set of annotations is displayed, increasing the duration of time; transforming the set of findings into text; integrating the text into a radiologist report in conjunction with information received by dictation with insertion of the text into the radiologist report; and filling in a section of the radiologist report with data associated with the set of findings amount to managing personal behavior or relationships or interactions between people (including social activities, teaching, and following rules or instructions), at the currently claimed high level of generality.
Accordingly, the claim recites at least one abstract idea.
Step 2A - Prong Two:
Regarding Prong Two of Step 2A, it must be determined whether the claim as a whole integrates the abstract idea into a practical application. It must be determined whether any additional elements in the claim beyond the abstract idea integrate the exception into a practical application in a manner that imposes a meaningful limit on the judicial exception. The courts have indicated that additional elements merely using a computer to implement an abstract idea, adding insignificant extra solution activity, or generally linking use of a judicial exception to a particular technological environment or field of use do not integrate a judicial exception into a “practical application.”
The limitations of claims 2 and 12, as drafted, recite a process that, under its broadest reasonable interpretation, covers certain methods of organizing human activity but for the recitation of generic computer components. That is, other than reciting a computing system, a workstation, a PACS viewer, a platform, a voice recognition system, and a display to perform the limitations, nothing in the claim elements precludes the steps from practically being certain methods of organizing human activity. If a claim limitation, under its broadest reasonable interpretation, covers certain methods of organizing human activity but for the recitation of generic computer components, then it falls within the “certain methods of organizing human activity” grouping of abstract ideas. Accordingly, the claims recite an abstract idea.
This judicial exception is not integrated into a practical application. In particular, the computing system, workstation, PACS viewer, platform, voice recognition system, and display are recited at a high level of generality (i.e., as generic computer components performing generic computer functions of receiving data, analyzing/determining data, displaying data, detecting actions, transmitting data, transforming data, integrating data, using a hotkey, and inserting data) such that they amount to no more than mere instructions to apply the exception using generic computer components. Accordingly, these additional elements do not integrate the abstract idea into a practical application because they do not impose any meaningful limits on practicing the abstract idea. The claims are directed to an abstract idea.
Thus, taken alone, the additional elements do not amount to significantly more than the above-identified judicial exception (the abstract idea). Looking at the limitations as an ordered combination adds nothing that is not already present when looking at the elements taken individually. For instance, there is no indication that the additional elements, when considered as a whole, reflect an improvement in the functioning of a computer or an improvement to another technology or technical field, apply or use the above-noted judicial exception to effect a particular treatment or prophylaxis for a disease or medical condition, implement/use the above-noted judicial exception with a particular machine or manufacture that is integral to the claim, effect a transformation or reduction of a particular article to a different state or thing, or apply or use the judicial exception in some other meaningful way beyond generally linking the use of the judicial exception to a particular technological environment, such that the claim as a whole is not more than a drafting effort designed to monopolize the exception (see MPEP § 2106.05). Their collective functions merely provide conventional computer implementation.
Claims 3-11 and 13-21 are ultimately dependent from claims 2 and 12, respectively, and include all the limitations of claims 2 and 12. Therefore, claims 3-11 and 13-21 recite the same abstract idea. Claims 3-11 and 13-21 describe further limitations regarding types of images, generating a predictive measurement, wherein the set of models comprises a set of deep learning models, selecting the set of models based on an anatomical feature, wherein the set of findings correspond to types, determining findings/annotations, displaying annotations, a default duration of annotation display, displaying a second set of radiology images associated with the patient, wherein the second set of radiology images is recorded prior to the set of radiology images, wherein the set of radiology images comprises Digital Imaging and Communications in Medicine (DICOM) images, requiring more actions to display a normal finding as compared to an abnormal finding, hovering a cursor proximal to a voxel of a radiology image in the set of radiology images, producing a set of overlays, adjusting the set of overlays, toggling off display of an annotation, and a patient privacy program. These all merely further describe the abstract idea recited in claims 2 and 12, without adding significantly more.
The claims do not include additional elements that are sufficient to amount to significantly more than the judicial exception. As discussed above with respect to integration of the abstract idea into a practical application, the additional elements amount to no more than mere instructions to apply the exception using generic computer components. Mere instructions to apply an exception using a generic computer component cannot provide an inventive concept. The claims are not patent eligible.
Step 2B:
Regarding Step 2B, independent claims 2 and 12 do not include additional elements (considered both individually and as an ordered combination) that are sufficient to amount to significantly more than the judicial exception for reasons the same as those discussed above with respect to determining that the claim does not integrate the abstract idea into a practical application.
Regarding the additional limitations directed to detecting actions and a computing system receiving images and displaying the images at a workstation, and transmitting findings to a platform, all of which the Examiner submits merely add insignificant extra-solution activity to the abstract idea or are claimed in a merely generic manner (e.g., at a high level of generality), the Examiner further submits that such steps are not unconventional as they merely consist of actions similar to a Web browser’s back and forward button functionality and receiving and transmitting data over a network and/or storing and retrieving information in memory. See MPEP 2106.05(d)(II).
The dependent claims do not include additional elements (considered both individually and as an ordered combination) that are sufficient to amount to significantly more than the judicial exception for the same reasons as those discussed above with respect to determining that the dependent claims do not integrate the at least one abstract idea into a practical application.
Therefore, claims 2-21 are ineligible under 35 USC §101.
Claim Rejections - 35 USC § 112
The following is a quotation of 35 U.S.C. 112(b):
(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.
The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:
The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.
Claim 16 is rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or for applications subject to pre-AIA 35 U.S.C. 112, the applicant), regards as the invention.
Claim 16 recites the limitation "the PACS viewer" in line 3. There is insufficient antecedent basis for this limitation in the claim.
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 2-11 are rejected under 35 U.S.C. 103 as being unpatentable over Reicher et al. (US 2016/0364862 A1) in view of Reicher et al. (US 10,127,662 B1, hereinafter “Reicher 2”), and further in view of Schreiber et al. (US 2010/0114597 A1).
(A) Referring to claim 2, Reicher discloses A method, comprising (abstract of Reicher):
receiving, at a computing system, a set of radiology images (see Fig. 5, para. 44 & 69 of Reicher; the method 500 includes receiving, with the learning engine 110, an image for analysis (at block 506). The learning engine 110 may receive the image for analysis from one or more of the data sources 112. The memory 114 of each data source 112 may store medical data, such as medical images (i.e., clinical images) and associated data (e.g., reports, metadata, and the like). For example, the data sources 112 may include a picture archiving and communication system (“PACS”), a radiology information system (“RIS”), an electronic medical record (“EMR”), a hospital information system (“HIS”), an image study ordering system, and the like.);
using a set of models, determining a set of findings associated with the set of radiology images (para. 69 & 70 of Reicher; The learning engine 110 may use the supplemental data to determine what models should be used to process the image. For example, the learning engine 110 may process images associated with detecting breast cancer using different models than images associated with detecting fractures. The method 500 also includes automatically processing, with the learning engine 110, the received image using the model to generate a diagnosis (i.e., a result) for the image (at block 508). The learning engine 110 may output or provide a diagnosis in various forms.);
determining a set of annotations based on the set of findings (para. 71, 96, & 81 of Reicher; the diagnosis generated by the learning engine 110 includes both an annotated image and corresponding diagnostic information. The workstation may be configured to present radiologists with annotated images generated by the learning engine 110 in a format that allows the radiologist to edit the images, including the annotations, within a viewer (e.g., through a medical image annotation tool). In some embodiments, edits made by the radiologist to the annotated images are fed back to the learning engine 110, which uses the edits to improve the developed models.);
displaying, at a display of Picture Archiving and Communication System (PACS) viewer of a radiology workstation, the set of radiology images (para. 46, 48, 71, and 75 of Reicher; the workstation may be configured to present radiologists with annotated images generated by the learning engine 110 in a format that allows the radiologist to edit the images, including the annotations, within a viewer (e.g., through a medical image annotation tool). The diagnostic information associated with the graphical marker may be structured or unstructured and may be in the form of text (generated manually by a diagnosing physician, automatically by a computer system, or a combination thereof), audio, video, images, and the like. The graphical marker may be created manually by a diagnosing physician, such as a radiologist, a cardiologist, a physician's assistant, a technologist, and the like or automatically by a computer system, such as a PACS. Similarly, the associated diagnostic information may be created manually by a diagnosing physician, such as a radiologist, a cardiologist, a physician's assistant, a technologist, and the like or automatically by a computer system, such as a PACS.).
Reicher does not disclose in response to an action comprising hovering a cursor proximal to a location in the PACS viewer, on a radiology image of the set of radiology images, that is associated with an annotation of the set of annotations, displaying, at the PACS viewer of the radiology workstation, the annotation for a duration of time; transmitting the set of findings to a platform comprising a voice recognition system; transforming the set of findings into text; integrating the text into a radiologist report; and within the platform, automatically filling in a section of the radiologist report with data associated with the set of findings.
Reicher 2 discloses in response to an action comprising hovering a cursor proximal to a location in the PACS viewer, on a radiology image of the set of radiology images, that is associated with an annotation of the set of annotations, displaying, at the PACS viewer of the radiology workstation, the annotation for a duration of time; (Fig. 7, col. 17, lines 5-27, col. 27, lines 27-36, col. 29, line 60 – col. 30, line 22, col. 30, line 61 – col. 31, line 13, and col. 34, lines 3-30 of Reicher 2; the user may hover over the “m” (with, e.g., a mouse cursor as shown) to cause the system to provide details of the measurement, as shown in image 722, and then indicate whether or not the annotation (in this example, the measurement) should be applied to the new image such as by double clicking, right clicking to see options, and/or the like. Another option that may be available to the user is to adjust a bi-linear measurement tool from a previous measurement to match the current area of interest, such as a tumor that has changed in size from a previous exam. Additionally, the user may add additional annotations to the image. In some embodiments, after selection of an indicated annotation, the user may optionally modify the selected annotation (e.g., by changing a measurement). The user may hover a mouse cursor over the “*” and preview the annotation that may be transferred to the matching image 920. If the user clicks on the “*”, it is replaced with the arrow in this example. A similar process can apply to measurements and other types of annotations. Referring to FIG. 12B, the user has provided a user input to select the “M” indicator 1210 (e.g., the user may position a cursor over the indicator and/or may touch the indicator via a touch screen user interface). In response, the system provides a list of other matched exams, and their associated exam dates, with matching images displayed as buttons 1220, 1222, and 1224.).
Schreiber discloses transmitting the set of findings to a platform comprising a voice recognition system; transforming the set of findings into text; integrating the text into a radiologist report; and within the platform, automatically filling in a section of the radiologist report with data associated with the set of findings (para. 93 & 98 of Schreiber; The MMI 203 optionally allows the user to fill out sections of the reports by means of typing, selecting, and/or dictating. The MMI 203 optionally comprises a microphone that allows the user to dictate his diagnosis and optionally a voice recognition module for translating the intercepted dictation into a text format. Optionally, to initiate dictation, the user selects the section in the report template to which she want to relate. Optionally, the microphone is used for intercepting the dictation and the report generation module 202 is used for storing the intercepted dictation in association with the report, for example with one or more related sections of the report. The report generation module 202 includes a voice recognition module and/or a text analysis module which are used for identifying references to anatomical sites in the inputs of the user. In such an embodiment, the report generation module 202 may select segments of the imagining study according to the identified anatomical sites and add them to the report in association with a respective section in the diagnosis.).
Before the effective filing date of the claimed invention, it would have been obvious to a person of ordinary skill in the art to combine the aforementioned features of Reicher 2 and Schreiber within Reicher. The motivation for doing so would have been to provide options to the user (col. 29, line 60 – col. 30, line 22 of Reicher 2) and so that the reports are adjusted according to the diagnosis (para. 59 of Schreiber).
(B) Referring to claim 3, Reicher discloses wherein the set of radiology images comprises positron emission tomography (PET)/computed tomography (CT) images and mammography images, and wherein determining the set of findings comprises generating, from a predictive algorithm, a predictive measurement of tumor growth (para. 95, 37, 74, 76, 78, 60, and 51 of Reicher).
(C) Referring to claim 4, Reicher discloses wherein the set of models comprises a set of deep learning models (para. 8 of Reicher).
(D) Referring to claim 5, Reicher discloses further comprising selecting the set of models based on an anatomical feature associated with the set of radiology images (para. 69 of Reicher).
(E) Referring to claim 6, Reicher discloses wherein the set of findings corresponds to a first finding type, the method further comprising: using the set of models, determining a second set of findings associated with the set of radiology images, wherein the second set of findings corresponds to a second finding type (para. 8, 37, & 74 of Reicher); determining a second set of annotations based on the second set of findings (para. 79-81 & 96 of Reicher); and displaying the second set of annotations, wherein each of the second set of annotations is automatically continuously displayed during display of an associated radiology image (para. 137, 98, & 71 of Reicher).
(F) Referring to claim 7, Reicher discloses wherein, for each annotation of the set of annotations and the second set of annotations, a default duration of annotation display across the set of radiology images is determined based on the respective finding type (para. 101 of Reicher).
(G) Referring to claim 8, Reicher discloses wherein the set of radiology images is associated with a patient, further comprising displaying a second set of radiology images associated with the patient, wherein the second set of radiology images is recorded prior to the set of radiology images (para. 57-59 of Reicher).
(H) Referring to claim 9, Reicher discloses contemporaneously with determining the set of findings associated with the set of radiology images, determining a second set of findings associated with the second set of radiology images using the set of models (para. 37, 56, 70 & 71 of Reicher).
(I) Referring to claim 10, Reicher discloses wherein the set of radiology images comprises Digital Imaging and Communications in Medicine (DICOM) images (para. 52 & 61 of Reicher).
(J) Referring to claim 11, Reicher discloses further comprising requiring more actions to display a normal finding as compared to an abnormal finding represented in the set of radiology images at the PACS viewer (para. 74, 75, 96, and 134 of Reicher).
Claims 12 and 14-20 are rejected under 35 U.S.C. 103 as being unpatentable over Reicher et al. (US 2016/0364862 A1) in view of Reicher et al. (US 10,127,662 B1, hereinafter “Reicher 2”), in view of Schreiber et al. (US 2010/0114597 A1), and further in view of Weeks (US 2015/0340036 A1).
(A) Referring to claim 12, Reicher discloses A method, comprising (abstract of Reicher):
receiving, at a computing system, a set of radiology images (see Fig. 5, para. 44 & 69 of Reicher; the method 500 includes receiving, with the learning engine 110, an image for analysis (at block 506). The learning engine 110 may receive the image for analysis from one or more of the data sources 112. The memory 114 of each data source 112 may store medical data, such as medical images (i.e., clinical images) and associated data (e.g., reports, metadata, and the like). For example, the data sources 112 may include a picture archiving and communication system (“PACS”), a radiology information system (“RIS”), an electronic medical record (“EMR”), a hospital information system (“HIS”), an image study ordering system, and the like.);
with a set of models, determining a set of findings associated with the set of radiology images (para. 69 & 70 of Reicher; The learning engine 110 may use the supplemental data to determine what models should be used to process the image. For example, the learning engine 110 may process images associated with detecting breast cancer using different models than images associated with detecting fractures. The method 500 also includes automatically processing, with the learning engine 110, the received image using the model to generate a diagnosis (i.e., a result) for the image (at block 508). The learning engine 110 may output or provide a diagnosis in various forms.);
determining a set of annotations based on the set of findings (para. 71, 96, & 81 of Reicher; the diagnosis generated by the learning engine 110 includes both an annotated image and corresponding diagnostic information. The workstation may be configured to present radiologists with annotated images generated by the learning engine 110 in a format that allows the radiologist to edit the images, including the annotations, within a viewer (e.g., through a medical image annotation tool). In some embodiments, edits made by the radiologist to the annotated images are fed back to the learning engine 110, which uses the edits to improve the developed models.);
displaying, at a display of a radiology workstation, the set of radiology images (para. 71 of Reicher; the workstation may be configured to present radiologists with annotated images generated by the learning engine 110 in a format that allows the radiologist to edit the images, including the annotations, within a viewer (e.g., through a medical image annotation tool).).
Reicher does not expressly disclose in response to detecting a first action from a user, the first action associated with an annotation of the set of annotations, displaying the annotation at the display of the radiology workstation for a duration of time; in response to detecting a second action from the user while the annotation of the set of annotations is displayed, increasing the duration of time; transmitting the set of findings to a platform comprising a voice recognition system; transforming the set of findings into text; integrating the text into a radiologist report in conjunction with information received by dictation using the voice recognition system, with automatic insertion of the text into the radiologist report initiated with a hotkey; and within the platform, automatically filling in a section of the radiologist report with data associated with the set of findings.
Reicher 2 discloses in response to detecting a first action from a user, the first action associated with an annotation of the set of annotations, displaying the annotation at the display of the radiology workstation for a duration of time; and in response to detecting a second action from the user while the annotation of the set of annotations is displayed, increasing the duration of time (Fig. 7, col. 29, line 60 – col. 30, line 22, col. 30, line 61 – col. 31, line 13, and col. 34, lines 3-30 of Reicher 2; the user may hover over the “m” (with, e.g., a mouse cursor as shown) to cause the system to provide details of the measurement, as shown in image 722, and then indicate whether or not the annotation (in this example, the measurement) should be applied to the new image such as by double clicking, right clicking to see options, and/or the like. Another option that may be available to the user is to adjust a bi-linear measurement tool from a previous measurement to match the current area of interest, such as a tumor that has changed in size from a previous exam. Additionally, the user may add additional annotations to the image. In some embodiments, after selection of an indicated annotation, the user may optionally modify the selected annotation (e.g., by changing a measurement).).
Schreiber discloses transmitting the set of findings to a platform comprising a voice recognition system; transforming the set of findings into text; integrating the text into a radiologist report in conjunction with information received by dictation using the voice recognition system, with automatic insertion of the text into the radiologist report; and within the platform, automatically filling in a section of the radiologist report with data associated with the set of findings (para. 93 & 98 of Schreiber; The MMI 203 optionally allows the user to fill out sections of the reports by means of typing, selecting, and/or dictating. The MMI 203 optionally comprises a microphone that allows the user to dictate his diagnosis and optionally a voice recognition module for translating the intercepted dictation into a text format. Optionally, to initiate dictation, the user selects the section in the report template to which she want to relate. Optionally, the microphone is used for intercepting the dictation and the report generation module 202 is used for storing the intercepted dictation in association with the report, for example with one or more related sections of the report. The report generation module 202 includes a voice recognition module and/or a text analysis module which are used for identifying references to anatomical sites in the inputs of the user. In such an embodiment, the report generation module 202 may select segments of the imagining study according to the identified anatomical sites and add them to the report in association with a respective section in the diagnosis.).
Weeks discloses insertion of the text initiated with a hotkey (para. 34, 38, & 40 of Weeks; the user may position a cursor and select a hotkey that inserts text data corresponding to a predetermined segment into a text area).
Before the effective filing date of the claimed invention, it would have been obvious to a person of ordinary skill in the art to combine the aforementioned features of Reicher 2, Schreiber and Weeks within Reicher. The motivation for doing so would have been to provide options to the user (col. 29, line 60 – col. 30, line 22 of Reicher 2), so that the reports are adjusted according to the diagnosis (para. 59 of Schreiber), and to automatically insert text data into the appropriate fields (para. 40 of Weeks).
(B) Referring to claim 14, Reicher discloses further comprising producing a set of overlays associated with the set of radiology images based on the set of annotations, wherein the set of radiology images is received from a Radiology Information System (RIS), wherein the set of overlays are transmitted to the RIS, wherein the RIS transmits the set of overlays to a Picture Archiving and Communication System (PACS) (para. 44, 71, 72, 79, 91, 140, and 46 of Reicher).
(C) Referring to claim 15, Reicher discloses wherein the RIS adjusts the set of overlays (para. 44, 71, & 72 of Reicher).
(D) Referring to claim 16, Reicher discloses further comprising requiring more actions to display a normal finding as compared to an abnormal finding represented in the set of radiology images at the PACS viewer (para. 74, 75, 96, and 134 of Reicher).
(E) Referring to claim 17, Reicher discloses wherein the set of radiology images comprises positron emission tomography (PET)/computed tomography (CT) images and mammography images (para. 95, 37, and 84 of Reicher).
(F) Referring to claim 18, Reicher discloses wherein determining the set of findings comprises generating, from a predictive algorithm, a predictive measurement of tumor growth (para. 37, 74, 76, 78, 60, and 51 of Reicher).
(G) Referring to claim 19, Reicher discloses wherein the set of findings comprises normal findings, the method further comprising: using the set of models, determining a set of abnormal findings associated with the set of radiology images (para. 7 & 8 of Reicher); determining a second set of annotations based on the set of abnormal findings (para. 96 of Reicher); and displaying the second set of annotations, wherein each of the second set of annotations is displayed automatically upon display of an associated radiology image and throughout an entire duration of display of the associated radiology image (para. 71 and 96-98 of Reicher).
(H) Referring to claim 20, Reicher discloses further comprising toggling off display of an annotation of the second set of annotations in response to detecting a third action from the user (para. 155 of Reicher).
Claim(s) 13 is/are rejected under 35 U.S.C. 103 as being unpatentable over Reicher et al. (US 2016/0364862 A1) in view of Reicher et al. (US 10,127,662 B1, hereinafter “Reicher 2”), in view of Schreiber et al. (US 2010/0114597 A1), in view of Weeks (US 2015/0340036 A1), and further in view of Yanagida et al. (US 2015/0261915 A1).
(A) Referring to claim 13, Reicher, Reicher 2, Schreiber, and Weeks do not expressly disclose wherein the first action comprises hovering a cursor proximal to a voxel of a radiology image in the set of radiology images, wherein the voxel is associated with the annotation of the set of annotations, wherein the second action comprises at least one of: a mouse click or a hotkey press.
Yanagida discloses wherein the first action comprises hovering a cursor proximal to a voxel of a radiology image in the set of radiology images, wherein the voxel is associated with the annotation of the set of annotations, wherein the second action comprises at least one of: a mouse click or a hotkey press (para. 162 & 123 of Yanagida).
Before the effective filing date of the claimed invention, it would have been obvious to a person of ordinary skill in the art to combine the aforementioned features of Yanagida within Reicher, Reicher 2, Schreiber, and Weeks. The motivation for doing so would have been to identify an anatomical position (para. 163 of Yanagida).
Claim(s) 21 is/are rejected under 35 U.S.C. 103 as being unpatentable over Reicher et al. (US 2016/0364862 A1) in view of Reicher et al. (US 10,127,662 B1, hereinafter “Reicher 2”), in view of Schreiber et al. (US 2010/0114597 A1), in view of Weeks (US 2015/0340036 A1), and further in view of Rose (US 2007/0078679 A1).
(A) Referring to claim 21, Reicher, Reicher 2, Schreiber, and Weeks do not disclose wherein the radiology images are displayed in compliance with a patient privacy program.
Rose discloses wherein the radiology images are displayed in compliance with a patient privacy program (para. 48 of Rose).
Before the effective filing date of the claimed invention, it would have been obvious to a person of ordinary skill in the art to combine the aforementioned feature of Rose within Reicher, Reicher 2, Schreiber, and Weeks. The motivation for doing so would have been to maintain accurate record-keeping of patient information, as well as to ensure patient privacy (para. 48 of Rose).
Response to Arguments
Applicant’s arguments with respect to claim(s) 2 and 12 have been considered but are moot because the new ground of rejection does not rely on any reference applied in the prior rejection of record for any teaching or matter specifically challenged in the argument. In addition, Applicant's arguments regarding the applied prior art fail to comply with 37 CFR 1.111(b) because they amount to a general allegation that the claims define a patentable invention without specifically pointing out how the language of the claims patentably distinguishes them from the references. See 103 rejections above which show the portions of the Reicher reference that teach the newly added limitations.
Applicant's additional arguments filed 9/14/25 have been fully considered but they are not persuasive. Applicant’s arguments will be addressed hereinbelow in the order in which they appear in the response filed 9/14/25.
(1) Applicant respectfully submits that the amended claim language in independent claims 2 and 12 overcomes the alleged 35 U.S.C. § 101 rejection.
(A) As per the first argument, see the 101 rejection above. Applicant’s arguments are not persuasive because the judicial exception is not integrated into a practical application. In particular, the computing system, workstation, PACS viewer, platform, voice recognition system, and display are recited at a high level of generality (i.e., as generic computer components performing generic computer functions of receiving data, analyzing/determining data, displaying data, detecting actions, transmitting data, transforming data, integrating data, using a hotkey, and inserting data), such that the claims amount to no more than mere instructions to apply the exception using generic computer components. Accordingly, these additional elements do not integrate the abstract idea into a practical application because they do not impose any meaningful limits on practicing the abstract idea. The claims are directed to an abstract idea. Regarding the additional limitations directed to detecting actions, a computing system receiving images and displaying the images at a workstation, and transmitting findings to a platform, the Examiner submits that these limitations merely add insignificant extra-solution activity to the abstract idea or are claimed in a merely generic manner (e.g., at a high level of generality). The Examiner further submits that such steps are not unconventional, as they merely consist of actions similar to a Web browser’s back and forward button functionality and to receiving and transmitting data over a network and/or storing and retrieving information in memory. See MPEP 2106.05(d)(II). Applicant’s arguments regarding an improvement are not persuasive. For example, it is unclear how the language of the claims “prevents deterioration conditions captured in radiology images,” as argued.
Conclusion
Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to LENA NAJARIAN whose telephone number is (571)272-7072. The examiner can normally be reached Monday - Friday 9:30 am-6 pm.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Mamon Obeid can be reached at (571)270-1813. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/LENA NAJARIAN/Primary Examiner, Art Unit 3687