Prosecution Insights
Last updated: April 19, 2026
Application No. 18/200,715

MEDICAL IMAGE DISPLAY SYSTEM, MEDICAL IMAGE DISPLAY METHOD, AND RECORDING MEDIUM

Status: Final Rejection (§103)
Filed: May 23, 2023
Examiner: ABDI, AMARA
Art Unit: 2668
Tech Center: 2600 — Communications
Assignee: Panasonic Holdings Corporation
OA Round: 2 (Final)
Grant Probability: 83% (Favorable)
Expected OA Rounds: 3-4
Time to Grant: 2y 7m
Grant Probability with Interview: 76%

Examiner Intelligence

Career Allow Rate: 83% (above average; 677 granted / 816 resolved; +21.0% vs TC avg)
Interview Lift: -7.5% (minimal), across resolved cases with interview
Avg Prosecution: 2y 7m (typical timeline); 33 currently pending
Total Applications: 849 across all art units (career history)

Statute-Specific Performance

§101: 9.8% (-30.2% vs TC avg)
§103: 60.7% (+20.7% vs TC avg)
§102: 10.2% (-29.8% vs TC avg)
§112: 10.0% (-30.0% vs TC avg)

Tech Center averages are estimates; based on career data from 816 resolved cases.
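The headline figures above follow from the raw career counts; here is a quick arithmetic check. Note that the implied Tech Center average is back-derived from the stated +21.0% delta (an inference from the dashboard, not a published figure):

```python
# Sanity check of the dashboard figures from the raw career counts.
granted, resolved = 677, 816

allow_rate = 100 * granted / resolved
print(f"Career allow rate: {allow_rate:.1f}%")    # 83.0%

# Tech Center average back-derived from the stated +21.0% delta
# (an inference, not a published figure).
tc_avg = allow_rate - 21.0
print(f"Implied TC 2600 average: {tc_avg:.1f}%")  # 62.0%

# Grant probability with an interview, per the -7.5% lift.
print(f"With interview: {83 - 7.5:.1f}%")         # 75.5%, shown as 76%
```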

Office Action

§103
Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first-inventor-to-file provisions of the AIA.

Response to Amendment

Applicant's response to the last Office action, filed September 18, 2025, has been entered and made of record. Claim 4 has been amended, and claim 15 has been newly added. Claims 1-15 are pending in this application.

Response to Arguments

Applicant's arguments filed September 18, 2025 have been fully considered, but they are not persuasive.

Applicant asserted on page 7, first paragraph, that Ruppertshofen fails to disclose changing, "in response to an input received for one of (i) the at least one annotation and (ii) the at least one structure label from a user, a display form of another of (i) the at least one annotation and (ii) the at least one structure label," as required by the above-noted features of claim 1. The Examiner respectfully disagrees, because Ruppertshofen clearly discloses in Par. 0035 that the user can update/edit the finding description in the image window 604 or the reporting environment in the report creation/editing window 602 directly, [i.e., changing the display in response to a user input]. Ruppertshofen further discloses, in Fig. 7 and Par. 0049-0050, that report/image linking module 122 further includes a report/image updater 714, which adds the report hyperlink to the report and retrieves the linked image, which can be displayed; the report hyperlink can be invoked via an input through input device(s) 110, and by clicking the hyperlink in the report, the report repository(s) 120 shows the image at the slice corresponding to the annotation, [i.e., implicitly updating or changing the display, in response to the user input, to show an image at the slice corresponding to the annotation implicitly selected by the user by clicking the hyperlink].
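As an illustrative aside, the claim-1 mechanism in dispute, namely an input on one of an annotation and its structure label changing the display form of the other, can be sketched in a few lines of Python. Every class, method, and identifier below is hypothetical, drawn from neither the claims nor the cited references:

```python
# Minimal sketch of the disputed claim-1 behavior: selecting an
# annotation changes the display form of its linked structure label,
# and vice versa. All names here are illustrative only.
class MedicalImageDisplay:
    def __init__(self):
        # annotation id -> linked structure label id
        self.links = {"ann_liver": "lbl_liver"}
        # current display form of each on-screen element
        self.display_form = {"ann_liver": "normal", "lbl_liver": "normal"}

    def on_input(self, element_id, new_form="highlighted"):
        """Change the display form of the *other* linked element."""
        # resolve the counterpart in either direction
        counterpart = self.links.get(element_id) or next(
            (a for a, l in self.links.items() if l == element_id), None)
        if counterpart is not None:
            self.display_form[counterpart] = new_form
        return counterpart

display = MedicalImageDisplay()
display.on_input("ann_liver")             # user selects the annotation
print(display.display_form["lbl_liver"])  # -> highlighted
```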
Applicant further argues on page 7 that, although Pathak discloses changing a display form for an image of a liver by changing the display form among the plural display forms, Pathak fails to teach that the display form of the image of the liver is changed in response to an input. The Examiner respectfully disagrees, because the primary prior art reference to Ruppertshofen already discloses that the display form of the image of the liver is changed in response to an input, as stated above, (see at least: Par. 0035, 0049-0050, 0063). Furthermore, the secondary prior art reference to Pathak discloses in Par. 0015-0016 that the clinician can type in the word 'liver' or select the word 'liver', [i.e., an annotation or label], from a drop-down menu or other listing of the semantically labeled anatomical structures, which can cause the clinician's desired anatomical structure to be displayed as evidenced in Fig. 2; and the GUI 200 presents a relatively more detailed coronal or front view of the patient's liver, as indicated at 202, "displaying at least one annotation of anatomical structure: liver", [i.e., the display form of the image is implicitly changed in response to the user input selecting the word 'liver']. Further, from Fig. 4 and Par. 0031, this GUI provides the user the opportunity to select views at 402, from coronal, sagittal, and axial views as indicated at 404, 406, and 408 respectively, "displaying the at least one structure label", [i.e., implicitly changing the display form of another of (i) the at least one annotation and (ii) the at least one structure label, "the display forms of Figs. 2 and 4 represent an alternative display form, for displaying at least one annotation of an anatomical structure, such as a liver, in response to the user input; and at least one of the coronal, sagittal, and axial views as indicated by labels 404, 406, and 408, in response to the clinician's initial entries"].
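The Pathak-style interaction described above, where selecting a semantically labeled structure and a view determines what is displayed, can be sketched as follows. The structure names, view names, and the `render` function are illustrative assumptions, not taken from Pathak:

```python
# Illustrative sketch of the described interaction: selecting a
# semantically labeled structure and a view changes the display.
# Structure names, views, and this API are hypothetical.
LABELED_STRUCTURES = {"liver", "heart", "lungs"}
VIEWS = ("coronal", "sagittal", "axial")

def render(structure: str, view: str = "coronal") -> str:
    """Return a description of the display after the user's selection."""
    if structure not in LABELED_STRUCTURES:
        raise ValueError(f"no semantic label for {structure!r}")
    if view not in VIEWS:
        raise ValueError(f"unknown view {view!r}")
    return f"{view} view of {structure}"

print(render("liver"))           # selecting 'liver' -> "coronal view of liver"
print(render("liver", "axial"))  # switching views changes the display form
```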
For the reasons stated, the rejection of claims 1 and 13 and their dependent claims was proper, and it is maintained.

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status. The following is a quotation of 35 U.S.C. 103, which forms the basis for all obviousness rejections set forth in this Office action: A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1-6 and 13-14 are rejected under 35 U.S.C. 103 as being unpatentable over Ruppertshofen et al. (US-PGPUB 20170337328) in view of Pathak et al. (US-PGPUB 20120166462).

Regarding claim 1, Ruppertshofen discloses a medical image display system that displays a medical image, (see at least: Fig. 1, "output device 112"; and Figs. 2-5, and Par. 0021, the report editor 202 visually presents, via a display output device 112, a graphical user interface with a report creation/editing window, "i.e., medical image display system"), the medical image including: a first display area for displaying a processed image in which at least one annotation indicating a result of detection of at least one anatomical structure shown in a captured image of a subject is superimposed on the captured image, (see at least: Figs. 2-5, and Par. 0032, the anatomy identifier 208 identifies the anatomy from a textual, numerical, graphical, etc. image overlay, using an algorithm from anatomy identifying algorithm(s) 210. Further, Par. 0035, Fig. 6 shows an example of a report creation/editing window 602 and a concurrently displayed image(s) window 604, where the image(s) window 604 includes a plurality of images with corresponding information; and from Fig. 7, and Par. 0036, findings in an image are marked by annotations, [i.e., a first display area, "604 in Fig. 6", for displaying a processed image, "the identified anatomy", in which at least one annotation indicating a result of detection of at least one anatomical structure shown in a captured image of a subject is superimposed on the captured image, "findings in an image are marked by annotations on the identified anatomy"]); and a second display area for displaying at least one structure label for identifying the at least one anatomical structure, (see at least: Fig. 6, and Par. 0034-0035, the retrieved image(s) is then associated with the description determined by the image description extractor 206, where the image retriever 212 visually presents the retrieved image(s) in an image window in the graphical user interface visually presented through the display 112, such that the image window is presented alongside the report creation/editing window 602, which corresponds to the second display area for displaying the at least one image description, corresponding to at least one structure label for identifying the at least one anatomical structure; and from Par. 0039, the anatomy labeler 704 processes an image to identify the tissue (e.g., heart) therein and the location of the tissue, [i.e., a second display area, "602 in Fig. 6", for displaying at least one structure label for identifying the at least one anatomical structure, "displaying the at least one image description and labels for identifying the at least one anatomical structure"]); the medical image display system, (112 in Fig. 1), comprising: a display that displays the medical image, (see at least: Par. 0034, the image retriever 212 visually presents the retrieved image(s) in an image window in the graphical user interface visually presented through the display 112); and a controller that changes, in response to an input received from a user, a display form for one of (i) the at least one annotation and (ii) the at least one structure label, (see at least: Par. 0034, the image retriever 212 visually presents the retrieved image(s) in an image window in the graphical user interface visually presented through the display 112, alongside the report creation/editing window; and from Par. 0035, the user can update/edit the finding description in the image window 604 or the reporting environment in the report creation/editing window 602 directly, [i.e., implicitly changing a display, "update/edit the finding description in the image window 604", in response to an input received from a user, "implicit in the graphical user interface"]. Further, in Fig. 7, and Par. 0049-0050, report/image linking module 122 further includes a report/image updater 714 for adding the report hyperlink to the report and retrieving the linked image, which can be displayed; and the report hyperlink can be invoked via an input through input device(s) 110, and by clicking the hyperlink in the report, the report repository(s) 120 shows the image at the slice corresponding to the annotation, [i.e., a controller changes a display form, "updating or changing the display to show an image at the slice corresponding to the annotation", in response to an input received for the at least one annotation from a user, "in response to the user clicking on the hyperlink to implicitly select the annotation relative to the image to be displayed"]). Ruppertshofen does not expressly disclose changing the display form of another of (i) the at least one annotation and (ii) the at least one structure label in response to the user input. However, Pathak discloses changing a display form of another of (i) the at least one annotation and (ii) the at least one structure label in response to the user input, (see at least: Figs. 2-4, and Par. 0015-0016, the clinician can type in the word 'liver' or select the word 'liver', [i.e., an annotation or label], from a drop-down menu or other listing of the semantically labeled anatomical structures, which can cause the clinician's desired anatomical structure to be displayed as evidenced in Fig. 2; and the GUI 200 presents a relatively more detailed coronal or front view of the patient's liver, as indicated at 202, "at least one annotation of anatomical structure: liver"; and the clinician can enter 'liver' in the anatomical structure selection field 110, "the at least one structure label". Also, from Fig. 4, and Par. 0031, this GUI provides the user the opportunity to select views at 402, from coronal, sagittal, and axial views as indicated at 404, 406, and 408 respectively, "the at least one structure label", [i.e., implicitly changing the display form of another of (i) the at least one annotation and (ii) the at least one structure label in response to the user input, "the display forms of Figs. 2 and 4 represent an alternative display form, for displaying at least one annotation of an anatomical structure, such as a liver, in response to the user input; and at least one of the coronal, sagittal, and axial views as indicated by labels 404, 406, and 408, in response to the clinician's initial entries"]). Ruppertshofen and Pathak are combinable because they are both concerned with anatomical structure identification. Therefore, it would have been obvious to a person of ordinary skill in the art to modify Ruppertshofen to use another display form, as taught by Pathak, to present relatively more detailed views of the patient's anatomical structure, (Pathak, Par. 0016), to thereby provide meaningful patient image data to a user, such as a clinician, (Pathak, Par. 0009).

Regarding claim 2, the combined teaching of Ruppertshofen and Pathak as a whole discloses the limitations of claim 1. Ruppertshofen further discloses wherein, in response to the input of selection of a first annotation among the at least one annotation, the controller causes a measurement result of an anatomical structure corresponding to the first annotation to be additionally displayed in a structure label corresponding to the first annotation, (Ruppertshofen, see at least: Fig. 4, and Par.
0032, a measurement value "54.1 mm" is additionally extracted from the image. Also, Par. 0036, the report/image linking module 122 generates a link between findings in an image, which are marked by annotations, and the corresponding keyword in the report, where these annotations can be measurements, [i.e., implicitly displaying a measurement result of an anatomical structure corresponding to the first annotation]).

Regarding claim 3, the combined teaching of Ruppertshofen and Pathak as a whole discloses the limitations of claim 1. Pathak further discloses wherein, in response to the input of selection of a first structure label among the at least one structure label, the controller causes the display to switch between displaying and not displaying an annotation corresponding to the first structure label, (Par. 0002, and 0013-0014, the graphical user interface (GUI) from image data includes multiple semantically labeled user-selectable anatomical structures, and an anatomical structure selection field (or command window) 110, which enables the user to select an anatomical structure among different anatomical annotations, causing the display to switch between the different anatomical structures).

Regarding claim 4, the combined teaching of Ruppertshofen and Pathak as a whole discloses the limitations of claim 1. Pathak further discloses wherein the medical image further includes a third display area for displaying a comparison image to be compared with the processed image, (see at least: Fig. 3, and Par. 0027-0028, displaying a historical view 302, "third display area", from the patient; and a view 304 from the recent patient scanning session, to be compared with the historical view 302, [i.e., displaying a comparison image, "view 304 from the recent patient scanning session", to be compared with the processed image, "historical view 302"]); the comparison image is captured before the detection of the at least one anatomical structure is performed, (see at least: Fig. 3, and Par. 0027-0028, the view 304 from the recent patient scanning session is implicitly captured before the detection of the at least one anatomical structure is performed).

Regarding claim 5, the combined teaching of Ruppertshofen and Pathak as a whole discloses the limitations of claim 1. Pathak further discloses wherein the medical image further includes a fourth display area for displaying a result of analysis performed on an abnormality in the at least one anatomical structure, (see at least: Par. 0028-0030, performing the registration process to identify the corresponding historical 302 and population average 306 views that match the recent view 304, where the registration can allow more accurate tracking of disease progression by comparing the change in size and status of the lesions over time, [i.e., a fourth display area, "historical 302 and/or population average 306 areas", for displaying a result of analysis performed on an abnormality in the at least one anatomical structure, "displaying the historical 302 and population average 306 views that match the recent view 304"]).

Regarding claim 6, the combined teaching of Ruppertshofen and Pathak as a whole discloses the limitations of claim 1. Pathak further discloses wherein, in response to the input received for the one of (i) the at least one annotation and (ii) the at least one structure label, the controller causes the result of analysis performed on an anatomical structure identified from among the at least one anatomical structure based on the input to be displayed in the fourth display area, (see at least: Par. 0002, and 0013-0014, the graphical user interface (GUI) from image data includes multiple semantically labeled user-selectable anatomical structures, and an anatomical structure selection field (or command window) 110, which enables the user to select an anatomical structure among different anatomical annotations, using the anatomical structure selection field 110, "fourth display area").

Regarding claim 13, claim 13 recites substantially similar limitations as set forth in claim 1. As such, claim 13 is rejected for at least a similar rationale. The Examiner further acknowledges the following additional limitation: "a medical image display method of displaying a medical image". However, Ruppertshofen discloses the "medical image display method of displaying a medical image", (see at least: Fig. 8, and Par. 0055-0064).

Regarding claim 14, claim 14 recites substantially similar limitations as set forth in claim 13. As such, claim 14 is rejected for at least a similar rationale. The Examiner further acknowledges the following additional limitation: "a non-transitory computer-readable recording medium having recorded thereon a program for causing a computer to execute the medical image display method according to claim 13". However, Ruppertshofen discloses the "non-transitory computer-readable recording medium having recorded thereon a program for causing a computer to execute ...", (see at least: Par. 0064, "computer readable instructions, encoded or embedded on computer readable storage medium").

Claims 7 and 9 are rejected under 35 U.S.C. 103 as being unpatentable over Ruppertshofen and Pathak, as applied to claim 1; and further in view of Bengtsson et al. (US-PGPUB 20210401392).

Regarding claim 7, the combined teaching of Ruppertshofen and Pathak as a whole discloses the limitations of claim 1. Pathak further discloses wherein the at least one annotation includes (i) a segmentation, (see at least: Par. 0037, the automated segmentation and/or annotated measurements of patient image data, using the bounding boxes as a sub-region of interest to get a fully automated segmentation of image regions). The combined teaching of Ruppertshofen and Pathak as a whole does not expressly disclose the segmentation indicating an abnormal structure among the at least one anatomical structure and (ii) a segmentation indicating a normal structure among the at least one anatomical structure, with the segmentation indicating the abnormal structure being light-transmissive. However, Bengtsson discloses a segmentation indicating an abnormal structure among the at least one anatomical structure and (ii) a segmentation indicating a normal structure among the at least one anatomical structure, (see at least: Par. 0055, image segmentation techniques such as thresholding, region growing, fuzzy clustering, use of the watershed algorithm, etc., have been used for separating abnormal tissues (e.g., tumor masses) from normal tissues, [i.e., the segmentation indicates normal and abnormal tissues]). Furthermore, a segmentation indicating the abnormal structure being light-transmissive is well known in the art, such as segmenting the tissue using fluorescent and basal cell markers. Ruppertshofen, Pathak, and Bengtsson are combinable because they are all concerned with anatomical structure identification. Therefore, it would have been obvious to a person of ordinary skill in the art to modify the combined teaching of Ruppertshofen and Pathak to use one of the image segmentation techniques, as taught by Bengtsson, in order to separate abnormal tissues (e.g., tumor masses) from normal tissues, (Bengtsson, Par. 0055).

The prior art made of record but not relied upon, Shipitsin et al. (US-PGPUB 20160266126), is considered pertinent to some aspects of claim 7, as follows: Shipitsin discloses that the segmentation indicative of an anatomical structure is light-transmissive, (Par. 0493, tissue samples were segmented using the fluorescent epithelial and basal cell markers, [i.e., the fluorescence is implicitly based on light]).

Regarding claim 9, the combined teaching of Ruppertshofen and Pathak as a whole discloses the limitations of claim 1. The combined teaching of Ruppertshofen and Pathak as a whole does not expressly disclose wherein the at least one annotation includes (i) a segmentation indicating an abnormal structure among the at least one anatomical structure and (ii) a segmentation indicating a normal structure among the at least one anatomical structure, and the segmentation indicating the normal structure and a structure label corresponding to the segmentation are displayed in a first display color, and the segmentation indicating the abnormal structure and a structure label corresponding to the segmentation are displayed in a second display color different from the first display color. However, Bengtsson discloses wherein the at least one annotation includes (i) a segmentation indicating an abnormal structure among the at least one anatomical structure and (ii) a segmentation indicating a normal structure among the at least one anatomical structure, (see at least: Par. 0055, image segmentation techniques such as thresholding, region growing, fuzzy clustering, use of the watershed algorithm, etc., have been used for separating abnormal tissues (e.g., tumor masses) from normal tissues). Further, Par. 0080, in semantic segmentation, the CNN models 215 identify the location and shapes of different objects (e.g., tumor tissue and normal tissue) in an image by classifying each pixel with desired labels. For example, tumor tissue is labeled tumor and colored red, normal tissue is labeled normal and colored green, and background pixels are labeled background and colored black, [i.e., implicitly, the segmentation indicating the normal structure and a structure label corresponding to the segmentation are displayed in a first display color, "normal tissue is labeled normal and colored green", and the segmentation indicating the abnormal structure and a structure label corresponding to the segmentation are displayed in a second display color different from the first display color, "tumor tissue is labeled tumor and colored red"]). Ruppertshofen, Pathak, and Bengtsson are combinable because they are all concerned with anatomical structure identification. Therefore, it would have been obvious to a person of ordinary skill in the art to modify the combined teaching of Ruppertshofen and Pathak to use the CNN models 215, as taught by Bengtsson, in order to classify each pixel with desired labels, where tumor tissue is labeled tumor and colored red, and normal tissue is labeled normal and colored green, (Bengtsson, Par. 0080).

Claim 8 is rejected under 35 U.S.C. 103 as being unpatentable over Ruppertshofen, Pathak, and Bengtsson, as applied to claim 7 above; and further in view of Krishnan (US-PGPUB 20050102315). The combined teaching of Ruppertshofen, Pathak, and Bengtsson as a whole discloses the limitations of claim 7.
The combined teaching of Ruppertshofen, Pathak, and Bengtsson as a whole does not expressly disclose wherein, in response to the input received for one of the segmentation indicating the abnormal structure and a structure label corresponding to the segmentation, the controller causes a level of the light transmittance of the segmentation to be changed. However, Krishnan discloses wherein, in response to the input received for one of the segmentation indicating the abnormal structure and a structure label corresponding to the segmentation, the controller causes a level of the light transmittance of the segmentation to be changed, (Par. 0026, a user manually marks (user marks) regions of interest in one or more locations of a subject image dataset that is rendered and displayed to the user; and from Par. 0029, segmentation module (17-1) implements one or more methods for segmenting features or anatomies of interest by reference to known or anticipated image characteristics, such as changes or transitions in colors or intensities, or changes or transitions in spectrographic information, [i.e., in response to the input received for one of the segmentation indicating the abnormal structure and a structure label corresponding to the segmentation, "in response to the user manually marking regions of interest", the controller causes a level of the light transmittance of the segmentation to be changed, "changing or transitioning the tissue's colors or intensities"]). Ruppertshofen, Pathak, Bengtsson, and Krishnan are combinable because they are all concerned with anatomical structure identification. Therefore, it would have been obvious to a person of ordinary skill in the art to modify the combined teaching of Ruppertshofen, Pathak, and Bengtsson to use the segmentation module, as taught by Krishnan, in order to change or transition the tissue's colors or intensities, (Krishnan, Par. 0029).

Claim 10 is rejected under 35 U.S.C. 103 as being unpatentable over Ruppertshofen, Pathak, and Bengtsson, as applied to claim 9 above; and further in view of Walen et al. (US-PGPUB 20220338938). The combined teaching of Ruppertshofen, Pathak, and Bengtsson as a whole discloses the limitations of claim 1. The combined teaching of Ruppertshofen, Pathak, and Bengtsson as a whole does not expressly disclose wherein, in response to the input regarding one of (i) the segmentation indicating the abnormal structure and the segmentation indicating the normal structure and (ii) the at least one structure label, the controller causes at least one of the first display color or the second display color to be changed. However, Walen discloses wherein, in response to the input regarding one of (i) the segmentation indicating the abnormal structure and the segmentation indicating the normal structure and (ii) the at least one structure label, the controller causes at least one of the first display color or the second display color to be changed, (see at least: Par. 0170-0171, graphical user interface (GUI) 150C may also comprise one or more labels 174A, 174B identifying the anatomical structure displayed on the GUI 150C, "i.e., input of the at least one structure label"; the GUI 150C may also comprise alert indicators 172, 166 (not shown) positioned within the display of the anatomical feature relative to the various virtual boundaries (Boundary 1, 2, 8, 9) and/or the alert zone (Zone 1); and from Par. 0075, navigation display 120 may be utilized as the display for the first alert device 255 such that the navigation display 120 is configured to flash and/or change color when the first alert device is triggered, "i.e., the controller causes at least one of the first display color or the second display color to be changed, in response to the user's input of one or more labels 174A, 174B").
Ruppertshofen, Pathak, Bengtsson, and Walen are combinable because they are all concerned with anatomical structure identification. Therefore, it would have been obvious to a person of ordinary skill in the art to modify the combined teaching of Ruppertshofen, Pathak, and Bengtsson to use the navigation display 120 as the display for the first alert device 255, as taught by Walen, in order to flash and/or change color when the first alert device is triggered, (Walen, Par. 0075).

Claim 11 is rejected under 35 U.S.C. 103 as being unpatentable over Ruppertshofen, Pathak, and Bengtsson, as applied to claim 9 above; and further in view of Kamoda et al. (US-PGPUB 20190197684). The combined teaching of Ruppertshofen, Pathak, and Bengtsson as a whole discloses the limitations of claim 1. Bengtsson further discloses wherein the at least one annotation further includes a segmentation indicating a reference structure which is used as a structural reference of a corresponding one of the at least one anatomical structure in the processed image, (see at least: Par. 0086, 0102, the component detection model automatically assesses the location of components (e.g., the liver and lungs) of the region or body captured in the two-dimensional segmentation mask as reference points, and uses the reference points to split the region or body into multiple anatomical regions, [i.e., a segmentation indicating a reference structure, "two-dimensional segmentation mask indicating reference points", which is used as a structural reference of a corresponding one of the at least one anatomical structure in the processed image, "the one or more components (e.g., the liver and lungs) are used as reference points for the multiple anatomical regions"]). The combined teaching of Ruppertshofen, Pathak, and Bengtsson as a whole does not expressly disclose displaying the segmentation indicating the reference structure in an achromatic color.
Kamoda discloses displaying the segmentation indicating the reference structure in an achromatic color, (see at least: Par. 0085, in order to make it possible to distinguish between a normal tissue and a lesion tissue in the tissue distribution table 75 and the target image 17A, an achromatic color is assigned as the display color of the normal tissue and a chromatic color is assigned as the display color of the lesion tissue, [i.e., the normal tissue corresponds to the reference structure]). Ruppertshofen, Pathak, Bengtsson, and Kamoda are combinable because they are all concerned with anatomical structure identification. Therefore, it would have been obvious to a person of ordinary skill in the art to modify the combined teaching of Ruppertshofen, Pathak, and Bengtsson to assign an achromatic color as the display color of the normal tissue, as taught by Kamoda, in order to make it possible to distinguish between a normal tissue and a lesion tissue, (Kamoda, Par. 0085).

Claim 12 is rejected under 35 U.S.C. 103 as being unpatentable over Ruppertshofen and Pathak, as applied to claim 1; and further in view of Azizian (US-PGPUB 20230225804, "based on US-Prov. Appl. 63/046,278, filed on 01/30/2020"). The combined teaching of Ruppertshofen and Pathak as a whole discloses the limitations of claim 1. The combined teaching of Ruppertshofen and Pathak as a whole does not expressly disclose wherein the display form includes: a first display mode in which each of the at least one annotation blinks; and a second display mode in which a second annotation corresponding to the input among the at least one annotation blinks and an annotation other than the second annotation is not displayed, and in response to the input of selection of the second annotation among the at least one annotation in the first display mode, the controller causes the first display mode to be switched to the second display mode.
Azizian discloses wherein the display form includes: a first display mode in which each of the at least one annotation blinks; and a second display mode in which a second annotation corresponding to the input among the at least one annotation blinks and an annotation other than the second annotation is not displayed, and in response to the input of selection of the second annotation among the at least one annotation in the first display mode, the controller causes the first display mode to be switched to the second display mode, (see at least: Par. 0165, System 100 may also be configured to allow a user to control the display of graphical tag elements, e.g., turn ON and/or turn OFF the presentation of graphical tag elements representative of an event that occurs within the region of interest, [i.e., turning the presentation of graphical tag elements ON and/or OFF implies the first display mode in which each of the at least one annotation blinks, and a second display mode in which a second annotation corresponding to the input among the at least one annotation blinks and an annotation other than the second annotation is not displayed]. Further, as shown in Fig. 12, GUI 1200 includes a global switch 1226 by which a user can turn ON and/or turn OFF the display of all graphical tags 1208 within the image presented in main presentation window 1202, [i.e., the controller causes the first display mode to be switched to the second display mode, “turning ON,” in response to the input of selection of the second annotation among the at least one annotation in the first display mode, “in response to the presentation of graphical tag elements representative of an event that occurs within the region of interest being turned ON”]). Ruppertshofen, Pathak, and Azizian are combinable because they are all concerned with anatomical structure identification.
Therefore, it would have been obvious to a person of ordinary skill in the art to modify the combined teaching of Ruppertshofen and Pathak to allow a user to control the display of graphical tag elements, as taught by Azizian, in order to turn ON and/or turn OFF the presentation of graphical tag elements representative of an event that occurs within the region of interest, using global switch 1226 (Azizian, Par. 0165).

Allowable Subject Matter

Claim 15 is objected to as being dependent upon a rejected base claim, but would be allowable if rewritten in independent form including all of the limitations of the base claim and any intervening claims.

With respect to claim 15, the prior art of record, alone or in reasonable combination, does not teach or suggest the following underlined limitation(s) (in consideration of the claim as a whole): “in response to an input from the user to one annotation among the at least one annotation superimposed on the captured image, the controller applies one structure label corresponding to the one annotation and is different from another structure label, among the plurality of structure labels”.

The relevant prior art of record, Ruppertshofen et al. (US-PGPUB 20170337328), discloses a medical image display system that displays a medical image, (see at least: Fig. 1, “output device 112”; and Figs. 2-5, and Par. 0021, the report editor 202 visually presents, via a display output device 112, a graphical user interface with a report creation/editing window, [i.e., a medical image display system]), the medical image including: a first display area for displaying a processed image in which at least one annotation indicating a result of detection of at least one anatomical structure shown in a captured image of a subject is superimposed on the captured image, (see at least: Figs. 2-5, and Par. 0032, the anatomy identifier 208 identifies the anatomy from a textual, numerical, graphical, etc. image overlay, using an algorithm from anatomy identifying algorithm(s) 210. Further, Par. 0035, Fig. 6 shows an example of a report creation/editing window 602 and a concurrently displayed image(s) window 604, where the image(s) window 604 includes a plurality of images with corresponding information; and from Fig. 7, and Par. 0036, findings in an image are marked by annotations, [i.e., a first display area, “604 in Fig. 6,” for displaying a processed image, “the identified anatomy,” in which at least one annotation indicating a result of detection of at least one anatomical structure shown in a captured image of a subject is superimposed on the captured image, “findings in an image are marked by annotations on the identified anatomy”]); and a second display area for displaying at least one structure label for identifying the at least one anatomical structure, (see at least: Fig. 6, and Par. 0034-0035, the retrieved image(s) is then associated with the description determined by the image description extractor 206, where the image retriever 212 visually presents the retrieved image(s) in an image window in the graphical user interface visually presented through the display 112, such that the image window is presented alongside the report creation/editing window 602 with a graphical user interface; the report creation/editing window 602 corresponds to the second display area for displaying the at least one image description, corresponding to at least one structure label for identifying the at least one anatomical structure using the graphical user interface; and from Par. 0039, the anatomy labeler 704 processes an image to identify the tissue (e.g., heart) therein and the location of the tissue, [i.e., a second display area, “602 in Fig. 6,” for displaying at least one structure label for identifying the at least one anatomical structure, “displaying the at least one image description and labels for identifying the at least one anatomical structure”]); the medical image display system (112 in Fig. 1) comprising: a display that displays the medical image, (see at least: Par. 0034, the image retriever 212 visually presents the retrieved image(s) in an image window in the graphical user interface visually presented through the display 112); and a controller that changes a display form, in response to an input received from a user, for one of (i) the at least one annotation and (ii) the at least one structure label, (see at least: Par. 0034, the image retriever 212 visually presents the retrieved image(s) in an image window in the graphical user interface visually presented through the display 112, alongside the report creation/editing window; and from Par. 0035, the user can update/edit the finding description in the image window 604 or the reporting environment in the report creation/editing window 602 directly, [i.e., implicitly changing a display, “update/edit the finding description in the image window 604,” in response to an input received from a user, “implicit by the graphical user interface”]. Further, in Fig. 7, and Par. 0049-0050, report/image linking module 122 further includes a report/image updater 714 that adds the report hyperlink to the report and retrieves the linked image, which can be displayed; the report hyperlink can be invoked via an input through an input device(s) 110, and by clicking on the hyperlink in the report, the report repository(s) 120 shows the image at the slice corresponding to the annotation, [i.e., a controller changes a display form, “updating or changing the display to show an image at the slice corresponding to the annotation,” in response to an input received for the at least one annotation from a user, “in response to the user clicking on the hyperlink, to implicitly select the annotation (the at least one annotation) relative to the image to be displayed”]).

However, Ruppertshofen fails to teach or suggest, either alone or in combination with the other cited references, in response to an input from the user to one annotation among the at least one annotation superimposed on the captured image, the controller applies one structure label corresponding to the one annotation and is different from another structure label, among the plurality of structure labels.

A further prior art of record, Pathak et al. (US-PGPUB 20120166462), discloses changing a display form of another of (i) the at least one annotation and (ii) the at least one structure label, in response to the user input, (see at least: Figs. 2-4, and Par. 0015-0016, the clinician can type in the word `liver` or select the word `liver`, [i.e., annotation or label], from a drop-down menu or other listing of the semantically labeled anatomical structures, which can cause the clinician's desired anatomical structure to be displayed as evidenced in Fig. 2; and the GUI 200 presents a relatively more detailed coronal or front view of the patient's liver, as indicated at 202, “at least one annotation of anatomical structure: liver”; and the clinician can enter `liver` in the anatomical structure selection field 110, “the at least one structure label”. Also, from Fig. 4, and Par. 0031, this GUI provides the user the opportunity to select views at 402, from coronal, sagittal, and axial views as indicated at 404, 406, and 408 respectively, “the at least one structure label”, [i.e., implicitly changing the display form of another of (i) the at least one annotation and (ii) the at least one structure label, in response to the user input, “the display forms of Figs. 2 and 4 represent an alternative display form, for displaying at least one annotation of an anatomical structure, such as a liver, in response to the user input; and at least one of the coronal, sagittal, and axial views as indicated by labels 404, 406, and 408, in response to the clinician's initial entries”]). Pathak further discloses wherein the at least one structure label includes a plurality of structure labels, (see at least: Par. 0013, the anatomical structures of the thorax can be labeled with the semantic labels, where the “liver”, the “left lung”, and the “right lung” are each labeled, [i.e., the at least one structure label, “thorax”, includes a plurality of structure labels (“liver”, “left lung”, and “right lung”)]).

However, Pathak fails to teach or suggest, either alone or in combination with the other cited references, in response to an input from the user to one annotation among the at least one annotation superimposed on the captured image, the controller applies one structure label corresponding to the one annotation and is different from another structure label, among the plurality of structure labels.

Conclusion

Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action.
Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).

A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.

Contact Information

Any inquiry concerning this communication or earlier communications from the examiner should be directed to AMARA ABDI, whose telephone number is (571) 272-0273. The examiner can normally be reached 9:00am-5:30pm.

Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Vu Le, can be reached at (571) 272-7332. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov.
Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/AMARA ABDI/
Primary Examiner, Art Unit 2668
12/09/2025

Prosecution Timeline

May 23, 2023
Application Filed
Jun 14, 2025
Non-Final Rejection — §103
Sep 02, 2025
Interview Requested
Sep 10, 2025
Examiner Interview Summary
Sep 10, 2025
Applicant Interview (Telephonic)
Sep 18, 2025
Response Filed
Dec 09, 2025
Final Rejection — §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12602822
METHOD DEVICE AND STORAGE MEDIUM FOR BACK-END OPTIMIZATION OF SIMULTANEOUS LOCALIZATION AND MAPPING
2y 5m to grant Granted Apr 14, 2026
Patent 12597252
METHOD OF TRACKING OBJECTS
2y 5m to grant Granted Apr 07, 2026
Patent 12576595
SYSTEMS AND METHODS FOR IMPROVED VOLUMETRIC ADDITIVE MANUFACTURING
2y 5m to grant Granted Mar 17, 2026
Patent 12574469
VIDEO SURVEILLANCE SYSTEM, VIDEO PROCESSING APPARATUS, VIDEO PROCESSING METHOD, AND VIDEO PROCESSING PROGRAM
2y 5m to grant Granted Mar 10, 2026
Patent 12563154
VIDEO SURVEILLANCE SYSTEM, VIDEO PROCESSING APPARATUS, VIDEO PROCESSING METHOD, AND VIDEO PROCESSING PROGRAM
2y 5m to grant Granted Feb 24, 2026
Based on the 5 most recent grants.


Prosecution Projections

3-4
Expected OA Rounds
83%
Grant Probability
76%
With Interview (-7.5%)
2y 7m
Median Time to Grant
Moderate
PTA Risk
Based on 816 resolved cases by this examiner. Grant probability derived from career allow rate.
