Prosecution Insights
Last updated: April 19, 2026
Application No. 18/653,763

SYSTEMS AND METHODS FOR DISPLAY OF MULTI-DATE AND MULTI-MODALITY IMAGES

Non-Final OA (§103)
Filed: May 02, 2024
Examiner: HAILU, TADESSE
Art Unit: 2174
Tech Center: 2100 — Computer Architecture & Software
Assignee: GE Precision Healthcare LLC
OA Round: 1 (Non-Final)

Grant Probability: 78% (Favorable)
Expected OA Rounds: 1-2
Time to Grant: 3y 4m
Grant Probability With Interview: 82%

Examiner Intelligence

Career Allow Rate: 78% — above average (747 granted / 960 resolved; +22.8% vs TC avg)
Interview Lift: +4.5% in resolved cases with interview (minimal lift)
Typical Timeline: 3y 4m average prosecution; 29 applications currently pending
Career History: 989 total applications across all art units

Statute-Specific Performance

§101: 5.8% (-34.2% vs TC avg)
§103: 38.1% (-1.9% vs TC avg)
§102: 41.1% (+1.1% vs TC avg)
§112: 9.0% (-31.0% vs TC avg)
Tech Center averages are estimates. Based on career data from 960 resolved cases.

Office Action

§103
DETAILED ACTION

Notice of Pre-AIA or AIA Status

1. The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

2. This Office Action is in response to the application filed on 05/02/2024.

3. The IDS filed on 12/16/2025 has been considered and entered into the application file.

4. Claims 1-20 are pending. All the pending claims are examined and rejected.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action: A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

5. Claims 1-5, 8-12, and 14-20 are rejected under 35 U.S.C. 103 as being unpatentable over Sperandio et al. (US 20220318991 A1) in view of Sasidharan (US 20230051982 A1).

As per claim 1, Sperandio discloses a computing device (system 100, Fig. 1) comprising a display screen (display 136, Fig. 1), the computing device being configured to display on the display screen a plurality of image viewports within a multi-image graphical user interface (GUI), the plurality of image viewports displaying respective medical images of a patient ([0052] "In the embodiment shown in FIG. 3, the visualization display area 304 includes four different windows that provide different images of the liver. In this example, the different windows present different series of a multiphase CT liver exam from the axial perspective"), and additionally being configured to display within the GUI one or more findings viewports each displaying findings data corresponding to an image displayed within one of the plurality of image viewports ([0051] "the graphical user interface 300 further includes a right-side panel 306 that provides various controls and tools for evaluating observations depicted in the liver images"; [0079] "as illustrated in Fig. 9, each detected observation can be reported as a finding, which in this example includes two (Finding 1 and Finding 2). The summary table 902 can include information identifying the observations detected, the particular image series where it was detected, its size (e.g., diameter, volume, etc.), its HCC score (which in this example includes a LI-RADS® score), and the imaging features detected").

Sperandio discloses retrieving patient information from a database, and wherein the respective medical images are comparable images of one or more imaging modalities ([0082] "With reference to FIGS. 1 and 10, in accordance with method 1000, at 1002 multiphase liver exam data can be acquired in CT or MR. For example, the multiphase liver exam data (e.g., multiphase liver exam data 106) can be acquired directly from the imaging device 102 and/or from one or more medical data sources 104").

But Sperandio does not teach "wherein the findings data of each respective medical image of the patient is obtained from a database while the database is in an un-launched state." Sasidharan, on the other hand, discloses ([0050]) "At least a portion of the information included in the details panel 222 may be retrieved from/stored in the presentation system (e.g., in the digital twin), while other information included in the details panel 222 may be viewed from the original data source. For example, the details panel 222 may include links to the full pathology report and the images from the pathology report, which may be viewed from the pathology database, for example. Before dot 220 is selected, the details panel 222 may be in an un-launched state (e.g., not displayed) and the interface of the specific data source (e.g., an interface of the pathology database) may be in an un-launched state until the user selects the link in the details panel, for example."

Both Sperandio and Sasidharan are directed to presenting patient information. Thus, before the effective filing date of the invention, it would have been obvious to a person of ordinary skill in the art to combine the above teaching of Sasidharan with Sperandio to obtain the invention as specified in claim 1. The suggestion/motivation for doing so would have been to allow interacting with the patient's medical interface window while the medical observations/findings of the patient are obtained from a database while the database is in an un-launched state.

As per claim 2, Sperandio in view of Sasidharan further discloses the computing device of claim 1, wherein the GUI is displayed on the display screen in one of a grid configuration, a comparison configuration, and a horizontal configuration (Sperandio, see at least the GUI displayed in grid configurations in Figs. 3-9).

As per claim 3, Sperandio in view of Sasidharan further discloses the computing device of claim 2, wherein, in the grid configuration, the plurality of image viewports are arranged in a grid and images are displayed longitudinally according to time order, and the one or more findings viewports comprises a findings viewport corresponding to a selected one of the plurality of image viewports (Sperandio, see the timestamp (e.g., Oct. 20, 2020) in each medical image viewport shown in Fig. 3).
As per claim 4, Sperandio in view of Sasidharan further discloses the computing device of claim 2, wherein, in the comparison configuration, the plurality of image viewports comprises a first image viewport and a second image viewport and the one or more findings viewports comprise a first findings viewport that corresponds to the first image viewport and a second findings viewport that corresponds to the second image viewport, wherein findings data of a first image that is displayed within the first image viewport is displayed within the first findings viewport and findings data of a second image that is displayed within the second image viewport is displayed within the second findings viewport ([0051] "FIG. 3 presents an example graphical user interface 300 that can be generated/rendered by the rendering component 112 in response to initial selection of a multiphase liver imaging study for a patient in association with usage of such a medical imaging application. The graphical user interface 300 includes an upper toolbar 302 with different tabs corresponding to different application tools/functions and a visualization area 304 that includes the rendered liver exam image data. The graphical user interface 300 further includes a right-side panel 306 that provides various controls and tools for evaluating observations depicted in the liver images"; [0070] "FIG. 4 presents an example graphical user interface 400 illustrating some features and functionalities of the observation evaluation tool 310. In the embodiment shown, the observation evaluation tool 310 has been selected to generate an evaluation window 402 including various functions and information that facilitate defining and evaluating observations included in the displayed images. These functions include an observation defining function 410, a scoring function 412, a feature detection function 414 and a validation function 416." Also see [0072], [0075], [0079] and Fig. 9).

As per claim 5, Sperandio in view of Sasidharan further discloses the computing device of claim 2, wherein, in the horizontal configuration, the plurality of image viewports are arranged side by side along a first portion of the GUI and the one or more findings viewports are arranged side by side along a second portion of the GUI, wherein each of the findings viewports corresponds to and vertically aligns with one of the plurality of image viewports to display findings data of an image displayed within a corresponding image viewport ([0051] "FIG. 3 presents an example graphical user interface 300 that can be generated/rendered by the rendering component 112 in response to initial selection of a multiphase liver imaging study for a patient in association with usage of such a medical imaging application. The graphical user interface 300 includes an upper toolbar 302 with different tabs corresponding to different application tools/functions and a visualization area 304 that includes the rendered liver exam image data. The graphical user interface 300 further includes a right-side panel 306 that provides various controls and tools for evaluating observations depicted in the liver images. In the embodiment shown, these tools include a phase identification tool 308 and an observation evaluation tool 310." Examiner's note: the arrangements of viewports (in visualization area 304) and observations/findings (in area 306) are shown in Figs. 3-9).

As per claim 8, Sperandio in view of Sasidharan further discloses the computing device of claim 1, wherein each findings viewport comprises one or more findings elements, each of the one or more findings elements displaying information of an object for a corresponding medical image ([0072] "FIG. 5 presents an example graphical user interface 500 demonstrating some features and functionalities of the observation defining function 404. In the embodiment shown, the observation defining function 404 has been selected to reveal mark-up and contouring tools associated therewith that can be used to manually mark and define an observation in a displayed image." Also see [0086] and the one or more observations (findings) corresponding to medical images in Figs. 3-9).

As per method claim 9, the claim recites similar limitations as in device claim 1; thus, it is rejected under similar citations given for device claim 1.

As per claim 10, Sperandio in view of Sasidharan further discloses the method of claim 9, wherein the image viewports comprise a first image viewport corresponding to a first findings viewport and a second image viewport corresponding to a second findings viewport, wherein the first image viewport and the first findings viewport correspond to a first color and the second image viewport and the second findings viewport correspond to a second, different color (Sasidharan, [0054] "FIG. 3 shows a second example timeline 300 that may be generated for a patient by presentation system 102. Further, the graphical elements, such as diamond 302, included to represent the different events, reports, records, etc., may be color coded and shape-coded. Associated with each graphical element is a summary, such as summary 304, of that event, record, report, and so on."; [0103] "FIG. 15A shows a first example table 1500 that illustrates the ordering relationship for the segment 'chemotherapy' and a particular chemotherapy agent (e.g., Osimertinib) for a patient. The chemotherapy segment is specified as being a cancer treatment type of segment that is displayed using the color orange, though the color is for illustrative purposes and could be any suitable color").
As per claim 11, Sperandio in view of Sasidharan further discloses the method of claim 10, further comprising, when the selected GUI configuration is one of the grid configuration and the comparison configuration, modifying findings data displayed within the at least one findings viewport in response to user selection of one of a next element and a previous element, wherein the next element triggers display of data corresponding to a next later acquired image and the previous element triggers display of data corresponding to a next earlier acquired image (Sperandio, [0072] "FIG. 5 presents an example graphical user interface 500 demonstrating some features and functionalities of the observation defining function 404. In the embodiment shown, the observation defining function 404 has been selected to reveal mark-up and contouring tools associated therewith that can be used to manually mark and define an observation in a displayed image. These tools are respectively included in a new observation defining window 502. In some embodiments, these tools can provide for editing the bounding box of an auto-detected and segmented lesion (e.g., as generated by the lesion detection component 118 using one or more lesion segmentation models)." Also see [0029] and [0068]).

As per claim 12, Sperandio in view of Sasidharan further discloses the method of claim 11, wherein each of the respective image viewports are selectable and, when the selected GUI configuration is one of the grid configuration and the comparison configuration, selection of one of the next element and the previous element selects a corresponding image viewport that displays one of the next later acquired image and the next earlier acquired image, respectively (Sperandio, [0052] "In the embodiment shown in FIG. 3, the visualization display area 304 includes four different windows that provide different images of the liver. In this example, the different windows present different series of a multiphase CT liver exam from the axial perspective. In particular, the upper left window presents the unenhanced phase series, the upper right window presents the arterial phase series, the lower left window presents the portal venous series, and the lower right window presents the delayed series. In various embodiments, the entirety of the images included in each series can be independently viewed and scrolled in each of the four windows. For example, the initial image shown for each series can include a default selected image, such as the first image in each series, the middle image in each series, or the like." Examiner's note: the medical images in Figs. 3-9 are rendered in a grid configuration).

As per claim 14, Sperandio in view of Sasidharan further discloses the method of claim 10, further comprising, when the selected GUI configuration is the horizontal configuration, removing an image viewport and a corresponding findings viewport from the GUI in response to user selection of a hide element corresponding to the corresponding findings viewport (Sasidharan, [0081] "At 1006, the method 1000 includes adding or removing one or more elements of the plurality of elements from the patient information timeline based on the user specialization. For example, information which is relevant to the selected specialization may be included on the patient information timeline and information which is not relevant may not be included on the patient information timeline." Examiner's note: see the images arranged in a horizontal configuration in Fig. 6).

As per claim 15, Sperandio in view of Sasidharan further discloses the method of claim 9, wherein each findings viewport comprises one or more findings elements each corresponding to an object identified within one or more of the one or more medical images (Sperandio, [0029] "The application can further automatically detect and characterize defined imaging features associated with an identified/extracted liver observation included in the image data using one or more feature detection algorithms. In some embodiments, the application can also mark-up the image data (e.g., with highlighting, with color, with indicia pointing to the feature, etc.) that depicts the detected imaging feature and provide information describing the detected imaging features (e.g., calculated imaging metrics for the detected features)").

As per claim 16, Sperandio in view of Sasidharan further discloses the method of claim 9, wherein the selected GUI configuration is one of a predefined default configuration, a user preference default configuration, or a configuration selected via user input (Sperandio, [0052] "In the embodiment shown in FIG. 3, the visualization display area 304 includes four different windows that provide different images of the liver. In this example, the different windows present different series of a multiphase CT liver exam from the axial perspective. In particular, the upper left window presents the unenhanced phase series, the upper right window presents the arterial phase series, the lower left window presents the portal venous series, and the lower right window presents the delayed series. In various embodiments, the entirety of the images included in each series can be independently viewed and scrolled in each of the four windows. For example, the initial image shown for each series can include a default selected image, such as the first image in each series, the middle image in each series, or the like." Examiner's note: a predefined default configuration is shown in Figs. 3-9).

As per system claim 17, the claim recites similar limitations as in device claim 1; thus, it is rejected under similar citations given for device claim 1.
As per claim 18, Sperandio in view of Sasidharan further discloses the system of claim 17, wherein each of the one or more findings elements are selectable elements that, when selected, highlight the corresponding object within each respective displayed medical image (Sperandio, [0029] "The application can further automatically detect and characterize defined imaging features associated with an identified/extracted liver observation included in the image data using one or more feature detection algorithms. In some embodiments, the application can also mark-up the image data (e.g., with highlighting, with color, with indicia pointing to the feature, etc.) that depicts the detected imaging feature and provide information describing the detected imaging features (e.g., calculated imaging metrics for the detected features)").

As per claim 19, Sperandio in view of Sasidharan further discloses the system of claim 17, wherein user selection displaying an image slice within a selected image viewport triggers display of comparable image slices within each of the other image viewports (Sperandio, [0060] "In some implementations, the auto-contouring tool can require minimal manual input, wherein the reviewer can mark (e.g., place a graphical object) a portion of the detected observation (e.g., mark the region of interest (ROI)) and wherein the auto-contouring tool can estimate the remaining geometry of the observation based on the marked portion and image features associated with the marked portion. For example, the marked portion can include a line across the diameter of a detected observation, a circle or box placed imperfectly (loosely) around the detected observation, or the like. Additionally, or alternatively, the auto-contouring tool can allow the reviewer to more precisely outline the shape of the detected observation").
As per claim 20, Sperandio in view of Sasidharan further discloses the system of claim 17, wherein each of the one or more findings elements comprise one or more metrics, including percent change of the one or more metrics compared to a reference image (Sperandio, [0068] "For example, the liver assessment data 130 can include but is not limited to: information identifying the number of observations detected, information identifying the size, location and geometry of the observations detected, information regarding the imaging features detected and their corresponding metrics, and the determined HCC score for the respective observations").

6. Claim 6 is rejected under 35 U.S.C. 103 as being unpatentable over Sperandio et al. (US 20220318991 A1) in view of Sasidharan (US 20230051982 A1), further in view of Zhao et al. (US 20140143710 A1).

As per claim 6, Sperandio in view of Sasidharan fails to teach toggling between the grid configuration, comparison configuration, and horizontal configuration. Zhao, on the other hand, discloses toggling or switching from one medical image layout to another ([0074] "As shown in FIG. 3a, the user interface 300 displays series in all of the image windows 308-314. In other examples, the user may change or modify presentation parameters associated with the hanging protocol (e.g., a user may change the layout from a 2×2 grid to a 3×2 grid) and/or the individual image windows (e.g., a user may change the level of zoom in an image window)"). Before the effective filing date of the invention, it would have been obvious to a person of ordinary skill in the art to combine the above teaching of Zhao with Sperandio in view of Sasidharan to obtain the invention as specified in claim 6. The suggestion/motivation for doing so would have been to enable a user to easily and quickly change (e.g., modify) and save presentation parameters (e.g., a window level, a zoom level, a page format, a grid layout, etc.) of an existing hanging protocol. In some examples, an existing default or custom hanging protocol configuration may display multiple images and/or series on a user display and the user may desire to, for example, adjust the spatial layout or arrangement of the images and/or series (see Zhao, [0059]).

7. Claims 7 and 13 are rejected under 35 U.S.C. 103 as being unpatentable over Sperandio et al. (US 20220318991 A1) in view of Sasidharan (US 20230051982 A1), further in view of Ehlke et al. (US 20110060766 A1).

As per claim 7, Sperandio in view of Sasidharan fails to disclose a pin element and a group element as recited in claim 7; that is, Sperandio in view of Sasidharan fails to disclose the computing device of claim 2, wherein, when in the comparison configuration, the GUI comprises a pin element that when selected for a given displayed image, pins the given displayed image to the GUI, and a grouping element that when selected for a displayed image triggers display of a pop-up menu listing available images with which to group the selected displayed image. Ehlke, on the other hand, discloses ([0028]) grouping file icons, wherein manipulation of the case package icons associated with the files and slides via the user input includes categorizing the case packages to define piles by one of selecting and grouping file icons (see claim 17); and ([0170]) "Clicking a pin icon 1720 pins down tools popup window 1316 on digital microscope viewing window 1310. In this way, tools popup window 1316 becomes a toolbox. Clicking pin icon 1720 again causes tools popup window 1316 to disappear." Before the effective filing date of the invention, it would have been obvious to a person of ordinary skill in the art to combine the teaching of Ehlke with Sperandio in view of Sasidharan to obtain the invention as specified in claim 7. The suggestion/motivation for doing so would have been to provide a quicker and easier interactive tool (icon/pin) for medical professionals to interact with the medical images.

As per claim 13, Sperandio in view of Sasidharan, further in view of Ehlke, discloses the method of claim 11, further comprising, when the selected GUI configuration is the comparison configuration, pinning an image to the GUI in response to user selection of a pin element corresponding to the image, wherein, when one or both of the next element and the previous element are selected when the image is pinned, the pinned image is not removed from the display (Ehlke, [0211] "The system and methods as disclosed thus enable organized presentation for comparison of images and data at successive stages in the progress of the patient over time. The information may be partly archived but remains conveniently organized and accessible by accessing the database. Instead of handling slides, files and paper copies of documents and images, the user can handle a given case package, and also review any previously processed case packages of the same patient, whether or not reviewed by the same user, to compare the case packages and monitor changes occurring in sequence over a period of time").

Conclusion

8. Any inquiry concerning this communication or earlier communications from the examiner should be directed to TADESSE HAILU, whose telephone number is (571) 272-4051; the email address is Tadesse.hailu@USPTO.GOV. The examiner can normally be reached Monday-Friday, 9:30-5:30 (Eastern time).

Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Bashore, William L., can be reached at (571) 272-4088.
The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/TADESSE HAILU/
Primary Examiner, Art Unit 2174

Prosecution Timeline

May 02, 2024: Application Filed
Jan 27, 2026: Non-Final Rejection, §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12596435: CONTACT OR CONTACTLESS INTERFACE WITH TEMPERATURE HAPTIC FEEDBACK (granted Apr 07, 2026; 2y 5m to grant)
Patent 12578976: SYSTEMS AND METHODS FOR AFFINITY-DRIVEN INTERFACE GENERATION (granted Mar 17, 2026; 2y 5m to grant)
Patent 12578849: METHOD, APPARATUS, ELECTRONIC DEVICE AND READABLE STORAGE MEDIUM FOR PAGE PROCESSING (granted Mar 17, 2026; 2y 5m to grant)
Patent 12572198: USER INTERFACES FOR GAZE TRACKING ENROLLMENT (granted Mar 10, 2026; 2y 5m to grant)
Patent 12566621: CUSTOMIZATION AND ENRICHMENT OF USER INTERFACES USING LARGE LANGUAGE MODELS (granted Mar 03, 2026; 2y 5m to grant)
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 1-2
Grant Probability: 78%
With Interview: 82% (+4.5%)
Median Time to Grant: 3y 4m
PTA Risk: Low
Based on 960 resolved cases by this examiner. Grant probability derived from career allow rate.
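The projected figures above follow directly from the examiner's career counts. A minimal sketch of the arithmetic, assuming the page's convention of simple rounding and an additive interview lift (the counts and the +4.5% figure come from the page; the additive model is an assumption):

```python
# Career allow rate from the examiner's resolved-case counts (from the page).
granted = 747
resolved = 960

allow_rate = granted / resolved * 100            # 77.8125, reported as 78%
print(f"Career allow rate: {allow_rate:.1f}%")   # Career allow rate: 77.8%

# Interview-adjusted grant probability: career rate plus the +4.5% lift
# observed in resolved cases with an interview (additive model is an assumption).
interview_lift = 4.5
with_interview = allow_rate + interview_lift
print(f"With interview: {round(with_interview)}%")  # With interview: 82%
```

This reproduces both headline numbers: 747/960 rounds to the 78% grant probability, and adding the 4.5-point lift rounds to the 82% with-interview figure.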
