Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
DETAILED ACTION
This is in response to applicant’s amendment/response filed on 02/18/2026, which has been entered and made of record. Claims 1-9 and 11-19 are amended. Claim 20 is canceled. Claims 1-19 are pending in the application.
Response to Arguments
Applicant’s arguments regarding the claim rejections under 35 U.S.C. § 103 have been considered, but they are not persuasive.
Applicant argues:
[Applicant’s argument reproduced from the remarks as greyscale image media_image1.png.]
Examiner disagrees: Jacobs teaches first and second color rendering parameters. ([0142]-[0143], “FIG. 7 shows an example of a tumor that is in contact with an angled blood vessel. In the top part marked A, a blood vessel 310 is shown in contact with a tumor 320. Slicing directions are indicated by dashed lines, and resulting slices in left-bottom part B and in right-bottom part C. Part A shows 3D volumetric data representing an angled blood vessel (dark grey) in contact with a tumor (white).”)
Applicant argues:
[Applicant’s argument reproduced from the remarks as greyscale image media_image2.png.]
Examiner disagrees: Jacobs teaches different transparencies for an organ and a tumor to help users identify the tumor. ([0181], “The pancreas is made transparent or partially transparent, to have a clear view on the tumor and vessel structure that is selected”). It is inherent that the tumor has lower transparency so that the user can view it, while the pancreas has higher transparency. Jacobs also teaches a surface mesh, as shown in FIG. 7:
[Applicant’s argument reproduced from the remarks as greyscale image media_image3.png.]
Applicant’s arguments regarding claim 19 are moot, since claim 19 is allowed.
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 1-3, 6, 8-10, and 12-18 are rejected under 35 U.S.C. 103 as being unpatentable over Jacobs et al. (US 2025/0014690 A1) in view of Sutton et al. (US 2025/0213226 A1).
Regarding claim 1, Jacobs teaches:
A method for an image processing system, the method comprising:
receiving an image volume of an anatomy of a patient; ([0074], “The system further has an image data interface 120 for accessing medical image data of a patient from a medical image data repository, shown as a box MI, which contains one or more medical imaging datasets of patients.”)
performing a segmentation of the image volume into one or more anatomical regions; ([0078], “Next, at least one algorithm is selected from the repository based on the determined anatomical object, for example an analytical algorithm to segment the anatomical object”)
applying algorithms to the segmented image volume to detect a pathology in the anatomy; ([0103], “Next, the metrics are matched to their corresponding criteria representing clinically relevant cut-off values regarding the metrics. In this embodiment, degrees of vascular involvement of the SMV=58<90° contact with the tumor. This is repeated for other relevant criteria, e.g. superior mesenteric artery (SMA), common hepatic artery (CHA), superior mesenteric vein (SMV) and portal vein (PV). Guidance may be derived based on evaluation of the criteria as indicated in the codified guidelines, e.g. tumor is borderline resectable because of no contact with SMA, CHA, PV, and <90° contact with SMV.”) and
prior to rendering the image volume for display: ([0092], “Optionally, the processor subsystem is configured to select the algorithm from the repository containing visualization algorithms for visualization of the output data. The visualization algorithms may generate display data to show the output of the system while highlighting relevant clinical aspects, for example based on codified guidelines and available algorithm output, e.g. viewpoints in the 3D anatomical model, footprint representing contact area between segmented tumor and vessel.”) extracting information about the pathology from findings of the one or more algorithms; ([0103], “Next, the metrics are matched to their corresponding criteria representing clinically relevant cut-off values regarding the metrics. In this embodiment, degrees of vascular involvement of the SMV=58<90° contact with the tumor. This is repeated for other relevant criteria, e.g. superior mesenteric artery (SMA), common hepatic artery (CHA), superior mesenteric vein (SMV) and portal vein (PV). Guidance may be derived based on evaluation of the criteria as indicated in the codified guidelines, e.g. tumor is borderline resectable because of no contact with SMA, CHA, PV, and <90° contact with SMV.”)
calculating customized rendering parameters for rendering the image volume on a display device based on the extracted information and one or more anatomical regions, the customized rendering parameters including: a camera angle from which the image volume is to be viewed; ([0086], “Optionally, the processor subsystem has an algorithm matching lookup table for selecting the algorithm from the repository. The selection in the lookup table may be based on at least one of the determined anatomical object; the use case; the codified clinical guidelines; the metrics; the criteria; the guidance.” [0104], “Next, the system calls the layout rendering and interaction manager to match the guidance with the corresponding layout, camera viewpoints and levels of transparency of anatomical structures most relevant to view the guidance as stored in the algorithm visualization repository.”)
a first set of rendering parameters for the pathology, the first set of rendering parameters including a first color, a second set of rendering parameters, different than the first set of rendering parameters, for a first anatomical region of the one or more anatomical regions, the second set of rendering parameters including a second color; ([0142]-[0143], “FIG. 7 shows an example of a tumor that is in contact with an angled blood vessel. In the top part marked A, a blood vessel 310 is shown in contact with a tumor 320. Slicing directions are indicated by dashed lines, and resulting slices in left-bottom part B and in right-bottom part C. Part A shows 3D volumetric data representing an angled blood vessel (dark grey) in contact with a tumor (white).”) and
rendering the image volume on a display device, based on the customized rendering parameters. ([0093]-[0094], “Optionally, the processor subsystem comprises a layout rendering and interaction manager for selecting the visualization algorithm matching the output data as provided by the analytic algorithm for assessing the clinical question. The manager may render assessment panels and provide interaction options in line with the codified guidelines and available algorithm output. The system may integrate codified guidelines containing evaluation criteria that describe relevant anatomical structures and metrics. It subsequently integrates a relevant set of algorithms that are applied to medical imaging data and provide output in line with these guidelines. The system may utilize the guidelines and algorithm output to drive the interface and interaction tools, with automatically configured layouts and viewpoints. The information may subsequently be displayed on 2D and 3D displays.”)
However, Jacobs does not explicitly teach, but Sutton teaches:
applying one or more artificial intelligence (AI) algorithms to …image volume to detect a pathology in the anatomy; ([0039], “Detection can also or alternatively be done automatically as described above by a neural network or deep learning software which has been trained to detect suspect pathology in images of the subject anatomy.”)
Jacobs teaches detecting a pathology in an anatomy; Sutton teaches using AI technology to perform this function.
It would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to have applied the AI technology of Sutton to the pathology detection of Jacobs to generate more accurate detection results.
Regarding claim 2, Jacobs in view of Sutton teaches:
The method of claim 1, wherein the extracted information includes one or more of: a position of the pathology; a severity of the pathology; a size of the pathology; dimensions of the pathology; an indication of a system of a body of the patient affected by the pathology; and an indication of an organ of the patient affected by the pathology. (Jacobs [0103], “Next, the metrics are matched to their corresponding criteria representing clinically relevant cut-off values regarding the metrics. In this embodiment, degrees of vascular involvement of the SMV=58<90° contact with the tumor. This is repeated for other relevant criteria, e.g. superior mesenteric artery (SMA), common hepatic artery (CHA), superior mesenteric vein (SMV) and portal vein (PV). Guidance may be derived based on evaluation of the criteria as indicated in the codified guidelines, e.g. tumor is borderline resectable because of no contact with SMA, CHA, PV, and <90° contact with SMV.” This discloses at least a severity of the pathology.)
Regarding claim 3, Jacobs in view of Sutton teaches:
The method of claim 1, wherein the customized rendering parameters include further parameters for rendering one or more of: a transparency/opacity of the first anatomical region or a second anatomical region of the one or more anatomical regions; a texture of a surface of the first anatomical region or a second anatomical region of the one or more anatomical regions; and a camera field of view of the image volume. (Jacobs [0104], “Next, the system calls the layout rendering and interaction manager to match the guidance with the corresponding layout, camera viewpoints and levels of transparency of anatomical structures most relevant to view the guidance as stored in the algorithm visualization repository.”)
Regarding claim 6, Jacobs in view of Sutton teaches:
The method of claim 3, further comprising rendering the texture of the surface of the first anatomical region as a mesh that allows the pathology to be visible without being obscured or clouded by the first anatomical region, to indicate that the first anatomical region is affected by the pathology. (Jacobs [0181], FIG. 7)
Regarding claim 8, Jacobs in view of Sutton teaches:
The method of claim 3, wherein: in response to the second anatomical region being unaffected by the pathology, rendering the second anatomical region with a degree of transparency selected to minimize a visibility of the second anatomical region. (Jacobs [0181], “In practice, one or more of the following steps are performed: [0182] a. All non-relevant structures are removed from the 3D model. For example, a vessel that is further away and has no connection or intervening with the tumor is removed”)
Regarding claim 9, Jacobs in view of Sutton teaches:
The method of claim 3, further comprising using a plurality of rendering models to calculate the customized rendering parameters, the plurality of rendering models including: a first rendering model for determining color rendering parameters; (Jacobs [0142]-[0143], “FIG. 7 shows an example of a tumor that is in contact with an angled blood vessel. In the top part marked A, a blood vessel 310 is shown in contact with a tumor 320. Slicing directions are indicated by dashed lines, and resulting slices in left-bottom part B and in right-bottom part C. Part A shows 3D volumetric data representing an angled blood vessel (dark grey) in contact with a tumor (white).”. FIG. 1)
a second rendering model for determining transparency rendering parameters; and a third rendering model for determining texture rendering parameters. (Jacobs [0181], “In an embodiment, the processor subsystem is configured to display the 3D anatomical model while at least one of the following 3D image processing acts is performed: removing non-relevant structures from the 3D model based on the use case; making the anatomical object (semi) transparent; choosing a viewpoint that shows a suitable angle to evaluate a tumor-vessel contact; making a footprint visible to clarify the area of contact between tumor and blood vessel and trajectory length of a tumor-vessel contact; adapting lighting settings for the 3D model visualization to highlight the tumor-vessel contact. Also, a potential change in material settings of the 3D model (segments), or a different shader (visual) effect may be applied, or a change of virtual camera settings like focus depth, or additional texture and the like may be added.” FIG. 1)
Regarding claim 10, Jacobs in view of Sutton teaches:
The method of claim 9, further comprising reformatting an output of the one or more AI algorithms to a standardized input format of one or more of the first rendering model, the second rendering model, and the third rendering model. (Sutton [0039], “Detection can also or alternatively be done automatically as described above by a neural network or deep learning software which has been trained to detect suspect pathology in images of the subject anatomy. In step 140, the ultrasound and NIRS images are merged into one 3D dataset as by the process illustrated in FIG. 10. Alternately, the ultrasound and NIRS images can be shown separately, such as side-by-side or in alternation. In step 142 a region of suspect pathology is selected in the image data. This can be done manually by the clinician clicking on suspect pathology in an image with a pointing device such as a mouse or trackball or outlining the suspect pathology with the pointing device. An image of suspect pathology of one or both modalities is then displayed in step 144. Since the ultrasound and NIRS images are displayed in the common coordinate system, the coordinates of a selected image region can be used to delineate and select an image of the suspect pathology from the image data of the other modality. The suspect pathology may be highlighted in one or both of the images for quick identification and diagnosis. In step 146 diagnostic data related to the suspect pathology detected in the images is displayed to the user, as illustrated in FIG. 9. This data may be coordinate data or diagnostic data acquired by one or both of the imaging modalities such as oxyhemoglobin measurements produced by the NIRS system and Doppler flow velocities produced by Doppler techniques in the ultrasound system.” The combination of claim 1 is incorporated here. Furthermore, a standardized input format makes the rendering of the image a smoother process.)
Regarding claim 12, Jacobs in view of Sutton teaches:
The method of claim 1, wherein calculating the different, customized rendering parameters for the pathology findings further comprises calculating a plurality of different combinations of customized rendering parameters, and enabling a selection of one or more combinations of the plurality of different combinations to render the image volume.( Jacobs [0181] “In an embodiment, the processor subsystem is configured to display the 3D anatomical model while at least one of the following 3D image processing acts is performed: removing non-relevant structures from the 3D model based on the use case; making the anatomical object (semi) transparent; choosing a viewpoint that shows a suitable angle to evaluate a tumor-vessel contact; making a footprint visible to clarify the area of contact between tumor and blood vessel and trajectory length of a tumor-vessel contact; adapting lighting settings for the 3D model visualization to highlight the tumor-vessel contact. Also, a potential change in material settings of the 3D model (segments), or a different shader (visual) effect may be applied, or a change of virtual camera settings like focus depth, or additional texture and the like may be added. In practice, one or more of the following steps are performed: [0182] a. All non-relevant structures are removed from the 3D model. For example, a vessel that is further away and has no connection or intervening with the tumor is removed, [0183] b. The pancreas is made transparent or partially transparent, to have a clear view on the tumor and vessel structure that is selected, [0184] c. a viewpoint is chosen, that shows the most optimal angle to evaluate the tumor-vessel contact, [0185] d. the footprint can be made visible to clarify tumor-vessel contact area and trajectory length of tumor-vessel contact, [0186] e. possibly the lighting settings or other rendering settings for the 3D model visualization are adapted to highlight the tumor-vessel contact. 
These settings may include for example: light position, direction and intensity, and may include multiple light sources.”)
Regarding claim 13, Jacobs in view of Sutton teaches:
The method of claim 12, wherein a first combination of customized rendering parameters is calculated for viewing the pathology, (Jacobs [0181] “In an embodiment, the processor subsystem is configured to display the 3D anatomical model while at least one of the following 3D image processing acts is performed: removing non-relevant structures from the 3D model based on the use case; making the anatomical object (semi) transparent; choosing a viewpoint that shows a suitable angle to evaluate a tumor-vessel contact; making a footprint visible to clarify the area of contact between tumor and blood vessel and trajectory length of a tumor-vessel contact; adapting lighting settings for the 3D model visualization to highlight the tumor-vessel contact. Also, a potential change in material settings of the 3D model (segments), or a different shader (visual) effect may be applied, or a change of virtual camera settings like focus depth, or additional texture and the like may be added. In practice, one or more of the following steps are performed: [0182] a. All non-relevant structures are removed from the 3D model. For example, a vessel that is further away and has no connection or intervening with the tumor is removed, [0183] b. The pancreas is made transparent or partially transparent, to have a clear view on the tumor and vessel structure that is selected, [0184] c. a viewpoint is chosen, that shows the most optimal angle to evaluate the tumor-vessel contact, [0185] d. the footprint can be made visible to clarify tumor-vessel contact area and trajectory length of tumor-vessel contact, [0186] e. possibly the lighting settings or other rendering settings for the 3D model visualization are adapted to highlight the tumor-vessel contact. These settings may include for example: light position, direction and intensity, and may include multiple light sources.”) and a second combination of customized rendering parameters is calculated for viewing a second pathology of the anatomy. 
(The method of Jacobs can be used to examine different organs to find different pathologies. When the method is applied to a different organ or pathology scenario, the combination of parameters of [0181]-[0186] is chosen to provide an optimal view.)
Regarding claim 14, Jacobs teaches:
An image processing system, comprising: a processor, and a memory including instructions that when executed, cause the processor to ([0191], “As illustrated in FIG. 10, instructions for the computer, e.g., executable code, may be stored on a computer readable medium 800,”):
receive an image volume of a patient from a medical imaging system; ([0074], “The system further has an image data interface 120 for accessing medical image data of a patient from a medical image data repository, shown as a box MI, which contains one or more medical imaging datasets of patients.”)
detect a pathology in the image volume ([0103], “Next, the metrics are matched to their corresponding criteria representing clinically relevant cut-off values regarding the metrics. In this embodiment, degrees of vascular involvement of the SMV=58<90° contact with the tumor. This is repeated for other relevant criteria, e.g. superior mesenteric artery (SMA), common hepatic artery (CHA), superior mesenteric vein (SMV) and portal vein (PV). Guidance may be derived based on evaluation of the criteria as indicated in the codified guidelines, e.g. tumor is borderline resectable because of no contact with SMA, CHA, PV, and <90° contact with SMV.”) and
prior to rendering the image volume for display: ([0092], “Optionally, the processor subsystem is configured to select the algorithm from the repository containing visualization algorithms for visualization of the output data. The visualization algorithms may generate display data to show the output of the system while highlighting relevant clinical aspects, for example based on codified guidelines and available algorithm output, e.g. viewpoints in the 3D anatomical model, footprint representing contact area between segmented tumor and vessel.”) extract clinical information about the pathology from an output of the algorithms; ([0103], “Next, the metrics are matched to their corresponding criteria representing clinically relevant cut-off values regarding the metrics. In this embodiment, degrees of vascular involvement of the SMV=58<90° contact with the tumor. This is repeated for other relevant criteria, e.g. superior mesenteric artery (SMA), common hepatic artery (CHA), superior mesenteric vein (SMV) and portal vein (PV). Guidance may be derived based on evaluation of the criteria as indicated in the codified guidelines, e.g. tumor is borderline resectable because of no contact with SMA, CHA, PV, and <90° contact with SMV.”)
calculate customized rendering parameters for each of the pathology and each of the one or more organ and/or systems surrounding the pathology, based on the extracted clinical information; ([0086], “Optionally, the processor subsystem has an algorithm matching lookup table for selecting the algorithm from the repository. The selection in the lookup table may be based on at least one of the determined anatomical object; the use case; the codified clinical guidelines; the metrics; the criteria; the guidance.”[0104], “Next, the system calls the layout rendering and interaction manager to match the guidance with the corresponding layout, camera viewpoints and levels of transparency of anatomical structures most relevant to view the guidance as stored in the algorithm visualization repository.”)
wherein the customized rendering parameters include a first, lower transparency for the pathology, a second, higher transparency for a first organ affected by the pathology, ([0181], “The pancreas is made transparent or partially transparent, to have a clear view on the tumor and vessel structure that is selected”) and a surface mesh for the first organ; (FIG. 7, part A, the different surface mesh.) and
render the image volume on a display device, based on the customized rendering parameters. ([0093]-[0094], “Optionally, the processor subsystem comprises a layout rendering and interaction manager for selecting the visualization algorithm matching the output data as provided by the analytic algorithm for assessing the clinical question. The manager may render assessment panels and provide interaction options in line with the codified guidelines and available algorithm output. The system may integrate codified guidelines containing evaluation criteria that describe relevant anatomical structures and metrics. It subsequently integrates a relevant set of algorithms that are applied to medical imaging data and provide output in line with these guidelines. The system may utilize the guidelines and algorithm output to drive the interface and interaction tools, with automatically configured layouts and viewpoints. The information may subsequently be displayed on 2D and 3D displays.”)
However, Jacobs does not explicitly teach, but Sutton teaches:
detect a pathology in the image volume using an artificial intelligence (AI) algorithm; ([0039], “Detection can also or alternatively be done automatically as described above by a neural network or deep learning software which has been trained to detect suspect pathology in images of the subject anatomy.”)
Jacobs teaches detecting a pathology in an anatomy; Sutton teaches using AI technology to perform this function.
It would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to have applied the AI technology of Sutton to the pathology detection of Jacobs to generate more accurate detection results.
Regarding claim 15, Jacobs in view of Sutton teaches:
The image processing system of claim 14, wherein further instructions are stored in the memory that when executed, cause the processor to reformat the output of the AI algorithm to a standardized input format of a plurality of rendering models of the image processing system, (Sutton [0039], “Detection can also or alternatively be done automatically as described above by a neural network or deep learning software which has been trained to detect suspect pathology in images of the subject anatomy. In step 140, the ultrasound and NIRS images are merged into one 3D dataset as by the process illustrated in FIG. 10. Alternately, the ultrasound and NIRS images can be shown separately, such as side-by-side or in alternation. In step 142 a region of suspect pathology is selected in the image data. This can be done manually by the clinician clicking on suspect pathology in an image with a pointing device such as a mouse or trackball or outlining the suspect pathology with the pointing device. An image of suspect pathology of one or both modalities is then displayed in step 144. Since the ultrasound and NIRS images are displayed in the common coordinate system, the coordinates of a selected image region can be used to delineate and select an image of the suspect pathology from the image data of the other modality. The suspect pathology may be highlighted in one or both of the images for quick identification and diagnosis. In step 146 diagnostic data related to the suspect pathology detected in the images is displayed to the user, as illustrated in FIG. 9. This data may be coordinate data or diagnostic data acquired by one or both of the imaging modalities such as oxyhemoglobin measurements produced by the NIRS system and Doppler flow velocities produced by Doppler techniques in the ultrasound system.” The combination of claim 14 is incorporated here. Furthermore, a standardized input format makes the rendering of the image a smoother process.)
each rendering model of the plurality of rendering models used to calculate one or more customized rendering parameters relating to one of: a transparency/opacity of a respective organ or system; a color of the pathology; a color of a respective organ, or system; and a texture of a surface of a respective organ or system. (Jacobs [0181], “In an embodiment, the processor subsystem is configured to display the 3D anatomical model while at least one of the following 3D image processing acts is performed: removing non-relevant structures from the 3D model based on the use case; making the anatomical object (semi) transparent; choosing a viewpoint that shows a suitable angle to evaluate a tumor-vessel contact; making a footprint visible to clarify the area of contact between tumor and blood vessel and trajectory length of a tumor-vessel contact; adapting lighting settings for the 3D model visualization to highlight the tumor-vessel contact. Also, a potential change in material settings of the 3D model (segments), or a different shader (visual) effect may be applied, or a change of virtual camera settings like focus depth, or additional texture and the like may be added.”)
Regarding claim 16, Jacobs in view of Sutton teaches:
The image processing system of claim 14, wherein the customized rendering parameters further include the second transparency or a third transparency for a second organ unaffected by the pathology. (Jacobs [0181], “The pancreas is made transparent or partially transparent, to have a clear view on the tumor and vessel structure that is selected”)
Regarding claim 17, Jacobs in view of Sutton teaches:
The image processing system of claim 16, wherein the customized rendering parameters further include a first color for the pathology, a second color for the first organ, and a third color for the second organ. (Jacobs [0142]-[0143], “FIG. 7 shows an example of a tumor that is in contact with an angled blood vessel. In the top part marked A, a blood vessel 310 is shown in contact with a tumor 320. Part A shows 3D volumetric data representing an angled blood vessel (dark grey) in contact with a tumor (white).” Jacobs teaches marking the pathology with one color and the affected organ with a second color. It would be a design choice to use a third color for an unaffected organ. It would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to have modified the teachings of Jacobs in view of Sutton with this design choice to help users easily visualize parts of an image with different conditions.)
Regarding claim 18, Jacobs in view of Sutton teaches:
The image processing system of claim 14, wherein the customized rendering parameters include a camera angle for displaying the image volume on the display, the camera angle calculated to minimize an overlap between the pathology and the one or more organs and/or systems surrounding the pathology. (Jacobs [0181], “In an embodiment, the processor subsystem is configured to display the 3D anatomical model while at least one of the following 3D image processing acts is performed: removing non-relevant structures from the 3D model based on the use case; making the anatomical object (semi) transparent; choosing a viewpoint that shows a suitable angle to evaluate a tumor-vessel contact;” [0184], “c. a viewpoint is chosen, that shows the most optimal angle to evaluate the tumor-vessel contact,”)
Claims 4-5 are rejected under 35 U.S.C. 103 as being unpatentable over Jacobs in view of Sutton and further in view of Ash et al. (US 2013/0158968 A1).
Regarding claim 4, Jacobs in view of Sutton teaches:
The method of claim 1,
However, Jacobs in view of Sutton does not, but Ash teaches:
wherein the first color is selected or calculated to indicate a severity of the pathology, and the second color is selected or calculated to indicate whether the first anatomical region is affected by the pathology. ([0061], “Additionally, the second body-image representation 314 may be outlined with a certain color to indicate the increased risk of Diabetes Mellitus Type II with the increased weight gain. Additionally, visual indicators may appear that reflect organs and/or systems affected by the weight gain. Element 510 of FIG. 5 illustrates a slidable control that may be used to modify the variable 320.” [0075], “Further, the visual indicators 412 may be color-coded to indicate a status of the affected organ and/or system. Various colors could be used to indicate, for example, a "critical" status, a "currently managing" status, a "needs attention" status, and an "at risk" status.”)
Jacobs in view of Sutton teaches using different colors to represent different regions, which may or may not be pathology regions. Ash teaches using colors to represent affected regions, with different colors representing different severities.
It would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to have combined the teachings of Jacobs in view of Sutton with the specific teachings of Ash to provide users with more information.
Regarding claim 5, Jacobs in view of Sutton and Ash teaches:
The method of claim 4, wherein: the first color is selected from a first color gradient between a first reference color indicating a higher severity of the pathology, and a second reference color indicating a lower severity of the pathology; and the second color is selected from a second color gradient between a third reference color indicating that the first anatomical region is more affected by the pathology, and a fourth reference color indicating that the first anatomical region is less affected by the pathology. (Ash, as shown in claim 4 above, teaches using a first color to indicate the severity of the pathology and a second color to indicate a region being affected by the pathology. It would be a design choice as to how to select the first color and the second color, e.g., using a color gradient between an upper color limit and a lower color limit to indicate the severity level, e.g., on a scale of 10-0, and using a different color gradient between an upper color limit and a lower color limit to indicate the degree to which a region is affected, e.g., on a scale of 10-0. It would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to have combined the teachings of Jacobs in view of Sutton and Ash. The benefit would be to give viewers more detailed and accurate information about the status of a pathology region.)
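For illustration only, the design choice discussed above, selecting each color from a gradient between two reference colors according to a 0-10 level, can be sketched as a simple linear interpolation. The reference RGB values below are hypothetical examples; the claims and the cited references do not specify particular colors or an interpolation method.

```python
def gradient_color(level, low_color, high_color, max_level=10):
    """Linearly interpolate between two reference RGB colors based on
    a level on a 0-10 scale (e.g., severity or degree of involvement)."""
    t = max(0.0, min(1.0, level / max_level))
    return tuple(round(lo + t * (hi - lo)) for lo, hi in zip(low_color, high_color))

# First gradient: severity of the pathology (green = lower, red = higher).
severity_color = gradient_color(7, (0, 200, 0), (220, 0, 0))
# Second gradient: degree to which the region is affected (blue = less, yellow = more).
affected_color = gradient_color(3, (0, 0, 220), (230, 230, 0))
```

A level of 0 yields the first reference color, a level of 10 yields the second, and intermediate levels blend proportionally between the two limits.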
Allowable Subject Matter
Claims 7 and 11 are objected to as being dependent upon a rejected base claim, but would be allowable if rewritten in independent form including all of the limitations of the base claim and any intervening claims.
The following is a statement of reasons for the indication of allowable subject matter: none of the references alone or in combination teaches the limitations of “using a single rendering model to calculate the customized rendering parameters, the single rendering model performing a global grid search on a set of rendering parameters, and iteratively calculating individual visibility scores of each combination of rendering parameters of the set of rendering parameters; wherein the image volume is rendered in accordance with a combination of rendering parameters having a highest visibility score,” recited in claim 11, and “calculating the camera angle further comprises: performing a grid search over a plurality of camera angles at set increments, and calculating a visibility score for each camera angle of the plurality of camera angles; and selecting a camera angle with a highest visibility score,” recited in claim 7.
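For illustration only, the grid-search selection of a camera angle recited in claim 7 can be sketched as follows. The `visibility_score` function below is a hypothetical stand-in; the claims do not specify how the visibility score is computed, only that a score is calculated for each candidate angle and the angle with the highest score is selected.

```python
import math

def visibility_score(angle_deg):
    # Hypothetical stand-in: a real renderer would measure how little
    # the pathology is occluded by surrounding organs at this angle.
    # This placeholder simply peaks at 135 degrees.
    return math.cos(math.radians(angle_deg - 135.0))

def select_camera_angle(increment_deg=15):
    # Grid search over a plurality of camera angles at set increments,
    # calculating a visibility score for each and keeping the angle
    # with the highest score.
    best_angle, best_score = None, float("-inf")
    for angle in range(0, 360, increment_deg):
        score = visibility_score(angle)
        if score > best_score:
            best_angle, best_score = angle, score
    return best_angle, best_score
```

With the placeholder score above, the search returns the 135-degree viewpoint, since that is where the stand-in function peaks.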
Claim 19 is allowed.
The following is an examiner’s statement of reasons for allowance: none of the references alone or in combination teaches the limitations recited in claim 19 as a whole.
Any comments considered necessary by applicant must be submitted no later than the payment of the issue fee and, to avoid processing delays, should preferably accompany the issue fee. Such submissions should be clearly labeled “Comments on Statement of Reasons for Allowance.”
Conclusion
Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to YANNA WU whose telephone number is (571)270-0725. The examiner can normally be reached Monday-Thursday 8:00-5:30 ET.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Alicia Harrington, can be reached at 571-272-2330. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/YANNA WU/ Primary Examiner, Art Unit 2615