Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Claim Rejection – 35 USC § 112(b)
The following is a quotation of 35 U.S.C. 112(b):
(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.
The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:
The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.
Claim 1 is rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or for applications subject to pre-AIA 35 U.S.C. 112, the applicant), regards as the invention. Claim 1 recites the limitation "the medical image" in line 10. There is insufficient antecedent basis for this limitation in the claim. In the interest of compact prosecution, the examiner will interpret "the medical image" as "a medical image."
Claim Rejections – 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claim(s) 1, 4, 11, and 14 is/are rejected under 35 U.S.C. 103 as being unpatentable over Brynolfsson et al. (US 20230115732 A1), hereinafter referenced as Brynolfsson.
Regarding claim 1, Brynolfsson teaches the following:
A medical image display method, comprising:
"methods for creation, analysis, and/or presentation of medical image data." (Brynolfsson, [02])
receiving a plurality of medical reference images corresponding to a plurality of image sources in different formats;
"(c) receiving, by the processor, a 3D pelvic atlas image comprising [12] … the 3D pelvic atlas image received at step (c) is selected from a set of multiple prospective 3D pelvic atlas images (e.g., multiple different 3D pelvic atlas image options, e.g., reflecting different subject body types, height/weights ranges, etc.) [16]" (Brynolfsson, [12, 16])
The multiple prospective 3D pelvic atlas images received by the processor read on receiving a plurality of reference images.
performing image co-registration and image segmentation based on the plurality of medical reference images to generate at least one co-registered image mask;
"(b) segmenting, by the processor, the 3D anatomical image of the subject to identify representations of one or more pelvic bones within the 3D anatomical image of the subject, thereby creating a 3D segmentation map aligned with the 3D anatomical image of the subject … (d) transforming (e.g., applying a coordinate transform to), by the processor, the 3D pelvic atlas image to co-register it with the 3D segmentation map using (i) the one or more reference pelvic bone regions identified within the pelvic atlas image and (ii) the one or more pelvic bone regions of the 3D segmentation map (e.g., as landmarks), thereby creating a transformed 3D pelvic atlas image comprising the identified one or more pelvic lymph sub-regions thereby aligned to the 3D anatomical image and segmentation" (Brynolfsson, [12])
Brynolfsson teaches performing co-registration (alignment) and segmentation based on the plurality of atlas images to generate a transformed atlas image.
performing an image overlay according to the at least one co-registered image mask and at least one of the plurality of medical reference images to generate an overlay image;
"by transforming the pelvic atlas image to register it with the segmentation map, the pelvic lymph regions of the transformed pelvic atlas image are thereby aligned to the 3D anatomical image. Where the 3D anatomical image is (e.g., also) aligned with a 3D functional image, such as a PET image (e.g., as in a PET/CT composite image), hotspots within the 3D functional image can be identified as being located within, overlapping with and/or in close proximity to particular pelvic lymph regions." (Brynolfsson, [11])
Brynolfsson teaches performing an image overlay according to the at least one transformed atlas image and at least one of the plurality of atlas images to generate a 3D anatomical image.
However, Brynolfsson does not teach displaying a medical image according to the overlay image in the same embodiment.
In a later embodiment, Brynolfsson does teach the following:
and displaying the medical image according to the overlay image.
"Segmentation maps and masks may also be displayed, for example as a graphical representation overlaid on a medical image to guide physicians and other medical practitioners." (Brynolfsson, [151])
Segmentation maps and masks comprise segmented medical images and atlas images. Brynolfsson teaches that segmentation maps and masks can be displayed in accordance with an overlay.
It is recognized that the citations and evidence provided above are derived from potentially different embodiments of a single reference. Nevertheless, it would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains to modify the primary embodiment of Brynolfsson to display the medical image according to the overlay image, as taught in the later embodiment of Brynolfsson, in order to improve the identification and diagnosis of medical illnesses and otherwise motivate experimentation and optimization.
Claim(s) 11 is/are rejected using the same rationale or bases as applied to claim 1, in addition to the structure addressed below.
Additionally, claim 11 recites the following structure:
An electronic device, comprising: a display; and a storage device configured to store at least one image segmentation model; and a processor coupled to the display and the storage device
“In certain embodiments, systems and methods described herein utilize an approach that combines direct (e.g., machine learning-based) segmentation of a 3D anatomical image with an atlas image approach [10] … Machine learning module: As used herein, the term “machine learning module” refers to a computer implemented process (e.g., function) that implements one or more specific machine learning algorithms in order to determine, for a given input (such as an image (e.g., a 2D image; e.g., a 3D image), dataset, and the like) one or more output values [123] … The computing device 3100 includes a processor 3102, a memory 3104, a storage device 3106, a high-speed interface 3108 connecting to the memory 3104 and multiple high-speed expansion ports 3110, and a low-speed interface 3112 connecting to a low-speed expansion port 3114 and the storage device 3106. Each of the processor 3102, the memory 3104, the storage device 3106, the high-speed interface 3108, the high-speed expansion ports 3110, and the low-speed interface 3112, are interconnected using various busses, and may be mounted on a common motherboard or in other manners as appropriate. The processor 3102 can process instructions for execution within the computing device 3100, including instructions stored in the memory 3104 or on the storage device 3106 to display graphical information for a GUI on an external input/output device, such as a display 3116 coupled to the high-speed interface 3108" (Brynolfsson, [287])
"segmenting, by the processor" (Brynolfsson, [12])
"In certain embodiments, systems and methods described herein utilize an approach that combines direct (e.g., machine learning-based) segmentation of a 3D anatomical image with an atlas image approach [10] … Machine learning module: As used herein, the term “machine learning module” refers to a computer implemented process (e.g., function) that implements one or more specific machine learning algorithms in order to determine, for a given input (such as an image (e.g., a 2D image; e.g., a 3D image), dataset, and the like) one or more output values" (Brynolfsson, [10, 123])
Brynolfsson teaches a computing device which includes a processor, a storage device, and a display. The processor executes instructions for segmentation. The storage device stores instructions which are to be executed by the processor. Thus, the storage device stores at least one image segmentation model.
Regarding claim 4, Brynolfsson teaches the method of claim 1 and the following:
inputting the one of the plurality of medical reference images to a plurality of corresponding image segmentation models to generate a plurality of image masks;
"In certain embodiments, systems and methods described herein utilize an approach that combines direct (e.g., machine learning-based) segmentation of a 3D anatomical image with an atlas image approach [10] … segmenting, by the processor, the 3D anatomical image of the subject to identify representations of one or more pelvic bones within the 3D anatomical image of the subject, thereby creating a 3D segmentation map… an identification (e.g., one or more segmentation masks; e.g., one or more 3D segmentation masks, e.g., a segmentation map) of one or more pelvic lymph sub-regions in the 3D pelvic atlas image [12] … Machine learning module: As used herein, the term “machine learning module” refers to a computer implemented process (e.g., function) that implements one or more specific machine learning algorithms in order to determine, for a given input (such as an image (e.g., a 2D image; e.g., a 3D image), dataset, and the like) one or more output values [123]" (Brynolfsson, [10, 12, 123])
Brynolfsson teaches inputting the one of the plurality of atlas images to a plurality of corresponding machine learning algorithms to generate a plurality of segmentation maps/masks.
and using the one of the plurality of medical reference images as a reference image,
"atlas images (e.g., multiple different 3D pelvic atlas image options, e.g., reflecting different subject body types, height/weights ranges, etc.) [16] ... a reference pelvic bone region comprising the one or more reference pelvic bone regions identified within the particular prospective pelvic atlas image" (Brynolfsson, [16, 23])
Brynolfsson teaches using the one of the plurality of atlas images as a reference image.
and performing co-registration on at least another one of the plurality of image masks to the reference image to generate the at least one co-registered image mask.
"the 3D pelvic atlas image to co-register it with the 3D segmentation map using (i) the one or more reference pelvic bone regions identified within the pelvic atlas image and (ii) the one or more pelvic bone regions of the 3D segmentation map (e.g., as landmarks), thereby creating a transformed 3D pelvic atlas image" (Brynolfsson, [29])
Brynolfsson teaches performing co-registration on at least another one of the plurality of segmentation masks to the reference image to generate the at least one transformed atlas image.
However, Brynolfsson fails to teach all of the above limitations in a single embodiment.
It is recognized that the citations and evidence provided above are derived from potentially different embodiments of a single reference. Nevertheless, it would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains to modify the primary embodiment of Brynolfsson to use the one of the plurality of atlas images as a reference image and perform co-registration on at least another one of the plurality of segmentation masks to the reference image to generate the at least one transformed atlas image, as taught in the other embodiment of Brynolfsson, in order to improve the identification of medical issues/illnesses and otherwise motivate experimentation and optimization.
Claim(s) 14 is/are rejected using the same rationale or bases as applied to claim 4.
Claim(s) 2 and 12 is/are rejected under 35 U.S.C. 103 as being unpatentable over Brynolfsson et al. (US 20230115732 A1), hereinafter referenced as Brynolfsson, in view of Kim (Jointly Aligning and Segmenting Multiple Web Photo Streams), hereinafter referenced as Kim.
Regarding claim 2, Brynolfsson teaches the method of claim 1 and the following:
using the one of the plurality of medical reference images as a reference image,
"atlas images (e.g., multiple different 3D pelvic atlas image options, e.g., reflecting different subject body types, height/weights ranges, etc.) [16] ... a reference pelvic bone region comprising the one or more reference pelvic bone regions identified within the particular prospective pelvic atlas image" (Brynolfsson, [16, 23])
Brynolfsson teaches using the one of the plurality of atlas images as a reference image.
image segmentation model
"In certain embodiments, systems and methods described herein utilize an approach that combines direct (e.g., machine learning-based) segmentation of a 3D anatomical image with an atlas image approach [10] … Machine learning module: As used herein, the term “machine learning module” refers to a computer implemented process (e.g., function) that implements one or more specific machine learning algorithms in order to determine, for a given input (such as an image (e.g., a 2D image; e.g., a 3D image), dataset, and the like) one or more output values" (Brynolfsson, [10, 123])
Brynolfsson teaches segmenting images via a machine learning module.
However, Brynolfsson fails to teach performing co-registration on images followed by segmenting such images to create the at least one co-registered image mask.
Kim teaches the following:
performing co-registration on at least another one of the plurality of medical reference images to the reference image; and inputting at least one co-registered medical reference image to at least one corresponding image segmentation model to generate the at least one co-registered image mask.
"a method to jointly perform alignment of multiple photo streams and cosegmentation of aligned images, as shown in Fig.1. In the alignment step, images of different photo sets are matched based on visual contents and associated meta-data. The alignment is a core task to build a big picture of storylines from a large number of fragmented photo streams of individual users. In the cosegmentation step, the aligned images are segmented together in order to facilitate image understanding such as pixel-level classification in the images.” (Kim, page 2)
Kim teaches performing co-registration (alignment) on at least another one of the plurality of images to another image, and segmenting at least one of the aligned images to generate the at least one aligned segmented image.
Kim is analogous art with respect to Brynolfsson because they are from the same field of endeavor, namely image processing. Before the effective filing date of the claimed invention, it would have been obvious to a person of ordinary skill in the art to modify Brynolfsson to perform co-registration (alignment) on at least another one of the plurality of images to another image and to segment at least one of the aligned images to generate the at least one aligned segmented image, as taught by Kim, in order to improve grouping and visualization.
Claim(s) 12 is/are rejected using the same rationale or bases as applied to claim 2.
Claim(s) 3, 5, 13, and 15 is/are rejected under 35 U.S.C. 103 as being unpatentable over Brynolfsson et al. (US 20230115732 A1), hereinafter referenced as Brynolfsson, in view of Kim (Jointly Aligning and Segmenting Multiple Web Photo Streams), hereinafter referenced as Kim, and further in view of Kong (CN104933672A), hereinafter referenced as Kong.
Regarding claim 3, Brynolfsson teaches the method of claim 2 and the following:
reference images
"atlas images (e.g., multiple different 3D pelvic atlas image options, e.g., reflecting different subject body types, height/weights ranges, etc.) [16] ... a reference pelvic bone region comprising the one or more reference pelvic bone regions identified within the particular prospective pelvic atlas image" (Brynolfsson, [16, 23])
Brynolfsson teaches using the one of the plurality of atlas images as a reference image.
and performing co-registration on the at least another one of the plurality of medical reference images to the reference image.
"coarse registration comprises determining a registration transformation that aligns (i) a reference pelvic bone region comprising the one or more reference pelvic bone regions identified within the 3D pelvic atlas image [e.g., a reference pelvic bone mask and/or distance map created therefrom that represent a combined pelvic bone region comprising the one or more reference pelvic bones (together)] to (ii) a target pelvic bone region comprising the one or more pelvic bone regions of the 3D segmentation map [e.g., a reference pelvic bone mask and/or distance map created therefrom that represent a combined pelvic bone region comprising the one or more reference pelvic bone regions (together)];" (Brynolfsson, [22])
Brynolfsson teaches performing co-registration (alignment) on the at least another one of the plurality of atlas images to the reference image. Moreover, Brynolfsson shows how to align and co-register a reference pelvic bone region to a target pelvic bone region, which can consist of one or more reference pelvic bone regions.
However, Brynolfsson fails to teach how to adjust the image resolutions of two medical images to be the same.
Kong teaches the following:
adjusting an image resolution of the at least another one of the plurality of medical reference images to be the same as an image resolution of the reference image;
"fast convex optimization algorithm includes the following steps: adjusting the resolution of ultrasound and CT images to be the same;" (Kong, [1])
Kong teaches adjusting an image resolution of the at least another one of the plurality of medical reference images to be the same as an image resolution of the reference image.
Kong is analogous art with respect to Brynolfsson because they are from the same field of endeavor, namely medical imaging. Before the effective filing date of the claimed invention, it would have been obvious to a person of ordinary skill in the art to modify Brynolfsson to adjust an image resolution of the at least another one of the plurality of medical reference images to be the same as an image resolution of the reference image as taught by Kong in order to improve the accuracy of the medical image.
Claim(s) 13 is/are rejected using the same rationale or bases as applied to claim 3.
Regarding claim 5, Brynolfsson teaches the method of claim 4 and the following:
reference images
"atlas images (e.g., multiple different 3D pelvic atlas image options, e.g., reflecting different subject body types, height/weights ranges, etc.) [16] ... a reference pelvic bone region comprising the one or more reference pelvic bone regions identified within the particular prospective pelvic atlas image" (Brynolfsson, [16, 23])
Brynolfsson teaches using the one of the plurality of atlas images as a reference image.
image masks
"segmenting, by the processor, the 3D anatomical image of the subject to identify representations of one or more pelvic bones within the 3D anatomical image of the subject, thereby creating a 3D segmentation map… an identification (e.g., one or more segmentation masks; e.g., one or more 3D segmentation masks [12]" (Brynolfsson, [10, 12])
Brynolfsson teaches that atlas images can be segmented to create segmentation maps/masks.
and performing the co-registration on the at least another one of the plurality of image masks to the reference image.
"the 3D pelvic atlas image to co-register it with the 3D segmentation map using (i) the one or more reference pelvic bone regions identified within the pelvic atlas image and (ii) the one or more pelvic bone regions of the 3D segmentation map " (Brynolfsson, [29])
Brynolfsson teaches performing the co-registration on the at least another one of the plurality of segmentation maps to the reference image.
However, Brynolfsson fails to teach how to adjust the image resolutions of two medical images to be the same.
Kong teaches the following:
adjusting an image resolution of the at least another one of the plurality of image masks to be the same as an image resolution of the reference image;
"fast convex optimization algorithm includes the following steps: adjusting the resolution of ultrasound and CT images to be the same;" (Kong, [1])
Kong teaches adjusting an image resolution of the at least another one of the plurality of image masks to be the same as an image resolution of the reference image.
Kong is analogous art with respect to Brynolfsson because they are from the same field of endeavor, namely medical imaging. Before the effective filing date of the claimed invention, it would have been obvious to a person of ordinary skill in the art to modify Brynolfsson to adjust an image resolution of the at least another one of the plurality of image masks to be the same as an image resolution of the reference image as taught by Kong in order to improve the accuracy of the medical image.
Claim(s) 15 is/are rejected using the same rationale or bases as applied to claim 5.
Claim(s) 6, 7, 8, 16, 17, and 18 is/are rejected under 35 U.S.C. 103 as being unpatentable over Brynolfsson et al. (US 20230115732 A1), hereinafter referenced as Brynolfsson, in view of Shanbhag et al. (US 20240029415 A1), hereinafter referenced as Shanbhag.
Regarding claim 6, Brynolfsson teaches the method of claim 1 and the following:
co-registered image mask
"the 3D pelvic atlas image to co-register it with the 3D segmentation map using (i) the one or more reference pelvic bone regions identified within the pelvic atlas image and (ii) the one or more pelvic bone regions of the 3D segmentation map (e.g., as landmarks), thereby creating a transformed 3D pelvic atlas image" (Brynolfsson, [29])
Brynolfsson teaches performing co-registration (alignment) and segmentation based on the plurality of atlas images to generate a transformed atlas image, which reads on a co-registered image mask.
medical reference images
"the 3D pelvic atlas image received at step ( c) is selected from a set of multiple prospective 3D pelvic atlas images (e.g., multiple different 3D pelvic atlas image options, e.g., reflecting different subject body types, height/weights ranges, etc.)" (Brynolfsson, [16])
Brynolfsson teaches an atlas image selected from a set comprising different medical images.
However, Brynolfsson fails to explicitly teach overlaying a mask with a reference image.
Shanbhag teaches the following:
performing the image overlay according to image mask and the plurality of medical reference images to generate the overlay image.
"FIG. 9, an anatomy mask may be overlaid on the reference case" (Shanbhag, [98])
Shanbhag teaches performing the image overlay according to an anatomy mask and the reference case to generate the overlay image, as shown in FIG. 9 below.
[media_image1.png: FIG. 9 of Shanbhag, greyscale]
Shanbhag is analogous art with respect to Brynolfsson because they are from the same field of endeavor, namely medical imaging. Before the effective filing date of the claimed invention, it would have been obvious to a person of ordinary skill in the art to modify Brynolfsson to perform the image overlay according to an anatomy mask and the reference case to generate the overlay image as taught by Shanbhag in order to improve medical accuracy.
Claim(s) 16 is/are rejected using the same rationale or bases as applied to claim 6.
Regarding claim 7, Brynolfsson teaches the method of claim 1 and the following:
co-registered image masks
"the 3D pelvic atlas image to co-register it with the 3D segmentation map using (i) the one or more reference pelvic bone regions identified within the pelvic atlas image and (ii) the one or more pelvic bone regions of the 3D segmentation map (e.g., as landmarks), thereby creating a transformed 3D pelvic atlas image" (Brynolfsson, [29])
"Turning to FIG. 7 in certain embodiments, step 714 is repeated for multiple pelvic atlas images—that is, multiple pelvic atlas images 712 may be co-registered with segmentation map 710." (Brynolfsson, [197])
[media_image2.png: FIG. 7 of Brynolfsson, greyscale]
Brynolfsson teaches performing co-registration (alignment) and segmentation based on the plurality of atlas images to generate a transformed atlas image, which reads on a co-registered image mask. Brynolfsson also teaches that these steps can be repeated to create multiple transformed atlas images, which read on co-registered image masks.
medical reference images
"the 3D pelvic atlas image received at step ( c) is selected from a set of multiple prospective 3D pelvic atlas images (e.g., multiple different 3D pelvic atlas image options, e.g., reflecting different subject body types, height/weights ranges, etc.)" (Brynolfsson, [16])
Brynolfsson teaches an atlas image selected from a set comprising different medical images.
However, Brynolfsson fails to explicitly teach overlaying a mask with a reference image.
Shanbhag teaches the following:
performing the image overlay according to a plurality masks and the one of the plurality of medical reference images to generate the overlay image.
"FIG. 9, an anatomy mask may be overlaid on the reference case" (Shanbhag, [98])
Shanbhag teaches performing the image overlay according to an anatomy mask and the reference case to generate the overlay image, as shown in FIG. 9 below.
[media_image1.png: FIG. 9 of Shanbhag, greyscale]
Shanbhag is analogous art with respect to Brynolfsson because they are from the same field of endeavor, namely medical imaging. Before the effective filing date of the claimed invention, it would have been obvious to a person of ordinary skill in the art to modify Brynolfsson to perform the image overlay according to an anatomy mask and the reference case to generate the overlay image as taught by Shanbhag in order to improve medical accuracy.
Claim(s) 17 is/are rejected using the same rationale or bases as applied to claim 7.
Regarding claim 8, Brynolfsson teaches the method of claim 1 and the following:
co-registered image masks
"the 3D pelvic atlas image to co-register it with the 3D segmentation map using (i) the one or more reference pelvic bone regions identified within the pelvic atlas image and (ii) the one or more pelvic bone regions of the 3D segmentation map (e.g., as landmarks), thereby creating a transformed 3D pelvic atlas image" (Brynolfsson, [29])
"Turning to FIG. 7 in certain embodiments, step 714 is repeated for multiple pelvic atlas images—that is, multiple pelvic atlas images 712 may be co-registered with segmentation map 710." (Brynolfsson, [197])
[media_image2.png: FIG. 7 of Brynolfsson, greyscale]
Brynolfsson teaches performing co-registration (alignment) and segmentation based on the plurality of atlas images to generate a transformed atlas image, which reads on a co-registered image mask. Brynolfsson also teaches that these steps can be repeated to create multiple transformed atlas images, which read on co-registered image masks.
medical reference images
"the 3D pelvic atlas image received at step ( c) is selected from a set of multiple prospective 3D pelvic atlas images (e.g., multiple different 3D pelvic atlas image options, e.g., reflecting different subject body types, height/weights ranges, etc.)" (Brynolfsson, [16])
Brynolfsson teaches an atlas image selected from a set comprising different medical images.
However, Brynolfsson fails to explicitly teach overlaying a mask with a reference image.
Shanbhag teaches the following:
performing the image overlay according to a plurality masks and the one of the plurality of medical reference images to generate the overlay image.
"FIG. 9, an anatomy mask may be overlaid on the reference case" (Shanbhag, [98])
Shanbhag teaches performing the image overlay according to an anatomy mask and the reference case to generate the overlay image, as shown in FIG. 9 below.
[media_image1.png: FIG. 9 of Shanbhag, greyscale]
Shanbhag is analogous art with respect to Brynolfsson because they are from the same field of endeavor, namely medical imaging. Before the effective filing date of the claimed invention, it would have been obvious to a person of ordinary skill in the art to modify Brynolfsson to perform the image overlay according to an anatomy mask and the reference case to generate the overlay image as taught by Shanbhag in order to improve medical accuracy.
Claim(s) 18 is/are rejected using the same rationale or bases as applied to claim 8.
Claim(s) 9 and 19 is/are rejected under 35 U.S.C. 103 as being unpatentable over Brynolfsson et al. (US 20230115732 A1), hereinafter referenced as Brynolfsson, in view of Shanbhag et al. (US 20240029415 A1), hereinafter referenced as Shanbhag, and further in view of Ou (CN109157284A), hereinafter referenced as Ou.
Regarding claim 9, Brynolfsson teaches the method of claim 1 and the following:
medical images
"the 3D pelvic atlas image received at step (c) is selected from a set of multiple prospective 3D pelvic atlas images (e.g., multiple different 3D pelvic atlas image options, e.g., reflecting different subject body types, height/weights ranges, etc.)" (Brynolfsson, [16])
Brynolfsson teaches an atlas image selected from a set comprising different medical images.
However, Brynolfsson fails to teach an overlay image and the calculations needed for displaying an image in accordance with a three-dimensional display.
Shanbhag teaches an overlay image, as follows:
the overlay image.
"FIG. 9, an anatomy mask may be overlaid on the reference case" (Shanbhag, [98])
Shanbhag teaches an image overlay according to an anatomy mask and the reference case, as shown in FIG. 9 below.
[media_image1.png: FIG. 9 of Shanbhag, greyscale]
Shanbhag is analogous art with respect to Brynolfsson because they are from the same field of endeavor, namely medical imaging. Before the effective filing date of the claimed invention, it would have been obvious to a person of ordinary skill in the art to modify Brynolfsson to perform the image overlay according to an anatomy mask and the reference case to generate the overlay image as taught by Shanbhag in order to improve medical accuracy.
Brynolfsson in view of Shanbhag fails to teach the calculations needed for displaying an image in accordance with a three-dimensional display.
Ou teaches the following:
the image is volume data, and displaying the image
"In this embodiment, the volume data to be 3D modeled is obtained from the preprocessed image...synthesized with naked-eye stereoscopic imaging. A three-dimensional model of the brain can be seen on a naked-eye 3D display." (Ou, [73])
Ou teaches the preprocessed image is volume data, and displaying the image on a naked-eye 3D display.
calculating a plurality of light paths corresponding to a three-dimensional display;
"The ray simulation of the data extraction is that a ray is projected from the viewpoint position onto the bounding box. The ray passing through the bounding box is equivalent to the ray passing through the volume data." (Ou, [73])
Ou teaches calculating a plurality of rays corresponding to a bounding box.
matching the plurality of light paths with eye coordinates to determine a plurality of light projection paths;
"the direction of the ray projection can be uniquely determined based on the viewpoint position and the point coordinates on the surface of the bounding box, thereby determining the incident ray." (Ou, [73])
Ou teaches matching the plurality of rays with viewpoint positions to determine a plurality of incident rays.
determining a plurality of sampling data corresponding to a plurality of pixels of the three-dimensional display according to the overlay image and the plurality of light projection paths
" In this embodiment, the volume data to be 3D modeled is obtained from the preprocessed image ... the direction of the ray projection can be uniquely determined based on the viewpoint position and the point coordinates on the surface of the bounding box, thereby determining the incident ray … Then, the light is sampled. The so-called light resampling means sampling the light between the incident point and the exit point of the bounding box. During sampling, the color value and opacity value of each sampling point are calculated. Finally, the three-dimensional image is obtained by image synthesis" (Ou, [73])
Ou teaches determining a plurality of sampling data corresponding to the color value and opacity value of each sampling point of the bounding box according to the preprocessed image and the plurality of incident rays.
to generate display data; and displaying the medical image according to the display data.
"The depth calculation of the model is performed by the calculation plugin and synthesized with naked-eye stereoscopic imaging. A three-dimensional model of the brain can be seen on a naked-eye 3D display. " (Ou, [73])
Ou teaches generating naked-eye stereoscopic imaging and displaying a three-dimensional model of the brain on a naked-eye 3D display.
Ou is analogous art with respect to Brynolfsson in view of Shanbhag because they are from the same field of endeavor, namely medical imaging. Before the effective filing date of the claimed invention, it would have been obvious to a person of ordinary skill in the art to modify Brynolfsson in view of Shanbhag to use the preprocessed image as the volume data and display the image on a naked-eye 3D display; to calculate a plurality of rays corresponding to a bounding box; to match the plurality of rays with viewpoint positions to determine a plurality of incident rays; to determine a plurality of sampling data corresponding to the color value and opacity value of each sampling point of the bounding box according to the preprocessed image and the plurality of incident rays; and to generate naked-eye stereoscopic imaging and display a three-dimensional model of the brain on a naked-eye 3D display, as taught by Ou, in order to improve upon display data and diagnostic information.
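For context only, the ray-casting pipeline that Ou describes (projecting a ray from the viewpoint through the bounding box, sampling color and opacity values along the ray, and compositing them into the final image) can be sketched as follows. The transfer function, nearest-neighbor sampling, and function name below are illustrative assumptions, not details taken from Ou:

```python
import numpy as np

def cast_ray(volume, origin, direction, step=0.5, max_steps=64):
    """Front-to-back compositing of color/opacity samples along one ray.

    volume: 3D array of scalar densities in [0, 1]. Each sample's density
    is mapped to a grey color value and an opacity by a trivial transfer
    function, assumed here purely for illustration.
    """
    color, alpha = 0.0, 0.0
    pos = np.asarray(origin, dtype=float)
    d = np.asarray(direction, dtype=float)
    d = d / np.linalg.norm(d)
    for _ in range(max_steps):
        i, j, k = np.round(pos).astype(int)  # nearest-neighbor sampling
        if not (0 <= i < volume.shape[0] and 0 <= j < volume.shape[1]
                and 0 <= k < volume.shape[2]):
            break  # ray has exited the bounding box
        density = float(volume[i, j, k])
        sample_color = density        # grey color value of this sample point
        sample_alpha = density * 0.5  # opacity of this sample point
        # front-to-back "over" compositing of the sample into the pixel
        color += (1.0 - alpha) * sample_alpha * sample_color
        alpha += (1.0 - alpha) * sample_alpha
        if alpha >= 0.99:
            break  # early termination: pixel is effectively opaque
        pos += d * step
    return color, alpha
```

One such ray is cast per pixel of the display; the accumulated color and opacity of each ray form that pixel's display data.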
Claim(s) 19 is/are rejected using the same rationale or bases as applied to claim 9.
Claim(s) 10 is/are rejected under 35 U.S.C. 103 as being unpatentable over Brynolfsson et al. (US 20230115732 A1), hereinafter referenced as Brynolfsson, in view of Ou (CN109157284A), hereinafter referenced as Ou.
Regarding claim 10, Brynolfsson teaches the method of claim 1.
However, Brynolfsson fails to teach displaying a medical image by a naked-eye three-dimensional display.
But Ou does. Ou teaches the following:
the medical image is displayed by a naked-eye three-dimensional image display.
"A three-dimensional model of the brain can be seen on a naked-eye 3D display." (Ou, [73])
Ou teaches a three-dimensional model of the brain is displayed by a naked-eye three-dimensional image display.
Ou is analogous art with respect to Brynolfsson because they are from the same field of endeavor, namely medical imaging. Before the effective filing date of the claimed invention, it would have been obvious to a person of ordinary skill in the art to modify Brynolfsson to display a three-dimensional model of the brain by a naked-eye three-dimensional image display, as taught by Ou, in order to improve upon display data and diagnostic information.
Claim(s) 20 is/are rejected using the same rationale or bases as applied to claim 10.
Conclusion
Any inquiry concerning this communication or earlier communications from the examiner should be directed to DUNE N NGUYEN whose telephone number is (571)272-8919. The examiner can normally be reached M-TH 7:00AM - 5:00PM.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Devona E Faulk can be reached at (571) 272-7515. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/DUNE NGOC NGUYEN/Examiner, Art Unit 2618
/DEVONA E FAULK/Supervisory Patent Examiner, Art Unit 2618