Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Priority
This application currently names joint inventors. In considering patentability of the claims the examiner presumes that the subject matter of the various claims was commonly owned as of the effective filing date of the claimed invention(s) absent any evidence to the contrary. Applicant is advised of the obligation under 37 CFR 1.56 to point out the inventor and effective filing dates of each claim that was not commonly owned as of the effective filing date of the later invention in order for the examiner to consider the applicability of 35 U.S.C. 102(b)(2)(C) for any potential 35 U.S.C. 102(a)(2) prior art against the later invention.
Claim Rejections - 35 USC § 102
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:
A person shall be entitled to a patent unless –
(a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.
Claim(s) 1-4, 6 and 17-19 is/are rejected under 35 U.S.C. 102(a)(1) as being anticipated by US 2016/0163048 A1 to Yee et al., hereinafter, “Yee”.
Claim 1. Yee teaches A method comprising: [Abstract] the computer system segments the colon into subsegments based on an articulated object model that fits a tortuosity of the colon along a centerline of the colon, where the articulated object model…, FIG. 38
[0017] Another embodiment provides a method, which may be performed by the computer system.
receiving patient imaging comprising one or more voxels; [0169] process the CTC case images to extract a 3D model of the colon lumen. Examiner understands the 3D model to comprise voxels.
determining a lumen indicator based on the patient imaging; [0071] identify landmarks or reference objects in the data (such as anatomical features)
[0169] process the CTC case images to extract a 3D model of the colon lumen
representing the one or more voxels within a distance of the lumen indicator as one or more feature vectors; [0181] each voxel looks at the neighboring voxels above and below its current y position… to complete the colon lumen, [0182]
[0188] given a centerline and for any point on the centerline, define both a forward direction vector using this point, and its next neighbor point, and a backward direction vector using this point …Examiner understands the centerline to be the centerline of the colon lumen.
[0213] Table 4 presents values of the pre-specified distance from the centerline (D) in different segments of the colon… Examiner interprets values to be feature vectors.
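The distance-from-centerline representation cited above can be illustrated with a short sketch. This is illustrative only, not code from Yee; the function name, array layout, and threshold semantics are assumptions:

```python
import numpy as np

def voxels_near_centerline(voxel_coords, centerline_pts, max_dist):
    """Select voxels within max_dist of any centerline sample point.

    voxel_coords:   (N, 3) array of voxel (x, y, z) positions
    centerline_pts: (M, 3) array of points sampled along the centerline
    max_dist:       scalar cutoff, analogous to the pre-specified
                    distance "D" described in Yee [0213]
    Returns the selected voxel coordinates and each voxel's distance
    to its nearest centerline point.
    """
    # Pairwise distances from every voxel to every centerline point
    diffs = voxel_coords[:, None, :] - centerline_pts[None, :, :]
    dists = np.linalg.norm(diffs, axis=2)   # shape (N, M)
    nearest = dists.min(axis=1)             # distance to nearest point
    return voxel_coords[nearest <= max_dist], nearest
```

The per-voxel nearest-centerline distance is one example of a scalar feature that could populate a feature vector.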
generating, based on the one or more feature vectors, a cluster comprising at least one of the one or more voxels; [0071] … identify 3D objects in the data (such as the colon and, more generally, groups of voxels).
[0122] the DICOM image object may be processed to identify different tissue classes (such as organ segments, vessels, etc.) as binary 3D collections of voxels based on the voxel values... Examiner interprets voxel values to be feature vectors.
binarizing the cluster into one or more groups based on a threshold value; [0122] the DICOM image object may be processed to identify different tissue classes (such as organ segments, vessels, etc.) as binary 3D collections of voxels based on the voxel values..., [0123]
and generating a bowel segment model based at least on the cluster. [0122] the DICOM image object may be processed to identify different tissue classes (such as organ segments, vessels, etc.) as binary 3D collections of voxels based on the voxel values...
[0125] using the organ binary mask a ray-casting technique can be applied to generate a volume image of the organ of interest, such as the liver or another solid organ. Examiner understands the organ binary mask to be the binary image defining the shape of the organ and understands the organ of interest or another solid organ to be the lumen (bowel).
Claim 2. Yee further teaches wherein: the one or more groups comprises at least a positive group and a negative group; [0122-0123] In the probability-mapping technique, a probability map (P) is generated using a 3D image…The values of P may be the (estimated) probability of voxels being inside, outside and at the edge of the organ of interest. Examiner interprets voxels being inside to be positive and voxels being outside to be negative.
voxels in the positive group are included in the bowel segment model; [0123] In the probability-mapping technique, a probability map (P) is generated using a 3D image…The values of P may be the (estimated) probability of voxels being inside, outside and at the edge of the organ of interest. For each voxel, P may be obtained by computing three (or more) probabilities of belonging to tissue classes of interest, such as: voxels inside the organ (tissue class w1), voxels outside the organ (tissue class w2). Examiner interprets w1 to be the positive group.
and voxels in the negative group are excluded from the bowel segment model. [0123] In the probability-mapping technique, a probability map (P) is generated using a 3D image…The values of P may be the (estimated) probability of voxels being inside, outside and at the edge of the organ of interest. For each voxel, P may be obtained by computing three (or more) probabilities of belonging to tissue classes of interest, such as: voxels inside the organ (tissue class w1), voxels outside the organ (tissue class w2). Examiner interprets w2 to be the negative group.
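The inside/outside probability grouping cited for claims 1-2 can be illustrated with a brief sketch. This is illustrative only, not code from Yee; the function name and the 0.5 threshold are assumptions:

```python
import numpy as np

def binarize_probability_map(prob_inside, threshold=0.5):
    """Split voxels into positive and negative groups by threshold.

    prob_inside: 3D array of per-voxel probabilities of lying inside
                 the organ of interest (cf. tissue class w1 in Yee
                 [0123]); threshold is the binarization cutoff.
    Returns a boolean mask: True marks the positive group retained in
    the segment model, False marks the negative (excluded) group.
    """
    return prob_inside >= threshold
```

The resulting boolean mask corresponds to the "binary 3D collections of voxels" quoted from Yee [0122].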
Claim 3. Yee further teaches wherein the bowel segment model represents a segment of abnormal bowel. [0169] process the CTC case images to extract a 3D model of the colon lumen and bookmark polyp candidates Examiner interprets polyps to be abnormal.
Claim 4. Yee further teaches wherein the patient imaging comprises at least one of a magnetic resonance image (MRI), an ultrasound, or a computer tomography (CT) image. [0070] During operation, data engine 110 may receive input data (such as a computed-tomography or CT scan, histology, an ultrasound image, a magnetic resonance imaging or MRI scan, or another type of 2D image slice depicting volumetric information)
Claim 6. Yee further teaches wherein the lumen indicator comprises a three-dimensional centerline of the lumen. [0184] a centerline that runs through the length of the colon may be extracted
[0186] Once a centerline has been extracted, a centerline curvature evaluation may be performed in order to identify points along the centerline that can define bounds of colon segments that have an approximate linear tubular shape.
Claim 17. Reviewed and analyzed in the same way as claim 1. See the above analysis and rationale. Also, Yee [0016] and claim 13 teach a computer-program product and a non-transitory computer-readable storage medium.
Claim 18. Reviewed and analyzed in the same way as claim 3. See the above analysis and rationale.
Claim 19. Reviewed and analyzed in the same way as claim 4. See the above analysis and rationale.
Claim(s) 12-15 is/are rejected under 35 U.S.C. 102(a)(1) as being anticipated by Gadolinium-Free Crohn’s Disease Assessment From Magnetic Resonance Enterography Data by Ziselman et al., hereinafter, “Ziselman”.
Claim 12. Ziselman teaches A method for training an artificial intelligence (AI) model to segment portions of a bowel of a patient comprising: [Abstract] We present a machine-learning-based system to automatically segment the TI and classify wall-thickening from T2-weighted MRE data. We introduce an anatomically constrained U-net segmentation model
receiving, by a processing element, patient imaging data associated with a lumen; [Abstract] a machine-learning-based system to automatically segment the terminal-ileum (TI) and classify wall-thickening from T2-weighted MRE data. Examiner understands the terminal-ileum to be the lumen.
receiving, by the processing element, a lumen indicator configured to mark a portion of the lumen; [Fig.1] TI segmentation and ROI extraction
receiving, by the processing element, a segmentation data based on the lumen indicator, wherein the patient imaging data, the lumen indicator, and the segmentation data comprise training data; Ziselman [2.1. Data] We used data from the ImageKids study [15]… Thus only T2 weighted imaging series were selected and used for training the segmentation model
Ziselman [2.2. Segmentation] We trained a 3D Unet model with residual connections (RU) [16] to map MRE volumes to 3D segmentations of the TI
providing the training data to an artificial intelligence algorithm executed by the processing element; [Abstract] We introduce an anatomically constrained U-net segmentation model
training, by the processing element, the artificial intelligence algorithm using the training data to learn a correlation between the lumen indicator and the segmentation data associated with the lumen within the patient imaging; Ziselman [2.2. Segmentation, 3rd paragraph]… We integrated an anatomical context prior by providing the model with the marginal distribution of all TI segmentations. The addition of the prior allows the model to distinguish between structures that are similar to the target and the actual target based on their anatomical context. We used the empirical distribution of the TI’s segmentation as present in the training data as the anatomical prior…
determining, by the processing element, a bowel segment model based on the training data; Ziselman [2.2. Segmentation] We trained a 3D Unet model with residual connections (RU) [16] to map MRE volumes to 3D segmentations of the TI
and evaluating the bowel segment model based on a validation data. Ziselman [Abstract] We evaluated the added-value of our anatomically-constrained segmentation model …using k-fold cross-validation experimental setup
Claim 13. Ziselman further teaches wherein evaluating the bowel segment model comprises at least one of comparing the bowel segment model to the segmentation data. [Abstract] The anatomically-constrained segmentation model improved segmentation results compared to a vanilla U-Net by means of both Dice score and Housdorff distance (17.931mm vs. 19.97mm, p<0.05).
[Introduction] improves the DNN ability to generalize the challenging TI segmentation task from MRE compared to baseline vanilla DNN-based segmentation models
Claim 14. Ziselman further teaches wherein the segmentation data comprises manual segmentation data determined by a medical provider. [2.2. Segmentation] this region can be obtained by a radiologist by simply drawing a bounding box around the suspected region or by using an additional rough detection model
Claim 15. Ziselman further teaches wherein the comparison of the bowel segment model to the segmentation data comprises at least one of a Dice score, a symmetric Hausdorff distance, a mean contour distance, a volume, or a length normalized volume. [Abstract] The anatomically-constrained segmentation model improved segmentation results compared to a vanilla U-Net by means of both Dice score and Housdorff distance
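The Dice score and symmetric Hausdorff distance recited in claim 15 are standard segmentation-comparison metrics. The following sketch is illustrative only, not code from Ziselman; function names and array conventions are assumptions:

```python
import numpy as np

def dice_score(pred, truth):
    """Dice overlap between two binary masks: 2|A∩B| / (|A| + |B|)."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    inter = np.logical_and(pred, truth).sum()
    return 2.0 * inter / (pred.sum() + truth.sum())

def symmetric_hausdorff(pts_a, pts_b):
    """Symmetric Hausdorff distance between two surface point sets.

    pts_a: (N, 3) and pts_b: (M, 3) coordinate arrays; the result is
    the larger of the two directed Hausdorff distances.
    """
    d = np.linalg.norm(pts_a[:, None, :] - pts_b[None, :, :], axis=2)
    return max(d.min(axis=1).max(), d.min(axis=0).max())
```

A higher Dice score and a lower Hausdorff distance both indicate closer agreement between the model output and the reference segmentation.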
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.
Claim(s) 5, 7-11 and 20 is/are rejected under 35 U.S.C. 103 as being unpatentable over US 2016/0163048 A1 to Yee et al., hereinafter, “Yee” in view of Gadolinium-Free Crohn’s Disease Assessment From Magnetic Resonance Enterography Data by Ziselman et al., hereinafter, “Ziselman”.
Claim 5. Yee fails to explicitly teach a noncontrast T2-weighted MRI image. Ziselman, in the same field of analyzing medical images of the bowel using machine learning, teaches wherein the MRI comprises a noncontrast T2-weighted MRI image. [Title] GADOLINIUM-FREE CROHN’S DISEASE ASSESSMENT FROM MAGNETIC RESONANCE ENTEROGRAPHY DATA
[Abstract] terminal-ileum (TI) wall-thickening is challenging to estimate, especially from gadolinium-free T2-weighted MRI data. Examiner understands gadolinium-free to be non-contrast.
[Abstract] a machine-learning-based system to automatically segment the terminal-ileum (TI) and classify wall-thickening from T2-weighted MRE data. Examiner understands the terminal-ileum to be the lumen.
Yee teaches that the computer system segments the colon into subsegments based on an articulated object model that fits a tortuosity of the colon along a centerline of the colon. Thus, before the effective filing date of the present application, it would have been obvious to one of ordinary skill in the art to combine the teachings of Yee with the teachings of Ziselman because, as Ziselman [Introduction] notes, doing so improves segmentation results of the gastrointestinal tract.
Claim 7. Yee fails to explicitly teach receiving segmentation data. Ziselman, in the same field of analyzing medical images of the bowel using machine learning, further teaches: receiving segmentation data; [Abstract] We introduce an anatomically constrained U-net segmentation model… We evaluated the added-value of our anatomically-constrained segmentation model
[2.2. Segmentation] We trained a 3D Unet model with residual connections (RU) [16] to map MRE volumes to 3D segmentations of the TI, one of the most common segments of the bowel at risk of damage in CD patients
and evaluating the bowel segment model based on the segmentation data. [Abstract] The anatomically-constrained segmentation model improved segmentation results
Yee teaches that the computer system segments the colon into subsegments based on an articulated object model that fits a tortuosity of the colon along a centerline of the colon. Thus, before the effective filing date of the present application, it would have been obvious to one of ordinary skill in the art to combine the teachings of Yee with the teachings of Ziselman because, as Ziselman [Introduction] notes, doing so improves segmentation results of the gastrointestinal tract.
Claim 8. Ziselman further teaches wherein evaluating the bowel segment model comprises at least one of comparing the bowel segment model to the segmentation data. [Abstract] The anatomically-constrained segmentation model improved segmentation results compared to a vanilla U-Net by means of both Dice score and Housdorff distance (17.931mm vs. 19.97mm, p<0.05).
[Introduction] improves the DNN ability to generalize the challenging TI segmentation task from MRE compared to baseline vanilla DNN-based segmentation models
Claim 9. Ziselman further teaches wherein the comparison of the bowel segment model to the segmentation data comprises at least one of a Dice score, a symmetric Hausdorff distance, a mean contour distance, a volume, or a length normalized volume. [Abstract] The anatomically-constrained segmentation model improved segmentation results compared to a vanilla U-Net by means of both Dice score and Housdorff distance (17.931mm vs. 19.97mm, p<0.05).
Claim 10. Yee further teaches wherein the length normalized volume is based at least in part on a length of the lumen indicator. [FIG. 10] , [0013-0014]
Claim 11. Ziselman further teaches wherein the segmentation data comprises manual segmentation data determined by a medical provider. [2.2. Segmentation] this region can be obtained by a radiologist by simply drawing a bounding box around the suspected region or by using an additional rough detection model
Claim 20. Reviewed and analyzed in the same way as claim 6. See the above analysis and rationale.
Claim(s) 16 is/are rejected under 35 U.S.C. 103 as being unpatentable over Gadolinium-Free Crohn’s Disease Assessment From Magnetic Resonance Enterography Data by Ziselman et al., hereinafter, “Ziselman” in view of US 2016/0163048 A1 to Yee et al., hereinafter, “Yee”.
Claim 16. Ziselman further teaches wherein the patient imaging comprises one or more voxels, [2.3. Classification] The voxels containing the TI were extracted using the segmentations predicted by the RU model.
and determining the bowel segment model comprises: [2.2. Segmentation] We trained a 3D Unet model with residual connections (RU) [16] to map MRE volumes to 3D segmentations of the TI
Ziselman fails to explicitly teach representing one or more voxels within a distance of the lumen indicator as one or more feature vectors. Yee, in the same field of analyzing medical images of the bowel using machine learning, teaches representing the one or more voxels within a distance of the lumen indicator as one or more feature vectors; Yee [0181] each voxel looks at the neighboring voxels above and below its current y position… to complete the colon lumen, [0182]
Yee [0188] given a centerline and for any point on the centerline, define both a forward direction vector using this point, and its next neighbor point, and a backward direction vector using this point …Examiner understands the centerline to be the centerline of the colon lumen.
Yee [0213] Table 4 presents values of the pre-specified distance from the centerline (D) in different segments of the colon… Examiner interprets values to be feature vectors.
generating, based on the one or more feature vectors, a cluster comprising at least one of the one or more voxels; Yee [0071] … identify 3D objects in the data (such as the colon and, more generally, groups of voxels).
Yee [0122] the DICOM image object may be processed to identify different tissue classes (such as organ segments, vessels, etc.) as binary 3D collections of voxels based on the voxel values... Examiner interprets voxel values to be feature vectors.
binarizing the cluster into one or more groups based on a threshold value; Yee [0122] the DICOM image object may be processed to identify different tissue classes (such as organ segments, vessels, etc.) as binary 3D collections of voxels based on the voxel values..., [0123]
and generating a bowel segment model based at least on the cluster. Yee [0122] the DICOM image object may be processed to identify different tissue classes (such as organ segments, vessels, etc.) as binary 3D collections of voxels based on the voxel values...
Yee [0125] using the organ binary mask a ray-casting technique can be applied to generate a volume image of the organ of interest, such as the liver or another solid organ. Examiner understands the organ binary mask to be the binary image defining the shape of the organ and understands the organ of interest or another solid organ to be the lumen (bowel).
Ziselman introduces an anatomically constrained DNN-based approach for automatic TI segmentation and clinical feature extraction to reduce the workload of MARIA calculation from MRE data. Thus, before the effective filing date of the present application, it would have been obvious to one of ordinary skill in the art to combine the teachings of Ziselman with the teachings of Yee because, as Yee [0247] notes, true 3D-CTC can be used to reduce interpretation time.
Conclusion
Any inquiry concerning this communication or earlier communications from the examiner should be directed to DELOMIA L GILLIARD whose telephone number is (571)272-1681. The examiner can normally be reached 8am-5pm.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, John Villecco can be reached at (571) 272-7319. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/DELOMIA L GILLIARD/Primary Examiner, Art Unit 2661