Prosecution Insights
Last updated: April 19, 2026
Application No. 18/599,573

BOWEL SEGMENTATION SYSTEM AND METHODS

Status: Non-Final OA (§102, §103)
Filed: Mar 08, 2024
Examiner: GILLIARD, DELOMIA L
Art Unit: 2661
Tech Center: 2600 — Communications
Assignee: Motilent Limited
OA Round: 1 (Non-Final)
Grant Probability: 90% (Favorable)
Expected OA Rounds: 1-2
Time to Grant: 2y 2m
With Interview: 99%

Examiner Intelligence

Career Allow Rate: 90%, above average (976 granted / 1089 resolved; +27.6% vs TC avg)
Interview Lift: +10.2% (moderate) for resolved cases with interview
Typical Timeline: 2y 2m avg prosecution; 12 currently pending
Career History: 1101 total applications across all art units

Statute-Specific Performance

§101: 10.0% (-30.0% vs TC avg)
§103: 48.8% (+8.8% vs TC avg)
§102: 15.5% (-24.5% vs TC avg)
§112: 11.3% (-28.7% vs TC avg)
Black line = Tech Center average estimate • Based on career data from 1089 resolved cases

Office Action

§102 §103
Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Priority

This application currently names joint inventors. In considering patentability of the claims the examiner presumes that the subject matter of the various claims was commonly owned as of the effective filing date of the claimed invention(s) absent any evidence to the contrary. Applicant is advised of the obligation under 37 CFR 1.56 to point out the inventor and effective filing dates of each claim that was not commonly owned as of the effective filing date of the later invention in order for the examiner to consider the applicability of 35 U.S.C. 102(b)(2)(C) for any potential 35 U.S.C. 102(a)(2) prior art against the later invention.

Claim Rejections - 35 USC § 102

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action: A person shall be entitled to a patent unless – (a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.

Claim(s) 1-4, 6 and 17-19 is/are rejected under 35 U.S.C. 102(a)(1) as being anticipated by US 2016/0163048 A1 to Yee et al., hereinafter, “Yee”.

Claim 1.
Yee teaches A method comprising: [Abstract] the computer system segments the colon into subsegments based on an articulated object model that fits a tortuosity of the colon along a centerline of the colon…; FIG. 38; [0017] Another embodiment provides a method, which may be performed by the computer system.

receiving patient imaging comprising one or more voxels; [0169] process the CTC case images to extract a 3D model of the colon lumen. Examiner understands 3D to be voxels.

determining a lumen indicator based on the patient imaging; [0071] identify landmarks or reference objects in the data (such as anatomical features); [0169] process the CTC case images to extract a 3D model of the colon lumen.

representing the one or more voxels within a distance of the lumen indicator as one or more feature vectors; [0181] each voxel looks at the neighboring voxels above and below its current y position… to complete the colon lumen; [0182], [0188] given a centerline and for any point on the centerline, define both a forward direction vector using this point, and its next neighbor point, and a backward direction vector using this point… Examiner understands the centerline to be the centerline of the colon lumen. [0213] Table 4 presents values of the pre-specified distance from the centerline (D) in different segments of the colon… Examiner interprets values to be feature vectors.

generating, based on the one or more feature vectors, a cluster comprising at least one of the one or more voxels; [0071] … identify 3D objects in the data (such as the colon and, more generally, groups of voxels). [0122] the DICOM image object may be processed to identify different tissue classes (such as organ segments, vessels, etc.) as binary 3D collections of voxels based on the voxel values... Examiner interprets voxel values to be feature vectors.

binarizing the cluster into one or more groups based on a threshold value; [0122] the DICOM image object may be processed to identify different tissue classes (such as organ segments, vessels, etc.) as binary 3D collections of voxels based on the voxel values...; [0123]

and generating a bowel segment model based at least on the cluster. [0122] the DICOM image object may be processed to identify different tissue classes (such as organ segments, vessels, etc.) as binary 3D collections of voxels based on the voxel values... [0125] using the organ binary mask a ray-casting technique can be applied to generate a volume image of the organ of interest, such as the liver or another solid organ. Examiner understands the organ binary mask to be the binary image defining the shape of the organ and understands the organ of interest or another solid organ to be the lumen (bowel).

Claim 2.

Yee further teaches wherein: the one or more groups comprises at least a positive group and a negative group; [0122-0123] In the probability-mapping technique, a probability map (P) is generated using a 3D image… The values of P may be the (estimated) probability of voxels being inside, outside and at the edge of the organ of interest. Examiner interprets voxels being inside to be positive and voxels being outside to be negative.

voxels in the positive group are included in the bowel segment model; [0123] In the probability-mapping technique, a probability map (P) is generated using a 3D image… The values of P may be the (estimated) probability of voxels being inside, outside and at the edge of the organ of interest. For each voxel, P may be obtained by computing three (or more) probabilities of belonging to tissue classes of interest, such as: voxels inside the organ (tissue class w1), voxels outside the organ (tissue class w2). Examiner interprets w1 to be the positive group.

and voxels in the negative group are excluded from the bowel segment model. [0123] In the probability-mapping technique, a probability map (P) is generated using a 3D image… The values of P may be the (estimated) probability of voxels being inside, outside and at the edge of the organ of interest. For each voxel, P may be obtained by computing three (or more) probabilities of belonging to tissue classes of interest, such as: voxels inside the organ (tissue class w1), voxels outside the organ (tissue class w2). Examiner interprets w2 to be the negative group.

Claim 3.

Yee further teaches wherein the bowel segment model represents a segment of abnormal bowel. [0169] process the CTC case images to extract a 3D model of the colon lumen and bookmark polyp candidates. Examiner interprets polyps to be abnormal.

Claim 4.

Yee further teaches wherein the patient imaging comprises at least one of a magnetic resonance image (MRI), an ultrasound, or a computer tomography (CT) image. [0070] During operation, data engine 110 may receive input data (such as a computed-tomography or CT scan, histology, an ultrasound image, a magnetic resonance imaging or MRI scan, or another type of 2D image slice depicting volumetric information).

Claim 6.

Yee further teaches wherein the lumen indicator comprises a three-dimensional centerline of the lumen. [0184] a centerline that runs through the length of the colon may be extracted; [0186] Once a centerline has been extracted, a centerline curvature evaluation may be performed in order to identify points along the centerline that can define bounds of colon segments that have an approximate linear tubular shape.

Claim 17.

Reviewed and analyzed in the same way as claim 1. See the above analysis and rationale. Also Yee [0016], claim 13 teaches computer-program product and non-transitory computer-readable storage medium.

Claim 18.

Reviewed and analyzed in the same way as claim 3. See the above analysis and rationale.

Claim 19.

Reviewed and analyzed in the same way as claim 4. See the above analysis and rationale.
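As an aside for technical readers: the probability-map binarization the examiner cites from Yee [0122]-[0123] (per-voxel inside/outside probabilities thresholded into a binary organ mask) can be sketched as below. This is an illustrative sketch only, not code from the record; the probability map is a random placeholder and the threshold is an assumed value.

```python
import numpy as np

# Illustrative sketch (not from the record) of probability-map
# binarization: each voxel gets an estimated probability of lying inside
# the organ of interest, and a threshold splits the volume into a
# positive (inside) group and a negative (outside) group.

rng = np.random.default_rng(0)
p_inside = rng.random((8, 8, 8))    # placeholder per-voxel P(inside organ)

threshold = 0.5                      # assumed cut-off, not from Yee
mask = p_inside >= threshold         # binarize into a binary organ mask

positive = int(mask.sum())           # voxels kept in the segment model
negative = int((~mask).sum())        # voxels excluded from the model
assert positive + negative == p_inside.size
```

Every voxel falls into exactly one group, which is why the positive and negative counts sum to the volume size.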
Claim(s) 12-15 is/are rejected under 35 U.S.C. 102(a)(1) as being anticipated by Gadolinium-Free Crohn’s Disease Assessment From Magnetic Resonance Enterography Data to Ziselman et al., hereinafter, “Ziselman”.

Claim 12.

Ziselman teaches A method for training an artificial intelligence (AI) model to segment portions of a bowel of a patient comprising: [Abstract] We present a machine-learning-based system to automatically segment the TI and classify wall-thickening from T2-weighted MRE data. We introduce an anatomically constrained U-net segmentation model.

receiving, by a processing element, patient imaging data associated with a lumen; [Abstract] a machine-learning-based system to automatically segment the terminal-ileum (TI) and classify wall-thickening from T2-weighted MRE data. Examiner understands the terminal-ileum to be lumen.

receiving, by the processing element, a lumen indicator configured to mark a portion of the lumen; [Fig. 1] TI segmentation and ROI extraction.

receiving, by the processing element, a segmentation data based on the lumen indicator, wherein the patient imaging data, the lumen indicator, and the segmentation data comprise training data; Ziselman [2.1. Data] We used data from the ImageKids study [15]… Thus only T2 weighted imaging series were selected and used for training the segmentation model. Ziselman [2.2. Segmentation] We trained a 3D Unet model with residual connections (RU) [16] to map MRE volumes to 3D segmentations of the TI.

providing the training data to an artificial intelligence algorithm executed by the processing element; [Abstract] We introduce an anatomically constrained U-net segmentation model.

training, by the processing element, the artificial intelligence algorithm using the training data to learn a correlation between the lumen indicator and the segmentation data associated with the lumen within the patient imaging; Ziselman [2.2. Segmentation, 3rd paragraph] … We integrated an anatomical context prior by providing the model with the marginal distribution of all TI segmentations. The addition of the prior allows the model to distinguish between structures that are similar to the target and the actual target based on their anatomical context. We used the empirical distribution of the TI’s segmentation as present in the training data as the anatomical prior…

determining, by the processing element, a bowel segment model based on the training data; Ziselman [2.2. Segmentation] We trained a 3D Unet model with residual connections (RU) [16] to map MRE volumes to 3D segmentations of the TI.

and evaluating the bowel segment model based on a validation data. Ziselman [Abstract] We evaluated the added-value of our anatomically-constrained segmentation model… using k-fold cross-validation experimental setup.

Claim 13.

Ziselman further teaches wherein evaluating the bowel segment model comprises at least one of comparing the bowel segment model to the segmentation data. [Abstract] The anatomically-constrained segmentation model improved segmentation results compared to a vanilla U-Net by means of both Dice score and Hausdorff distance (17.931mm vs. 19.97mm, p<0.05). [Introduction] improves the DNN ability to generalize the challenging TI segmentation task from MRE compared to baseline vanilla DNN-based segmentation models.

Claim 14.

Ziselman further teaches wherein the segmentation data comprises manual segmentation data determined by a medical provider. [2.2. Segmentation] this region can be obtained by a radiologist by simply drawing a bounding box around the suspected region or by using an additional rough detection model.

Claim 15.

Ziselman further teaches wherein the comparison of the bowel segment model to the segmentation data comprises at least one of a Dice score, a symmetric Hausdorff distance, a mean contour distance, a volume, or a length normalized volume. [Abstract] The anatomically-constrained segmentation model improved segmentation results compared to a vanilla U-Net by means of both Dice score and Hausdorff distance.

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action: A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.

Claim(s) 5, 7-11 and 20 is/are rejected under 35 U.S.C. 103 as being unpatentable over US 2016/0163048 A1 to Yee et al., hereinafter, “Yee” in view of Gadolinium-Free Crohn’s Disease Assessment From Magnetic Resonance Enterography Data to Ziselman et al., hereinafter, “Ziselman”.

Claim 5.
Yee fails to explicitly teach a noncontrast T2-weighted MRI image. Ziselman, in the same field of analyzing medical images of the bowel using machine learning, teaches wherein the MRI comprises a noncontrast T2-weighted MRI image. [Title] GADOLINIUM-FREE CROHN’S DISEASE ASSESSMENT FROM MAGNETIC RESONANCE ENTEROGRAPHY DATA; [Abstract] terminal-ileum (TI) wall-thickening is challenging to estimate, especially from gadolinium-free T2-weighted MRI data. Examiner understands gadolinium-free to be non-contrast. [Abstract] a machine-learning-based system to automatically segment the terminal-ileum (TI) and classify wall-thickening from T2-weighted MRE data. Examiner understands the terminal-ileum to be lumen.

Yee teaches the computer system segments the colon into subsegments based on an articulated object model that fits a tortuosity of the colon along a centerline of the colon. Thus, before the effective filing date of the present application, it would have been obvious to one of ordinary skill in the art to combine the teachings of Yee with the teachings of Ziselman, which [Introduction] improves segmentation results of the gastrointestinal tract.

Claim 7.

Yee fails to explicitly teach receiving segmentation data. Ziselman, in the same field of analyzing medical images of the bowel using machine learning, further teaches: receiving segmentation data; [Abstract] We introduce an anatomically constrained U-net segmentation model… We evaluated the added-value of our anatomically-constrained segmentation model. [2.2. Segmentation] We trained a 3D Unet model with residual connections (RU) [16] to map MRE volumes to 3D segmentations of the TI, one of the most common segments of the bowel at risk of damage in CD patients.

and evaluating the bowel segment model based on the segmentation data. [Abstract] The anatomically-constrained segmentation model improved segmentation results.

Yee teaches the computer system segments the colon into subsegments based on an articulated object model that fits a tortuosity of the colon along a centerline of the colon. Thus, before the effective filing date of the present application, it would have been obvious to one of ordinary skill in the art to combine the teachings of Yee with the teachings of Ziselman, which [Introduction] improves segmentation results of the gastrointestinal tract.

Claim 8.

Ziselman further teaches wherein evaluating the bowel segment model comprises at least one of comparing the bowel segment model to the segmentation data. [Abstract] The anatomically-constrained segmentation model improved segmentation results compared to a vanilla U-Net by means of both Dice score and Hausdorff distance (17.931mm vs. 19.97mm, p<0.05). [Introduction] improves the DNN ability to generalize the challenging TI segmentation task from MRE compared to baseline vanilla DNN-based segmentation models.

Claim 9.

Ziselman further teaches wherein the comparison of the bowel segment model to the segmentation data comprises at least one of a Dice score, a symmetric Hausdorff distance, a mean contour distance, a volume, or a length normalized volume. [Abstract] The anatomically-constrained segmentation model improved segmentation results compared to a vanilla U-Net by means of both Dice score and Hausdorff distance (17.931mm vs. 19.97mm, p<0.05).

Claim 10.

Yee further teaches wherein the length normalized volume is based at least in part on a length of the lumen indicator. [FIG. 10], [0013-0014]

Claim 11.

Ziselman further teaches wherein the segmentation data comprises manual segmentation data determined by a medical provider. [2.2. Segmentation] this region can be obtained by a radiologist by simply drawing a bounding box around the suspected region or by using an additional rough detection model.

Claim 20.

Reviewed and analyzed in the same way as claim 6. See the above analysis and rationale.

Claim(s) 16 is/are rejected under 35 U.S.C. 103 as being unpatentable over Gadolinium-Free Crohn’s Disease Assessment From Magnetic Resonance Enterography Data to Ziselman et al., hereinafter, “Ziselman” and in view of US 2016/0163048 A1 to Yee et al., hereinafter, “Yee”.

Claim 16.

Ziselman further teaches wherein the patient imaging comprises one or more voxels, [2.3. Classification] The voxels containing the TI were extracted using the segmentations predicted by the RU model. and determining the bowel segment model comprises: [2.2. Segmentation] We trained a 3D Unet model with residual connections (RU) [16] to map MRE volumes to 3D segmentations of the TI.

Ziselman fails to explicitly teach one or more voxels within a distance of the lumen indicator as one or more feature vectors. Yee, in the same field of analyzing medical images of the bowel using machine learning, teaches representing the one or more voxels within a distance of the lumen indicator as one or more feature vectors; Yee [0181] each voxel looks at the neighboring voxels above and below its current y position… to complete the colon lumen; [0182], Yee [0188] given a centerline and for any point on the centerline, define both a forward direction vector using this point, and its next neighbor point, and a backward direction vector using this point… Examiner understands the centerline to be the centerline of the colon lumen. Yee [0213] Table 4 presents values of the pre-specified distance from the centerline (D) in different segments of the colon… Examiner interprets values to be feature vectors.
generating, based on the one or more feature vectors, a cluster comprising at least one of the one or more voxels; Yee [0071] … identify 3D objects in the data (such as the colon and, more generally, groups of voxels). Yee [0122] the DICOM image object may be processed to identify different tissue classes (such as organ segments, vessels, etc.) as binary 3D collections of voxels based on the voxel values... Examiner interprets voxel values to be feature vectors.

binarizing the cluster into one or more groups based on a threshold value; Yee [0122] the DICOM image object may be processed to identify different tissue classes (such as organ segments, vessels, etc.) as binary 3D collections of voxels based on the voxel values...; [0123]

and generating a bowel segment model based at least on the cluster. Yee [0122] the DICOM image object may be processed to identify different tissue classes (such as organ segments, vessels, etc.) as binary 3D collections of voxels based on the voxel values... Yee [0125] using the organ binary mask a ray-casting technique can be applied to generate a volume image of the organ of interest, such as the liver or another solid organ. Examiner understands the organ binary mask to be the binary image defining the shape of the organ and understands the organ of interest or another solid organ to be the lumen (bowel).

Ziselman introduces an anatomically constrained DNN-based approach for automatic TI segmentation and clinical features extraction to reduce the workload of MARIAs calculation from MRE data. Thus, before the effective filing date of the present application, it would have been obvious to one of ordinary skill in the art to combine the teachings of Ziselman with the teachings of Yee, [0247] so true 3D-CTC can be used to reduce interpretation time.

Conclusion

Any inquiry concerning this communication or earlier communications from the examiner should be directed to DELOMIA L GILLIARD, whose telephone number is (571) 272-1681. The examiner can normally be reached 8am-5pm.

Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, John Villecco, can be reached at (571) 272-7319. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/DELOMIA L GILLIARD/
Primary Examiner, Art Unit 2661
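The evaluation metrics recited in claims 13 and 15 (Dice score and symmetric Hausdorff distance) are standard segmentation-quality measures. A minimal sketch on toy binary masks follows; it is illustrative only, and the helper names `dice` and `symmetric_hausdorff` are hypothetical, not from the record or the cited references.

```python
import numpy as np

def dice(a, b):
    """Dice score between two binary masks (1.0 = perfect overlap)."""
    a, b = a.astype(bool), b.astype(bool)
    return 2.0 * np.logical_and(a, b).sum() / (a.sum() + b.sum())

def symmetric_hausdorff(pts_a, pts_b):
    """Symmetric Hausdorff distance between two (N, 3) point sets."""
    # Pairwise distances via broadcasting, then the larger of the two
    # directed Hausdorff distances.
    d = np.linalg.norm(pts_a[:, None, :] - pts_b[None, :, :], axis=-1)
    return max(d.min(axis=1).max(), d.min(axis=0).max())

# Two toy 2x2x2 cube masks, shifted by one voxel along the z axis.
a = np.zeros((4, 4, 4), dtype=bool); a[1:3, 1:3, 1:3] = True
b = np.zeros((4, 4, 4), dtype=bool); b[1:3, 1:3, 2:4] = True

pts_a = np.argwhere(a).astype(float)
pts_b = np.argwhere(b).astype(float)
print(dice(a, b))                         # 0.5
print(symmetric_hausdorff(pts_a, pts_b))  # 1.0
```

A one-voxel shift leaves half of each cube overlapping (Dice 0.5) and puts every surplus voxel one unit from its nearest counterpart (Hausdorff 1.0), which matches the printed values.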

Prosecution Timeline

Mar 08, 2024
Application Filed
Jan 07, 2026
Non-Final Rejection — §102, §103
Apr 02, 2026
Applicant Interview (Telephonic)
Apr 02, 2026
Examiner Interview Summary

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12602805: DATA TRANSMISSION THROTTLING AND DATA QUALITY UPDATING FOR A SLAM DEVICE
2y 5m to grant; granted Apr 14, 2026

Patent 12602932: SYSTEMS AND METHODS FOR MONITORING USERS EXITING A VEHICLE
2y 5m to grant; granted Apr 14, 2026

Patent 12602796: SYSTEM, DEVICE, AND METHODS FOR DETECTING AND OBTAINING INFORMATION ON OBJECTS IN A VEHICLE
2y 5m to grant; granted Apr 14, 2026

Patent 12602952: IMAGE-BASED AUTOMATED ERGONOMIC RISK ROOT CAUSE AND SOLUTION IDENTIFICATION SYSTEM AND METHOD
2y 5m to grant; granted Apr 14, 2026

Patent 12602895: MACHINE LEARNING-BASED DOCUMENT SPLITTING AND LABELING IN AN ELECTRONIC DOCUMENT SYSTEM
2y 5m to grant; granted Apr 14, 2026
Study what changed to get past this examiner, based on the 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 1-2
Grant Probability: 90%
With Interview (+10.2%): 99%
Median Time to Grant: 2y 2m
PTA Risk: Low
Based on 1089 resolved cases by this examiner. Grant probability derived from career allow rate.
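The projection figures can be reproduced from the career numbers above. A sketch, assuming the +10.2% interview lift is applied multiplicatively to the baseline allow rate and capped at 100% (an inference from the displayed values; the page does not state its formula):

```python
# Reproducing the headline figures from the examiner's career data.
granted, resolved = 976, 1089

allow_rate = granted / resolved
print(round(allow_rate * 100))      # 90 (career allow rate / grant probability)

# Assumption: the +10.2% lift is relative (multiplicative), capped at 100%.
with_interview = min(allow_rate * 1.102, 1.0)
print(round(with_interview * 100))  # 99 (grant probability with interview)
```

This reading explains why 90% plus a "+10.2%" lift displays as 99% rather than 100%: 0.896 × 1.102 ≈ 0.988.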
