DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Election/Restrictions
Restriction is required under 35 U.S.C. 121 and 372.
This application contains the following inventions or groups of inventions which are not so linked as to form a single general inventive concept under PCT Rule 13.1.
In accordance with 37 CFR 1.499, applicant is required, in reply to this action, to elect a single invention to which the claims must be restricted.
Group I, claims 1-8 and 19-20, drawn to "A method for image segmentation using at least one segmentation model corresponding to the segmentation template".
Group II, claims 9-18, drawn to "A method for training sample labelling comprising: obtaining a plurality of first labelled samples and a plurality of un-labelled samples".
The groups of inventions listed above do not relate to a single general inventive concept under PCT Rule 13.1 because, under PCT Rule 13.2, they lack the same or corresponding special technical features for the following reasons: the special technical feature of Group I is "segmenting one or more target portions corresponding to the one or more ROIs from the target image using at least one segmentation model corresponding to the segmentation template," while the special technical feature of Group II is "training sample labelling comprising: obtaining a plurality of first labelled samples and a plurality of un-labelled samples; generating an image processing model and at least one validation model using the plurality of first labelled samples; and labeling the plurality of un-labelled samples to generate a plurality of second labelled samples based on the image processing model and the at least one validation model." These features are neither the same nor corresponding; accordingly, unity of invention does not exist between the groups.
A telephone call was made to Yangzhou Du on 3/31/2026 to request an oral election to the above restriction requirement. A provisional election was made with traverse to prosecute the invention of Group I, "segmenting one or more target portions corresponding to the one or more ROIs from the target image using at least one segmentation model corresponding to the segmentation template," claims 1-8 and 19-20. Affirmation of this election must be made by applicant in replying to this Office action. Claims 9-18 are withdrawn from further consideration by the examiner, 37 CFR 1.142(b), as being drawn to a non-elected invention.
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 1-8 and 19-20 are rejected under 35 U.S.C. 103 as being unpatentable over Han et al. (US 2020/0272841) in view of Tang et al. (US 2021/0248747).
As to claim 1, Han et al. teaches the method for image segmentation, comprising:
determining, from a target image of a subject, a segmentation range (a target image refers to an image that includes an ROI to be segmented. An ROI of the target image refers to a portion in the target image that represents a physical region of interest of an object. The object may include a biological object and/or a non-biological object. For example, the target image may be an image of a patient, and the ROI of the target image may be a specific organ, a specific tissue, or the whole body of the patient. Merely by way of example, the ROI may include the head, the chest, a lung, the heart, the liver, the spleen, the pleura, the mediastinum, the abdomen, the large intestine, the small intestine, the bladder, the gallbladder, the pelvis, the spine, the skeleton, blood vessels, or the like, or any combination thereof, of a patient. In some embodiments, the ROI may include a left kidney and/or a right kidney of the patient, paragraph [0087]);
determining a segmentation template corresponding to the segmentation range, wherein the segmentation template (the processing device 140A (e.g., the segmentation module 420) may segment the ROI from the preprocessed preliminary region by applying the second ROI segmentation model to the preprocessed preliminary region, paragraph [0116]) includes a list of one or more regions of interest (ROIs) of the subject in the segmentation range (the processing device 140A (e.g., the segmentation module 420) may segment the ROI from the preprocessed preliminary region by applying the second ROI segmentation model to the preprocessed preliminary region; the ROI may include a plurality of sub-ROIs. Merely by way of example, the ROI may include a left kidney and a right kidney, paragraphs [0109]-[0110]).
While Han et al. meets a number of the limitations of the claimed invention, as pointed out more fully above, Han et al. fails to specifically teach "segmenting one or more target portions corresponding to the one or more ROIs from the target image using at least one segmentation model corresponding to the segmentation template".
Specifically, Tang et al. teaches an OAR Segmentation Network provided with upsample blocks that utilize extracted image features to segment anatomical structures in the form of a contour map. The OAR Segmentation Network takes the outputs of the OAR Detection Network as inputs and utilizes these inputs as a guide, only segmenting OARs from the predicted bounded regions of the first stage. The OAR Segmentation Network derives a fine-scale segmentation of each OAR and outputs predicted masks of all OARs in an image at once (paragraph [0010]). Tang et al. further teaches that, in FIG. 3, the detected OAR candidate bounding boxes 316 in the feature_map_8 315 Region of Interest (ROI) may further undergo a 3D ROI-pooling step 317 to generate feature maps of fixed sizes and classify the OAR candidates. As shown in FIG. 3, the ROI-pooling step 317 is applied to image features extracted from the feature_map_8 315 ROI in regions specified by each predicted bounding box 316. The ROI-pooling step 317 produces a feature map with fixed dimensions by reducing the spatial size of the representation, denoted by "64×6×6×6" as shown. As an example, two fully connected (FC) layers 318 may be subsequently applied to classify each OAR prediction into one of 29 classes (28 possible OARs plus one background) and to further regress coordinates and size offsets of each bounding box 316 (paragraph [0045]).
Additionally, Tang et al. teaches that, in FIGS. 19A-19B, a 3D CT image 1910 may be received by the region detection model (1603) to separate the image into the distinct head and neck 1901, thorax 1994 and abdomen 1995 regions, as an example. As described above, the region detection model may separate the regions in the CT image 1910 by first attempting to segment lungs 1904 from the image 1910. As described above, the region detection model successfully segments lungs if an upper cutoff point and a lower cutoff point are located in the 3D segmentation mask (paragraph [0110]).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to use the region detection model of Tang et al. in the method of Han et al. in order to automatically delineate organs at risk for radiation therapy. Therefore, the claimed invention would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention.
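For context, the ROI-pooling and classification step summarized above can be sketched as follows. This is an illustrative PyTorch rendering of paragraph [0045] of Tang et al., not code from the reference; the pooled 64×6×6×6 size, the 29-way classifier (28 OARs plus background), and the box-offset regression follow the cited description, while the hidden-layer width and all identifier names are assumptions.

```python
# Hypothetical sketch of the ROI head described in Tang's paragraph [0045].
# Not the reference's code: layer widths and names are illustrative only.
import torch
import torch.nn as nn

class ROIHead(nn.Module):
    def __init__(self, in_channels=64, num_classes=29):
        super().__init__()
        # pool each candidate crop to a fixed 6x6x6 spatial size
        self.roi_pool = nn.AdaptiveMaxPool3d((6, 6, 6))
        flat = in_channels * 6 * 6 * 6               # 64x6x6x6 per candidate
        self.fc1 = nn.Linear(flat, 512)              # assumed hidden width
        self.fc2 = nn.Linear(512, 512)
        self.cls = nn.Linear(512, num_classes)       # 28 OARs + 1 background
        self.reg = nn.Linear(512, 6)                 # 3D center and size offsets

    def forward(self, roi_features):
        # roi_features: (N, 64, D, H, W) crops taken from the detection
        # feature map inside each predicted bounding box
        x = self.roi_pool(roi_features).flatten(1)   # -> (N, 64*6*6*6)
        x = torch.relu(self.fc1(x))
        x = torch.relu(self.fc2(x))
        return self.cls(x), self.reg(x)
```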
As to claim 2, Tang et al. teaches the method of claim 1, wherein the one or more ROIs include a plurality of ROIs, and in the segmentation template, the plurality of ROIs are classified into a first classification corresponding to targets and a second classification corresponding to organs-at-risk (As shown in FIG. 3, the ROI-pooling step 317 is applied to image features extracted from the feature_map_8 315 ROI in regions specified by each predicted bounding box 316. The ROI-pooling step 317 produces a feature map with fixed dimensions by reducing the spatial size of the representation, denoted by "64×6×6×6" as shown. As an example, two fully connected (FC) layers 318 may be subsequently applied to classify each OAR prediction into one of 29 classes (28 possible OARs plus one background) and to further regress coordinates and size offsets of each bounding box 316, paragraph [0045]).
As to claim 3, Tang et al. teaches the method of claim 1, wherein the determining, from a target image of a subject, a segmentation range includes: determining, based on the target image, an actual size of each of at least one scanned part of the subject (a CT image 210 may be passed into the OAR Detection Network 209 as input to locate a rough estimate of the location and size of each OAR in the CT image 210, paragraph [0041]); and determining, based on the actual size of each of the at least one scanned part, the segmentation range (Each anchor k represents a predefined bounding box size, for example a bounding box of (30, 30, 30) denoting (depth, height, width) voxels. Such an anchor serves as the OAR delineation framework's initial guess that an OAR may exist inside the region defined by the anchor's location and size. The OAR Detection Network then classifies these initial guesses (i.e., anchors) as to whether or not they contain an OAR, as an example. Bounding boxes 229 define each of these OAR candidates, which is of a particular size and at a particular location, and the binary classification 230 defines an OAR designation for each OAR candidate, paragraph [0044]).
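The anchor mechanism of paragraph [0044] of Tang et al. amounts to tiling predefined boxes over the feature-map grid and treating each as an initial guess that an OAR lies inside it. The following toy sketch is illustrative only; the grid shape, stride, and function names are assumptions, not code from the reference.

```python
# Illustrative anchor generation: one predefined (30, 30, 30)-voxel box
# per feature-map cell, each a candidate region to be classified as
# containing an OAR or not. Shapes and names are assumptions.
import itertools

def generate_anchors(grid_shape, stride, anchor_size=(30, 30, 30)):
    """Return one (z, y, x, depth, height, width) box per grid cell."""
    anchors = []
    for z, y, x in itertools.product(*(range(s) for s in grid_shape)):
        center = (z * stride, y * stride, x * stride)
        anchors.append(center + anchor_size)
    return anchors

# e.g. a 4x4x4 feature grid with stride 8 yields 64 candidate boxes
boxes = generate_anchors((4, 4, 4), stride=8)
assert len(boxes) == 64 and boxes[0] == (0, 0, 0, 30, 30, 30)
```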
As to claim 4, Han et al. teaches the method of claim 3, wherein determining, based on the target image, an actual size of each of at least one scanned part of the subject (the processing device 140A may determine a region in the target image corresponding to the segmented ROI in the preprocessed target image based on, for example, the location, size, and/or area of the segmented ROI in the preprocessed target image, paragraph [0112]) includes:
obtaining a recognition model; for each of the at least one scanned part, determining a ratio of a first count corresponding to the scanned part to a second count by inputting the target image of the subject into the recognition model, the first count being a count of slices that belongs to the scanned part, the second count being a total count of slices in the target image; and for each of the at least one scanned part, determining the actual size of the scanned part based on the corresponding ratio (The model generation module 450 may be configured to generate the ROI segmentation model corresponding to the target image resolution by training the preliminary model using the at least one training image. For example, the model generation module 450 may train the preliminary model according to a machine learning algorithm. In some embodiments, the model generation module 450 may train the preliminary model by iteratively updating model parameter(s) of the preliminary model. More descriptions regarding the training of the preliminary model may be found elsewhere in the present disclosure, paragraphs [0083]-[0084]).
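The ratio computation recited in this claim can be illustrated with a short sketch: a per-slice recognition output assigns each axial slice to a scanned part, and a part's actual size is its fraction of slices times the physical scan length. The labels and scan length below are hypothetical stand-ins; neither reference provides code for this step.

```python
# Hypothetical illustration of the claim-4 ratio computation. The slice
# labels stand in for a recognition model's per-slice output.
from collections import Counter

def part_sizes(slice_labels, scan_length_mm):
    """slice_labels: one body-part label per slice in the target image."""
    total = len(slice_labels)              # second count: all slices
    counts = Counter(slice_labels)         # first count per scanned part
    return {part: (n / total) * scan_length_mm for part, n in counts.items()}

# e.g. 100 slices over a 500 mm scan, 40 labelled "chest" -> chest spans 200 mm
sizes = part_sizes(["chest"] * 40 + ["abdomen"] * 60, scan_length_mm=500)
assert sizes["chest"] == 200.0
```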
As to claim 5, Tang et al. teaches the method of claim 1, wherein the determining a segmentation template corresponding to the segmentation range includes: obtaining a plurality of segmentation templates, each of the plurality of segmentation templates corresponding to one of a plurality of reference segmentation ranges (multi-atlas segmentation, may segment OARs from cropped local patches by mapping to image templates, paragraph [0085]); and selecting, from the plurality of segmentation templates, the at least one segmentation template corresponding to the segmentation range (FIGS. 12A-12H).
As to claim 6, Tang et al. teaches the method of claim 5, wherein the selecting, from the plurality of segmentation templates, at least one segmentation template corresponding to the segmentation range includes: obtaining reference information regarding the subject; and selecting, based on the reference information, the at least one segmentation template corresponding to the segmentation range from the plurality of segmentation templates (reference MRI images during the delineation process, paragraphs [0089]-[0091]).
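Template selection as mapped in claims 5 and 6 reduces, in outline, to a lookup keyed by the determined segmentation range (or by reference information regarding the subject). The table and matching rule below are invented purely for illustration; neither reference discloses these template contents.

```python
# Toy illustration of claim 5/6 template selection. The template table
# and keys are hypothetical examples, not disclosed by either reference.
TEMPLATES = {
    "head_neck": ["brainstem", "parotid_left", "parotid_right"],
    "thorax":    ["lungs", "heart", "esophagus"],
    "abdomen":   ["liver", "kidneys", "bladder"],
}

def select_template(segmentation_range, reference_info=None):
    """Pick the ROI list whose reference range matches the determined range;
    reference information about the subject, if present, takes priority."""
    key = reference_info or segmentation_range
    return TEMPLATES[key]

assert "heart" in select_template("thorax")
```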
As to claim 7, Tang et al. teaches the method of claim 1, wherein the method is implemented by a central processing unit (CPU) and a graphics processing unit (GPU) (The DCNN disclosed herein may be implemented in a computer system and run on a commodity GPU in a hospital setting, paragraph [0039]), and the CPU is configured to obtain the target image; and the GPU is configured to perform operations including: determining, from the target image of the subject, the segmentation range; determining the segmentation template corresponding to the segmentation range, wherein the segmentation template includes the list of the one or more ROIs of the subject in the segmentation range (The DSC performance of the OAR delineation framework for delineation of each OAR can be shown in FIG. 13 under the column heading "Ua-Net" 1308. As shown in FIG. 13, the DSC on the test data was calculated for the performances of multi-atlas segmentation (MAS), AnatomyNet and oncologists (Human and Humana). In order to ensure consistent comparisons, the models were trained on the same training datasets using the same procedure, paragraph [0013]); and segmenting the one or more target portions corresponding to the one or more ROIs from the target image using the at least one segmentation model corresponding to the segmentation template (FIG. 13 is a table summarizing a Dice similarity coefficient comparison of the two-stage deep learning OAR delineation framework with current delineation methods, according to an aspect. As discussed above, the OAR Segmentation Network (shown by 840 in FIG. 8) results are represented by 28 binary masks, one for each OAR, as an example. Each binary mask is a 3D array of the same size as the input 3D medical images, with values of either 0 (zero) or 1 (one), indicating whether the underlying voxel is part of the corresponding OAR, paragraph [0086]).
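The per-OAR binary mask representation of paragraph [0086] of Tang et al. can be sketched as follows; the thresholding rule, array shapes, and names are assumptions for illustration, not the reference's code.

```python
# Minimal sketch of the mask representation in Tang's paragraph [0086]:
# one binary 3D mask per OAR, each the same size as the input volume,
# where a voxel value of 1 marks membership in that OAR.
import numpy as np

def masks_from_logits(logits):
    """logits: (28, D, H, W) per-OAR scores -> 28 binary masks of shape (D, H, W)."""
    return (logits > 0).astype(np.uint8)   # 1 = voxel belongs to the OAR

logits = np.random.randn(28, 64, 128, 128)
masks = masks_from_logits(logits)
assert masks.shape == (28, 64, 128, 128) and set(np.unique(masks)) <= {0, 1}
```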
As to claim 8, Tang et al. teaches the method of claim 1, wherein the at least one segmentation model is obtained by training an initial model using a plurality of labelled samples, wherein the plurality of labelled samples are obtained according to a process including: obtaining a plurality of pre-labelled samples and a plurality of un-labelled samples; generating the at least one segmentation model and at least one validation model using the plurality of pre-labelled samples; and labelling the plurality of un-labelled samples to generate the plurality of labelled samples based on the segmentation model and the at least one validation model (FIG. 13 and paragraph [0086]).
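The labelling process recited in claim 8 is, in outline, a pseudo-labelling loop: models generated from the pre-labelled samples propose labels for the un-labelled samples, and a validation model screens the proposals. The sketch below uses stand-in callables and is not code from either reference.

```python
# Schematic pseudo-labelling loop matching the claim-8 language.
# `segment` and `validate` are stand-ins for the trained segmentation
# and validation models; they are not APIs from either reference.
def label_unlabelled(segment, validate, unlabelled):
    labelled = []
    for sample in unlabelled:
        candidate = segment(sample)          # proposed segmentation label
        if validate(sample, candidate):      # validation model screens it
            labelled.append((sample, candidate))
    return labelled                          # the generated labelled samples

# e.g. with toy stand-ins:
pseudo = label_unlabelled(lambda s: s > 0, lambda s, c: True, [1, -2, 3])
assert pseudo == [(1, True), (-2, False), (3, True)]
```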
The limitations of claims 19-20 have been addressed above.
Contact information
Any inquiry concerning this communication or earlier communications from the examiner should be directed to NANCY BITAR whose telephone number is (571)270-1041. The examiner can normally be reached Monday through Friday from 8:00 a.m. to 5:00 p.m.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Mrs. Jennifer Mehmood can be reached at 571-272-2976. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
NANCY BITAR
Examiner
Art Unit 2664
/NANCY BITAR/Primary Examiner, Art Unit 2664