DETAILED ACTION
Claims 8 and 10 have been cancelled.
Claims 1-7, 9 and 11-20 are currently pending.
Continued Examination Under 37 CFR 1.114
A request for continued examination under 37 CFR 1.114, including the fee set forth in 37 CFR 1.17(e), was filed in this application after final rejection. Since this application is eligible for continued examination under 37 CFR 1.114, and the fee set forth in 37 CFR 1.17(e) has been timely paid, the finality of the previous Office action has been withdrawn pursuant to 37 CFR 1.114. Applicant's submission filed on 11/24/25 has been entered.
Response to Arguments
Applicant’s arguments with respect to claims 1-7, 9 and 11-20 have been considered but are moot because the new ground of rejection does not rely on any reference applied in the prior rejection of record for any teaching or matter specifically challenged in the argument.
Claim Rejections - 35 USC § 112
The following is a quotation of 35 U.S.C. 112(b):
(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.
The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:
The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.
Claims 1-7, 9 and 11-20 are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or for applications subject to pre-AIA 35 U.S.C. 112, the applicant) regards as the invention.
Claim 1 recites the limitations “the whole spine” in lines 10 and 11, “the 3-D model” in line 11 and “the 3-D bounding boxes” in line 12. There is insufficient antecedent basis for these limitations in the claim. Claims 15 and 18 contain similar language and are likewise rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph.
Claim Rejections - 35 USC § 102
The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:
A person shall be entitled to a patent unless –
(a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.
Claims 1-3, 6, 7, 9, 11-13, 15, 17, 18 and 20 are rejected under 35 U.S.C. 102(a)(1) as being anticipated by Shi et al., “Automatic Localization and Segmentation of Vertebral Bodies in 3D CT Volumes with Deep Learning” (hereafter “Shi”).
Referring to claim 1, Shi discloses a system configured to detect vertebrae of a spine in volumetric image data, comprising:
a computing apparatus, comprising:
a memory including instructions for a vertebrae detection module (paragraph 42, We build a two-step algorithm using two deep learning models to automatically localize and segment vertebral bodies without specifying their types in CT volumes of arbitrary field-of-views);
a processor configured to execute the instructions to perform a two stage vertebrae detection in which: (i) a first set of bounding boxes for the vertebrae are detected in sagittal images and clustered in the volumetric image data in a first stage of the two stage vertebrae detection (page 43, We trained the 2D U-net using slices that only contained complete vertebral bodies and their labels. However, during testing the model was able to segment out the front spinal region for slices with partial vertebral body and partial intervertebral disc, as well as slices with whole intervertebral disc [FIG. 4 on page 45 shows bounding boxes around the vertebrae]), (ii) a panoramic image of the spine is generated based on the sagittal images and the detected first set of bounding boxes (page 44, Inputting a CT volume in the 2D U-net slice by slice, we can obtain a 3D mask of frontal spinal region. By summing the 3D mask along sagittal axis, we obtained a coronal projection image of the vertebra mask and the middle points of the nonzero pixels on each row formed the spinal centerline on the coronal view. Using the same projection method, we found the spine centerline along the sagittal view), wherein the panoramic image is a quasi-sagittal straightened 2-D view containing the whole spine aligned vertically and generated by sampling the 3-D model along a curve through centers of the 3-D bounding boxes, including sampling at each point along a projection of a vector from a front of the spine to a back of the spine onto a plane perpendicular to the curve (page 44, The centers of intervertebral discs then help to locate the centers of vertebral bodies. 
Putting the coordinates of the vertebra centers on z-axis back into the coronal and sagittal centerline models, we had all the 3d coordinates of vertebral centers), and (iii) a second set of bounding boxes for the vertebrae are detected in the generated panoramic image in a second stage of the two stage vertebrae detection (page 45-46, Finally, the 3D U-net segments the vertebral bodies in the 3D ROIs generated from the center coordinates of corresponding vertebral bodies); and
a display configured to display a 2-D image from the volumetric image data of a detected vertebra (page 42, Segmenting vertebral bodies in volumetric medical image such as CT and MRI have many clinical utilities, such as shape analysis, surgery navigation) (page 45, Besides, our two U-net models contain less than 400 million parameters in total that occupy less than 10 megabytes and thus are easy to deploy for most devices in clinical settings).
Referring to claim 2, Shi discloses wherein the vertebrae detection module includes a neural network trained to detect vertebrae (page 43, U-net and its variants have been successful in segmenting many organs and lesions in CT and MRIs, such as heart, liver, lung nodule, and brain tumors. U-net is an end-to-end convolutional neural network that can classify each voxel of an image as either background or the target and thus directly output segmentation result of the same size as the input image).
Referring to claim 3, Shi discloses wherein the vertebrae detection module detects the first set of bounding boxes based on a first predetermined confidence level and generates 2-D bounding boxes for detected vertebrae (page 45, Tested on three unknown thoracic CT volumes, our vertebra localization methods could identify 92% vertebral bodies).
Referring to claim 6, Shi discloses wherein the vertebrae detection module labels each vertebra of the first set of bounding boxes as Sacrum, C2 or other vertebra (page 45, Specifically, we only trained our 2D U-net with slices covering T1 to L3, but it can also predict the front spinal region of L3-L4 in an unseen CT volume).
Referring to claim 7, Shi discloses wherein the vertebrae detection module combines the sagittal images and the 2-D bounding boxes to generate a 3-D model with 3-D bounding boxes (page 44, The centers of intervertebral discs then help to locate the centers of vertebral bodies. Putting the coordinates of the vertebra centers on z-axis back into the coronal and sagittal centerline models, we had all the 3d coordinates of vertebral centers).
Referring to claim 9, Shi discloses where the vertebrae detection module extrapolates the curve before a first vertebra and after a last vertebra to add missing vertebrae (page 44, The coronal spine centerline provided indexes with which we sampled the CT volume to form a sagittal image that cut right through the coronal middle of the spine and served as the base image for intensity extraction in later steps. Then we could map the intensity curve along the sagittal spine centerline that follow a regular pattern, which could be used to determine the z-axis coordinates of vertebra centers).
It is noted that the claim limitation “to add missing vertebrae” is a recitation of intended use. A recitation of the intended use of the claimed invention must result in a structural difference between the claimed invention and the prior art in order to patentably distinguish the claimed invention from the prior art. If the prior art structure is capable of performing the intended use, then it meets the claim.
Referring to claim 11, Shi discloses wherein the vertebrae detection module detects the second set of bounding boxes based on a second predetermined confidence level and generates 2-D bounding boxes for the detected vertebrae (page 45, For our 3D U-net segmentation model, we need to maximize the dice coefficient and with training the coefficient reached 0.88 for training dataset and 0.86 for validation dataset).
Referring to claim 12, Shi discloses where the vertebrae detection module translates the 2-D bounding boxes for the panoramic image to 3-D space to define 3-D bounding boxes for the vertebrae (page 44, The centers of intervertebral discs then help to locate the centers of vertebral bodies. Putting the coordinates of the vertebra centers on z-axis back into the coronal and sagittal centerline models, we had all the 3d coordinates of vertebral centers).
Referring to claim 13, Shi discloses wherein the vertebrae detection module labels each vertebra of the second set of bounding boxes as Sacrum, C2 or other vertebra (page 45, Specifically, we only trained our 2D U-net with slices covering T1 to L3, but it can also predict the front spinal region of L3-L4 in an unseen CT volume).
Referring to claims 15 and 18, Shi discloses a computer-implemented method for detecting vertebrae of a spine in volumetric image data, comprising:
extracting a first set of bounding boxes for vertebrae in sagittal images of the spine (page 43, We trained the 2D U-net using slices that only contained complete vertebral bodies and their labels. However, during testing the model was able to segment out the front spinal region for slices with partial vertebral body and partial intervertebral disc, as well as slices with whole intervertebral disc [FIG. 4 on page 45 shows bounding boxes around the vertebrae]);
generating a panoramic image of the spine based on the sagittal images and the detected first set of bounding boxes (page 44, Inputting a CT volume in the 2D U-net slice by slice, we can obtain a 3D mask of frontal spinal region. By summing the 3D mask along sagittal axis, we obtained a coronal projection image of the vertebra mask and the middle points of the nonzero pixels on each row formed the spinal centerline on the coronal view. Using the same projection method, we found the spine centerline along the sagittal view), wherein the panoramic image is a quasi-sagittal straightened 2-D view containing the whole spine aligned vertically and generated by sampling the 3-D model along a curve through centers of the 3-D bounding boxes, including sampling at each point along a projection of a vector from a front of the spine to a back of the spine onto a plane perpendicular to the curve (page 44, The centers of intervertebral discs then help to locate the centers of vertebral bodies. Putting the coordinates of the vertebra centers on z-axis back into the coronal and sagittal centerline models, we had all the 3d coordinates of vertebral centers); and
extracting a second set of bounding boxes for the vertebrae in the generated panoramic image (page 45-46, Finally, the 3D U-net segments the vertebral bodies in the 3D ROIs generated from the center coordinates of corresponding vertebral bodies).
Referring to claims 17 and 20, Shi discloses
detecting the second set of 2-D bounding boxes in the panoramic image based on a second predetermined confidence level (page 45, For our 3D U-net segmentation model, we need to maximize the dice coefficient and with training the coefficient reached 0.88 for training dataset and 0.86 for validation dataset);
translating the 2-D bounding boxes in the panoramic image to 3-D space to define 3-D bounding boxes for the vertebrae (page 44, The centers of intervertebral discs then help to locate the centers of vertebral bodies. Putting the coordinates of the vertebra centers on z-axis back into the coronal and sagittal centerline models, we had all the 3d coordinates of vertebral centers); and
annotating the vertebrae of the 3-D bounding boxes (page 45, Tested on three unknown thoracic CT volumes, our vertebra localization methods could identify 92% vertebral bodies).
Claims 4, 5, 16 and 19 are rejected under 35 U.S.C. 103 as being unpatentable over Shi et al., “Automatic Localization and Segmentation of Vertebral Bodies in 3D CT Volumes with Deep Learning,” as applied to claim 1 above, and further in view of Nir et al., US Publication 2021/0056362 (hereafter “Nir”) and Bystrov et al., US Publication 2011/0280461 (hereafter “Bystrov”).
Referring to claim 4, Shi discloses wherein the vertebrae detection module detects the first set of bounding boxes beginning with an image of the sagittal images and moving towards a first image of the sagittal images and a last image of the sagittal images (page 44, Inputting a CT volume in the 2D U-net slice by slice, we can obtain a 3D mask of frontal spinal region. By summing the 3D mask along sagittal axis, we obtained a coronal projection image of the vertebra mask and the middle points of the nonzero pixels on each row formed the spinal centerline on the coronal view. Using the same projection method, we found the spine centerline along the sagittal view).
Shi does not disclose expressly detecting bounding boxes until stopping criteria is satisfied.
Nir discloses detecting bounding boxes until stopping criteria is satisfied (paragraph 84, The stopping criteria is either no more bounding boxes or when the subframe is simply too small).
Before the effective filing date of the claimed invention, it would have been obvious to a person of ordinary skill in the art to detect bounding boxes until stopping criteria is satisfied. The motivation for doing so would have been to prevent expending processing power when further detection of bounding boxes is no longer likely.
While Shi discloses detecting the first set of bounding boxes from a first image to a last image of the sagittal images, Shi does not disclose starting with a central image and moving in both directions towards a first image and a last image.
Bystrov discloses starting with a central image and moving in both directions towards a first image and a last image (paragraph 21, It is to be appreciated that the ROI can be located nearer a middle of the image sequence and propagated in both directions through image sequence).
Before the effective filing date of the claimed invention, it would have been obvious to a person of ordinary skill in the art to start with a central image and move in both directions towards a first image and a last image. The motivation for doing so would have been to allow the system to process images in the order in which they are received if not all of the images are ready to be processed. Further, there is a finite number of identified, predictable potential solutions to establishing an order of processing the sagittal images and it would have been obvious to try starting with a central image and moving in both directions towards a first image and a last image. Therefore, it would have been obvious to combine Nir and Bystrov with Shi to obtain the invention as specified in claim 4.
Referring to claim 5, Nir discloses wherein the stopping criteria includes a predetermined number of consecutive images of the sagittal images in which no vertebra is detected (paragraph 84, The stopping criteria is either no more bounding boxes or when the subframe is simply too small).
Referring to claims 16 and 19, Shi discloses wherein extracting the first set of bounding boxes includes:
detecting the first set of bounding boxes (page 43, We trained the 2D U-net using slices that only contained complete vertebral bodies and their labels. However, during testing the model was able to segment out the front spinal region for slices with partial vertebral body and partial intervertebral disc, as well as slices with whole intervertebral disc [FIG. 4 on page 45 shows bounding boxes around the vertebrae]);
generating 2-D bounding boxes for the detected vertebrae;
identifying centers of the 2-D bounding boxes (page 44, The centers of intervertebral discs then help to locate the centers of vertebral bodies. Putting the coordinates of the vertebra centers on z-axis back into the coronal and sagittal centerline models, we had all the 3d coordinates of vertebral centers); and
annotating the vertebrae of the 2-D bounding boxes (page 45, Tested on three unknown thoracic CT volumes, our vertebra localization methods could identify 92% vertebral bodies).
Shi does not disclose expressly detecting bounding boxes until stopping criteria is satisfied.
Nir discloses terminating detection in response to a predetermined number of consecutive sagittal images having no vertebra (paragraph 84, The stopping criteria is either no more bounding boxes or when the subframe is simply too small).
Before the effective filing date of the claimed invention, it would have been obvious to a person of ordinary skill in the art to detect bounding boxes until stopping criteria is satisfied. The motivation for doing so would have been to prevent expending processing power when further detection of bounding boxes is no longer likely.
While Shi discloses detecting the first set of bounding boxes from a first image to a last image of the sagittal images, Shi does not disclose starting with a central image and moving in both directions towards a first image and a last image.
Bystrov discloses detecting the first set of bounding boxes beginning with a center image of the sagittal images and moving outward to a first image of the sagittal images and a last image of the sagittal images (paragraph 21, It is to be appreciated that the ROI can be located nearer a middle of the image sequence and propagated in both directions through image sequence).
Before the effective filing date of the claimed invention, it would have been obvious to a person of ordinary skill in the art to start with a central image and move in both directions towards a first image and a last image. The motivation for doing so would have been to allow the system to process images in the order in which they are received if not all of the images are ready to be processed. Further, there is a finite number of identified, predictable potential solutions to establishing an order of processing the sagittal images and it would have been obvious to try starting with a central image and moving in both directions towards a first image and a last image. Therefore, it would have been obvious to combine Nir and Bystrov with Shi to obtain the invention as specified in claims 16 and 19.
Claim 14 is rejected under 35 U.S.C. 103 as being unpatentable over Shi et al., “Automatic Localization and Segmentation of Vertebral Bodies in 3D CT Volumes with Deep Learning,” as applied to claim 1 above, and further in view of well known prior art.
Referring to claim 14, Shi discloses the computing apparatus, but does not disclose expressly where the computing apparatus is a picture archiving communication system.
Official Notice is taken that it is well known and obvious within the art for a computing apparatus to be a picture archiving communication system (See MPEP 2144.03). The motivation for doing so would have been to maintain a database of images in order to view the images for medical purposes. Therefore, it would have been obvious to combine well known prior art with Shi to obtain the invention as specified in claim 14.
Conclusion
The prior art made of record and not relied upon is considered pertinent to applicant's disclosure.
Chinese Patent 111563880A
Any inquiry concerning this communication or earlier communications from the examiner should be directed to PETER K HUNTSINGER whose telephone number is (571)272-7435. The examiner can normally be reached Monday - Friday 8:30 - 5:00.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Benny Q Tieu can be reached at 571-272-7490. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/PETER K HUNTSINGER/Primary Examiner, Art Unit 2682