Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Priority
Acknowledgment is made of applicant’s claim for foreign priority under 35 U.S.C. 119(a)-(d).
Information Disclosure Statement
The information disclosure statement (IDS) submitted on 3/19/2024 is in compliance with the provisions of 37 CFR 1.97. Accordingly, the information disclosure statement is being considered by the examiner.
Claim Objections
Claim 2 is objected to because of the following informalities: in claim 2, line 3, “HU” should be changed to --Hounsfield Unit (HU)--. Appropriate correction is required.
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.
Claims 1 and 6-7 are rejected under 35 U.S.C. 103 as being unpatentable over CN-113362295-A, hereinafter referred to as “CN’295,” in view of the NPL article to Liang et al., “Contrastive Cross-Modal Pre-Training: A General Strategy for Small Sample Medical Imaging,” hereinafter referred to as “Liang et al.” Note that citations to CN’295 are based upon the supplied English translation of CN’295.
CN’295 discloses a liver Computed Tomography (CT) image segmentation system based on mixed supervised learning, the image segmentation system comprising: an image preprocessing unit (page 2, Step 2), a feature extraction unit (page 2, Step 3), a word vector segmentation unit (page 2, Step 5, where the labelled data set is split into multiple tiles, but note that CN’295 fails to teach the claimed word vector segmentation) and a single-layer convolutional classification unit (page 2, and the discussion of the “DenseNet” network that employs a CNN), the image preprocessing unit being in data connection with the feature extraction unit (Step 1 to Step 2), and the feature extraction unit being respectively in data connection with the word vector segmentation unit and the single-layer convolutional classification unit (Step 3 to Step 5).
However, CN’295 fails to explicitly disclose a word vector segmentation unit during processing of the labelled images, and also fails to disclose that the classification unit is a single layer.
In the same field of endeavor as CN’295 (medical image segmentation), Liang et al discloses in Section IID a contextualized natural language encoding that employs word vectoring to create a feature vector, permitting the absolute difference between the text and image feature vectors to be fed to a classification network that predicts whether the input image and input text are true pairs. As part of the processing, Liang et al also discloses in Figure 2 (bottom right 1x1 Conv layer) the use of a single-layer convolutional unit that performs classification.
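The cross-modal matching step attributed to Liang et al above can be illustrated with a minimal sketch. All names, dimensions, and weights below are illustrative assumptions for explanation only, not values taken from Liang et al; the classification network is reduced to a single linear layer for brevity:

```python
import numpy as np

def match_score(img_feat, txt_feat, w, b):
    """Predict whether an image/text pair is a true pair from the
    absolute difference of their feature vectors (illustrative only)."""
    diff = np.abs(img_feat - txt_feat)    # element-wise |image - text|
    logit = diff @ w + b                  # toy single-layer classifier
    return 1.0 / (1.0 + np.exp(-logit))  # probability of "true pair"

# Toy usage: identical features give |diff| = 0, so with zero weights
# and zero bias the classifier outputs exactly 0.5.
img = np.array([0.2, 0.5, 0.1])
txt = np.array([0.2, 0.5, 0.1])
print(match_score(img, txt, np.zeros(3), 0.0))  # 0.5
```

In the reference, the classification network is trained so that true image/report pairs score higher than mismatched ones; the sketch only shows the absolute-difference input construction.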
Therefore, it would have been obvious before the effective filing date of the claimed invention to have provided the image segmentation method of CN’295 with the word vector segmentation unit taught by Liang et al so as to permit the use of readily available textual imaging reports, reducing the need for costly and laborious labelled data by 67%-98% (see Abstract of Liang et al). In addition, the single-layer convolutional unit for classification taught by Liang et al would have also been obvious to one of ordinary skill in the art so as to simplify and reduce the number of independent tasks being performed during the segmentation process, thus easing the computational burden on the computing system.
As per claim 6, CN’295 in view of Liang et al further disclose segmentation in the algorithm comprising a testing stage and a training stage, the testing stage comprising: performing CT scanning to obtain an abdominal CT image to be segmented, splitting the CT image into single-frame two-dimension CT slices along a human body axis (CN’295 at page 2, Step 1), sequentially inputting the single-frame two-dimension CT slices into the image preprocessing unit (Step 2), inputting a preprocessed image into the feature extraction unit (Step 3), and inputting output deep-level image features into the word vector segmentation unit to complete a liver pixel-level segmentation task (see Liang et al, Section III and Section IIIA). As explained above with claim 1, it would have been obvious to one of ordinary skill in the art to have employed the addition of a word vector segmentation unit so as to permit the use of readily available textual imaging reports in order to reduce the need for costly and laborious labelled data by 67%-98% (see Abstract of Liang et al).
As per claim 7, in the training stage, a CT scanning dataset is constructed, and a construction process of the dataset comprises: acquiring abdominal CT scanning data (CN’295, Step 1), splitting the CT scanning data into a series of two-dimension slice images along the human body axis (Step 1), and randomly selecting from the images classified into the foreground a part of images in a number much less than a total number of the images classified into the foreground to perform a pixel-level annotation to obtain strong labels (CN’295, Step 2: “mark some pictures in the slice data set to obtain a labeled data set {X} and an unlabeled data set {Y}”); and a number of the strong labels and a number of foreground weak labels being respectively expressed as s and w (as relates to data set {X} and data set {Y}).
CN’295 fails to disclose the performing of manual classification on all slice images based on whether the slice images contain a liver to obtain weak labels, images with the liver being classified into a foreground, and images without the liver being classified into a background.
However, this manual “preselection” of sliced images by whether the particular slice contains a portion of the liver (foreground) or not (background) would have been obvious before the effective filing date of the claimed invention as doing so would “weed out” the slices that do not contain the liver, thus reducing the overall number of images that need to be segmented, as well as reducing the time needed to process the data set (all other things being equal) since fewer image slices need to be processed.
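The preselection rationale above amounts to a simple filter over the slice set. A minimal sketch, assuming the per-slice liver/no-liver labels come from the manual classification described in the claim (all names here are hypothetical):

```python
def preselect(slices, contains_liver):
    """Split slices into foreground (liver present) and background
    (no liver), so only foreground slices proceed to pixel-level
    segmentation (illustrative sketch of the weak-labeling step)."""
    foreground = [s for s, has in zip(slices, contains_liver) if has]
    background = [s for s, has in zip(slices, contains_liver) if not has]
    return foreground, background

# Toy usage: three slices, two of which were manually marked as
# containing liver tissue.
fg, bg = preselect(["slice1", "slice2", "slice3"], [True, False, True])
print(len(fg), len(bg))  # 2 1
```

Only the foreground list would then be sampled for the much smaller set of pixel-level (strong) annotations.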
Claim 2 is rejected under 35 U.S.C. 103 as being unpatentable over CN’295 in view of Liang et al as applied to claims 1 and 6-7 above, and further in view of US PGPub 2020/0202524 to Karki et al, hereinafter “Karki et al.”
With regard to claim 2, CN’295 discloses a liver segmentation process that takes slices of images and preprocesses the image slices, see CN’295, page 2, Step 2. Liang et al discloses a medical imaging method that uses unlabeled images along with corresponding medical records in order to train a network to interpret imagery (Fig. 2 of Liang et al discloses a single-layer convolutional classification unit). However, CN’295 in view of Liang et al fails to disclose the preprocessing of the image data including truncating a range of HU values in a CT slice image into [H1, H2], where H1 and H2 respectively represent a lower limit and an upper limit of rough HU values capable of preserving a liver tissue intact and removing a bone structure, and then scaling a size of the slice image to (H0, W0), where (H0, W0) represents a size of an input image of the feature extraction unit.
However, in the same field of endeavor (medical image segmentation), Karki et al discloses at [0002]-[0003] and [0022]-[0024] that raw medical images are preprocessed before being input into the trained model. This preprocessing involves the use of Hounsfield Unit (HU) windows that permit isolation of anatomy and lesions. The resulting grayscale image is input to the image feature extraction unit at a resolution that inherently matches the resolution of the feature extraction unit.
It would have been obvious before the effective filing date of the claimed invention to have provided HU windowing to the preprocessed medical slice images of CN’295 in view of Liang et al as taught by Karki et al as doing so would permit the incoming raw image slices to show only the anatomy or lesions that are targeted by the user (doctor, radiologist, etc.) for the particular body part of the patient. Karki et al explains that in brain CT scans, limiting the HU values can make more texture visible, or make the lesion and skull visible, depending upon the HU window selected.
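The HU truncation and rescaling recited in claim 2 can be sketched as follows. The window limits and target size are placeholder values chosen for illustration; H1, H2, H0, and W0 are the claim's symbols, and the nearest-neighbor resize is an assumed implementation, not one taken from CN’295, Liang et al, or Karki et al:

```python
import numpy as np

def preprocess_slice(ct_slice, h1=-100, h2=400, h0=256, w0=256):
    """Truncate HU values into [h1, h2], normalize to [0, 1], and
    resize to (h0, w0) by nearest-neighbor sampling (illustrative)."""
    clipped = np.clip(ct_slice, h1, h2)           # truncate the HU window
    scaled = (clipped - h1) / float(h2 - h1)      # map [h1, h2] -> [0, 1]
    rows = np.linspace(0, scaled.shape[0] - 1, h0).astype(int)
    cols = np.linspace(0, scaled.shape[1] - 1, w0).astype(int)
    return scaled[np.ix_(rows, cols)]             # nearest-neighbor resize

# Toy usage on a 512x512 slice of random HU values:
ct = np.random.randint(-1000, 1000, size=(512, 512))
out = preprocess_slice(ct)
print(out.shape, out.min() >= 0.0, out.max() <= 1.0)  # (256, 256) True True
```

A soft-tissue window such as [-100, 400] HU is a common choice for abdominal CT because it preserves liver parenchyma while saturating dense bone, which mirrors the claim's stated purpose for [H1, H2].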
Allowable Subject Matter
Claims 3-5 and 8-9 are objected to as being dependent upon a rejected base claim, but would be allowable if rewritten in independent form including all of the limitations of the base claim and any intervening claims.
Conclusion
The prior art made of record and not relied upon is considered pertinent to applicant's disclosure. The references disclosed set forth the general state of the art surrounding image segmentation of medical images based upon neural networks and the challenges faced with training models with limited amounts of labelled images.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to DAVID OMETZ, whose telephone number is (571)272-7593. The examiner can normally be reached M-F, 8am-4pm.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Sumati Lefkowitz can be reached at 571-272-3638. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/DAVID OMETZ/Primary Examiner, Art Unit 2672