DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Status of claims: Claims 1-10 and 13-17 are examined below. Claims 11-12 are canceled.
Information Disclosure Statement
The information disclosure statements (IDS) submitted on 11/20/2023, 12/4/2024, and 5/29/2025 were filed in compliance with the provisions of 37 CFR 1.97. Accordingly, the information disclosure statements have been considered by the examiner.
Claim Rejections - 35 USC § 101
35 U.S.C. 101 reads as follows:
Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.
Claim 16 is rejected under 35 U.S.C. 101 because the claimed invention is directed to non-statutory subject matter. The claim does not fall within at least one of the four categories of patent-eligible subject matter because claim 16 recites "A program causing…". A review of the specification does not disclose that the "program" is embodied on a non-transitory computer-readable medium. Further, dependent claim 17's recitation of a "non-transitory computer-readable" medium shows that the "program" of claim 16 is not limited to non-transitory embodiments. The examiner advises rolling the language of claim 17 up into claim 16, replacing "program" with "non-transitory computer-readable medium," to overcome the rejection.
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 1-3 and 13-17 are rejected under 35 U.S.C. 103 as being unpatentable over AKSELROD-BALLIN et al (US 2020/0395123) in view of Jarrard et al (US 2020/0202525):
Claim 1:
AKSELROD-BALLIN et al (US 2020/0395123) teach the following subject matter:
A feature extraction device (0075-0079 teach a device for outputting a likelihood for target tissue of anatomical images of prostate, colon, breast, etc. cancer) comprising:
an image processor that, when an image is input, calculates a likelihood of the inputted image belonging to a first image class by reducing a dimensionality (0171 teaches resizing for standardization (predetermined size) of images) of the input image using an image model with a network having a plurality of layers, and regards, as a feature parameter of the inputted image, an intermediate vector output from an intermediate layer of the plurality of layers (0047-0050 teach a neural network with an intermediate layer for output of feature vectors (extracted feature parameters) of anatomical images for likelihood of malignancy of target tissue of the inputted images; paragraphs 0003-0005 further detail intermediate layers; 0011 details the feature vector; 0033 teaches the feature vector outputted by the intermediate layer and fed into a classifier component of the model; 0090 teaches a neural network with a number of layers, where 0172-0173 detail a DNN layer for feature extraction and a layer for batch normalization for the classification layer); and
a feature processor that, when an image group is input, inputs images included in the inputted image group into the image processor to calculate the likelihoods and the feature parameters respectively (0012 details computing a plurality of anatomical images, selected from the group consisting of: likelihood of malignancy for the images, and maximum of the likelihood of malignancy of first and second images (image group input); 0033, 0035, 0048, 0059 detail anatomical image(s) (image group) processed by the neural network for a feature vector for likelihood of malignancy in the target tissue; 0168 details a neural network (DNN) that predicts from the features),
a classification processor that, when a target image group related to a target and additional data related to the target are input, predicts whether the target belongs to a first target class or not, from (0165-0168 teach additional data such as: gynecologic history (e.g., age at menarche, number of pregnancies, number of children, menopausal status), personal history of breast cancer, family history of breast and ovarian cancer, self-reported symptoms, and previous procedures; BI-RADS assessments were reported for DM and US separately; past BD and current recommendations were extracted from the radiologist's report, where 0168 details that these are factored in to obtain the final value indicative of likelihood of malignancy when running the final classification on the two tasks (cancer-positive biopsy prediction, normal identification); 0175 details feature contribution analysis (additional data) such as family history);
feature information output from the feature processor by inputting the inputted target image group into the feature processor (0166-0168, where 0168 details that the feature vector includes the entire set of clinical features and the image features extracted from the neural network component (e.g., DNN) for the two views (CC and MLO), as described herein), and
the inputted additional data using a classification model (0033 teaches the feature vector outputted by the intermediate layer and fed into a classifier component of the model).
AKSELROD-BALLIN et al teaches multiple images fed into the neural network for a feature vector to compute a relationship for likelihood of malignancy in first and second images for a selected image in paragraph 0097, but does not teach selecting, based on the calculated likelihoods, a predetermined number of representative images from the inputted image group, and outputting, as feature information of the image group, a vector, a tensor, or an array of parameters calculated for the selected predetermined number of representative images.
Jarrard et al (US 2020/0202525) teaches:
selects, based on the calculated likelihoods, a predetermined number of representative images from the inputted image group, and outputs, as feature information of the image group, a vector, a tensor, or an array of parameters calculated for the selected predetermined number of representative images (0081 teaches a selected image dataset for pathologic abnormalities associated with prostate cancer presence, where 0050 details its use for classification with machine learning for feature data regarding tumor grade).
AKSELROD-BALLIN et al and Jarrard et al are both in the field of image analysis, specifically prostate cancer analysis with machine learning using feature parameters/vectors and classification, such that the combined outcome is predictable.
Therefore, it would have been obvious to one having ordinary skill in the art before the effective filing date to modify AKSELROD-BALLIN et al with Jarrard et al regarding the selected set of image data, such that the selected images are used to create a machine learning model with precision above 95%, as disclosed by Jarrard et al in 0081.
Claim 2:
AKSELROD-BALLIN et al teach:
The feature extraction device according to claim 1, wherein an image included in the target image group related to the target belonging to the first image class corresponds to the target belonging to the first target class (0011-0012 teach a plurality of images fed into the neural network, where 0012 details first and second images (image group) with likelihood of malignancy (first target class), and where the target is target tissue of anatomical images of prostate, colon, breast, etc. cancer in paragraphs 0075-0079).
Claim 3:
AKSELROD-BALLIN et al teach:
The feature extraction device according to claim 1, wherein the target image group comprises images obtained by dividing a photograph captured of the target into a predetermined size (0171 teaches resizing for standardization (predetermined size) of images).
Claim 13:
AKSELROD-BALLIN et al teach:
The feature extraction device according to claim 1, wherein the image model is a model related to a deep convolutional network (0032 teaches a deep learning model; 0040 details using a deep neural network (DNN); 0047 details the use of a convolutional neural network for cancer classification; figure 6J and 0155-0157 teach the use of deep convolutional neural networks for cancer analysis).
Claim 14:
AKSELROD-BALLIN et al teach:
The feature extract device according to claim 1, wherein the classification model is a model related to linear regression, logistic regression, ridge regression, lasso regression, or a support vector machine (0084 teaches classifier, for example, based on an implementation of a gradient boosting machine (GBM), logistic regression, support vector machine (SVM), neural network, or other architecture.).
Claim 15:
AKSELROD-BALLIN et al (US 2020/0395123) teaches:
A feature extraction method (figures 1-3 teaches method) comprising:
inputting a target image group related to a target and additional data related to the target into a feature extraction device (0011-0012 teach a feature vector further computed from inputted first and second images (image group), etc., for a sum of likelihood of malignancy; 0097 details multiple images fed into the neural network for a feature vector to compute a relationship for likelihood of malignancy in first and second images, etc.; and 0165-0168 teach additional data such as: gynecologic history (e.g., age at menarche, number of pregnancies, number of children, menopausal status), personal history of breast cancer, family history of breast and ovarian cancer, self-reported symptoms, and previous procedures; BI-RADS assessments were reported for DM and US separately; past BD and current recommendations were extracted from the radiologist's report, where 0168 details that these are factored in to obtain the final value indicative of likelihood of malignancy when running the final classification on the two tasks (cancer-positive biopsy prediction, normal identification); 0175 details feature contribution analysis (additional data) such as family history);
calculating respectively, by the feature extraction device, likelihoods of images included in the inputted target image group belonging to a first image class by reducing a dimensionality (0171 teaches resizing for standardization (predetermined size) of images) of the inputted image using an image model with a network having a plurality of layers, and regarding respectively, as feature parameters of the images, intermediate vectors output from an intermediate layer of the plurality of layers (0047-0050 teach a neural network with an intermediate layer for output of feature vectors (extracted feature parameters) of anatomical images for likelihood of malignancy of target tissue of the inputted images; paragraphs 0003-0005 further detail intermediate layers; 0011 details the feature vector; 0033 teaches the feature vector outputted by the intermediate layer and fed into a classifier component of the model; 0090 teaches a neural network with a number of layers, where 0172-0173 detail a DNN layer for feature extraction and a layer for batch normalization for the classification layer);
outputting, by the feature extraction device and as feature information of the image group, a vector, a tensor, or an array obtained by arranging intermediate vectors regarded as the feature parameters calculated for the representative images (0011-0012 teach a feature vector further computed from inputted first and second images (image group), etc., for a sum of likelihood of malignancy; 0097 details multiple images fed into the neural network for a feature vector to compute a relationship for likelihood of malignancy in first and second images, etc.); and predicting, by the feature extraction device, whether the target belongs to a first target class or not, from the output feature information (0165-0168 teach additional data such as: gynecologic history (e.g., age at menarche, number of pregnancies, number of children, menopausal status), personal history of breast cancer, family history of breast and ovarian cancer, self-reported symptoms, and previous procedures; BI-RADS assessments were reported for DM and US separately; past BD and current recommendations were extracted from the radiologist's report, where 0168 details that these are factored in to obtain the final value indicative of likelihood of malignancy when running the final classification on the two tasks (cancer-positive biopsy prediction, normal identification); 0175 details feature contribution analysis (additional data) such as family history), and
the inputted additional data (0165-0168 teach additional data such as: gynecologic history (e.g., age at menarche, number of pregnancies, number of children, menopausal status), personal history of breast cancer, family history of breast and ovarian cancer, self-reported symptoms, and previous procedures)
by using a classification model (0033 teaches the feature vector outputted by the intermediate layer and fed into a classifier component of the model).
AKSELROD-BALLIN et al teaches multiple images fed into the neural network for a feature vector to compute a relationship for likelihood of malignancy in first and second images for a selected image in paragraph 0097, but does not teach selecting, by the feature extraction device and based on the calculated likelihoods, a predetermined number of representative images from the inputted target image group.
Jarrard et al (US 2020/0202525) teaches:
selecting, by the feature extraction device and based on the calculated likelihoods, a predetermined number of representative images from the inputted target image group (0081 teaches a selected image dataset for pathologic abnormalities associated with prostate cancer presence, where 0050 details its use for classification with machine learning for feature data regarding tumor grade).
AKSELROD-BALLIN et al and Jarrard et al are both in the field of image analysis, specifically prostate cancer analysis with machine learning using feature parameters/vectors and classification, such that the combined outcome is predictable.
Therefore, it would have been obvious to one having ordinary skill in the art before the effective filing date to modify AKSELROD-BALLIN et al with Jarrard et al regarding the selected set of image data, such that the selected images are used to create a machine learning model with precision above 95%, as disclosed by Jarrard et al in 0081.
Claim 16:
AKSELROD-BALLIN et al (US 2020/0395123) teaches:
A program causing a computer (0051-0053 teach a computer program) to function as:
an image processor that, when an image is input, calculates a likelihood of the inputted image belonging to a first image class by reducing a dimensionality (0171 teaches resizing for standardization (predetermined size) of images) of the inputted image using an image model with a network having a plurality of layers, and regards, as a feature parameter of the inputted image, an intermediate vector output from an intermediate layer of the plurality of layers (0047-0050 teach a neural network with an intermediate layer for output of feature vectors (extracted feature parameters) of anatomical images for likelihood of malignancy of target tissue of the inputted images; paragraphs 0003-0005 further detail intermediate layers; 0011 details the feature vector; 0033 teaches the feature vector outputted by the intermediate layer and fed into a classifier component of the model; 0090 teaches a neural network with a number of layers, where 0172-0173 detail a DNN layer for feature extraction and a layer for batch normalization for the classification layer);
a feature processor that, when an image group is input, inputs images included in the inputted image group into the image processor to calculate the likelihoods and the feature parameters respectively (0012 details computing a plurality of anatomical images, selected from the group consisting of: likelihood of malignancy for the images, and maximum of the likelihood of malignancy),
outputs, as feature information of the image group, a vector, a tensor, or an array obtained by arranging intermediate vectors regarded as the feature parameters calculated for the selected predetermined number of representative images (0011-0012 teach a feature vector further computed from inputted first and second images (image group), etc., for a sum of likelihood of malignancy; 0097 details multiple images fed into the neural network for a feature vector to compute a relationship for likelihood of malignancy in first and second images, etc.); and a classification processor that, when a target image group related to a target and additional data related to the target are input, predicts whether the target belongs to a first target class or not, from feature information output from the feature processor by inputting the inputted target image group into the feature processor (0165-0168 teach additional data such as: gynecologic history (e.g., age at menarche, number of pregnancies, number of children, menopausal status), personal history of breast cancer, family history of breast and ovarian cancer, self-reported symptoms, and previous procedures;
BI-RADS assessments were reported for DM and US separately; past BD and current recommendations were extracted from the radiologist's report, where 0168 details that these are factored in to obtain the final value indicative of likelihood of malignancy when running the final classification on the two tasks (cancer-positive biopsy prediction, normal identification); 0175 details feature contribution analysis (additional data) such as family history), and the inputted additional data (0165-0168 teach additional data such as: gynecologic history (e.g., age at menarche, number of pregnancies, number of children, menopausal status), personal history of breast cancer, family history of breast and ovarian cancer, self-reported symptoms, and previous procedures) by using a classification model (0033 teaches the feature vector outputted by the intermediate layer and fed into a classifier component of the model).
AKSELROD-BALLIN et al teaches multiple images fed into the neural network for a feature vector to compute a relationship for likelihood of malignancy in first and second images for a selected image in paragraph 0097, but does not teach selecting, based on the calculated likelihoods, a predetermined number of representative images from the inputted image group.
Jarrard et al (US 2020/0202525) teaches:
selects, based on the calculated likelihoods, a predetermined number of representative images from the inputted image group (0081 teaches a selected image dataset for pathologic abnormalities associated with prostate cancer presence, where 0050 details its use for classification with machine learning for feature data regarding tumor grade).
AKSELROD-BALLIN et al and Jarrard et al are both in the field of image analysis, specifically prostate cancer analysis with machine learning using feature parameters/vectors and classification, such that the combined outcome is predictable.
Therefore, it would have been obvious to one having ordinary skill in the art before the effective filing date to modify AKSELROD-BALLIN et al with Jarrard et al regarding the selected set of image data, such that the selected images are used to create a machine learning model with precision above 95%, as disclosed by Jarrard et al in 0081.
Claim 17:
AKSELROD-BALLIN et al teaches:
A non-transitory computer-readable information recording medium on which the program according to claim 16 is stored (0052 teaches that a computer readable storage medium, as used herein, is not to be construed as being transitory signals per se, such as radio waves or other freely propagating electromagnetic waves, electromagnetic waves propagating through a waveguide or other transmission media (e.g., light pulses passing through a fiber-optic cable), or electrical signals transmitted through a wire).
Claims 4-6 are rejected under 35 U.S.C. 103 as being unpatentable over AKSELROD-BALLIN et al (US 2020/0395123) in view of Jarrard et al (US 2020/0202525) as applied to claim 1 above, and further in view of Vos et al (US 2021/0217524).
Claim 4:
AKSELROD-BALLIN et al and Jarrard et al, especially AKSELROD-BALLIN et al, teach the following subject matter: The feature extraction device according to claim 1, wherein
the target image group includes a plurality of images in which a prostate of the target is captured by ultrasound (0033 teaches use of ultrasound for target tissue),
the first target class is a class that indicates that the target is suffering from prostate cancer (0075-0079 teach a device for outputting a likelihood for target tissue of anatomical images of prostate, colon, breast, etc. cancer).
AKSELROD-BALLIN et al does not teach: the additional data includes an age, a PSA value, a TPV value, and a PSAD value of the target.
Vos et al (US 2021/0217524) teaches:
the additional data includes an age (0037 teaches age consideration), a PSA value, a TPV value, and a PSAD value of the target (0038 teaches data associated with a cancerous lesion or tumor of the subject's prostate, such as PSA, prostate volume in ml (TPV), and PSA density (PSAD), where 0034 teaches such an application applied to prostate cancer using machine learning with a classifier and features).
AKSELROD-BALLIN et al, Jarrard et al, and Vos et al are all in the field of using machine learning/neural networks for assessing prostate cancer with features and classifiers, such that the combined outcome is predictable.
Therefore, it would have been obvious to one having ordinary skill in the art before the effective filing date to modify AKSELROD-BALLIN et al and Jarrard et al with Vos et al, such that the information above assists in a determination based on all of the information available to a medical professional before surgery is performed on a cancerous tumor, as disclosed by Vos et al in 0038.
Claim 5:
Vos et al teaches:
The feature extraction device according to claim 4, wherein in training data (0019-0020 teach training of a classifier with a set of training data) of the image model, the first image class is a class that indicates that, in a biopsy specimen (0007 teaches a biopsy acquired, where 0017-0019 detail that it is from the prostate), a Gleason score (0038 teaches that tumors may also be graded using a Gleason score; the Gleason score is based on how much the cancer looks like healthy tissue), assigned to a specimen site corresponding to an image site captured in the image, is greater than or equal to a predetermined value (0038 details values such as a PSA value, a TPV value, and a PSAD value (predetermined values) of the target for determination).
Claim 6:
Vos et al teaches:
The feature extraction device according to claim 4, wherein in training data of the image model, the first image class is a class that indicates that a target related to the image is suffering from prostate cancer (0037-0038 teaches subject suffering from prostate cancer).
Allowable Subject Matter
Claims 7-8 are objected to as being dependent upon a rejected base claim, but would be allowable if rewritten in independent form including all of the limitations of the base claim and any intervening claims. At the time of examination, the examiner was unable to find prior art teaching the subject matter of claim 7.
Claim 9 is objected to as being dependent upon a rejected base claim, but would be allowable if rewritten in independent form including all of the limitations of the base claim and any intervening claims. At the time of examination, the examiner was unable to find prior art teaching the subject matter of claim 9.
Claim 10 is objected to as being dependent upon a rejected base claim, but would be allowable if rewritten in independent form including all of the limitations of the base claim and any intervening claims. At the time of examination, the examiner was unable to find prior art teaching the subject matter of claim 10.
Conclusion
The prior art made of record and not relied upon is considered pertinent to applicant's disclosure.
Metzger et al (US 2016/0292855) teaches a MEDICAL IMAGING DEVICE RENDERING PREDICTIVE PROSTATE CANCER VISUALIZATIONS USING QUANTITATIVE MULTIPARAMETRIC MRI MODELS.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to TSUNG-YIN TSAI whose telephone number is (571)270-1671. The examiner can normally be reached 7am-4pm.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Bhavesh Mehta can be reached at (571) 272-7453. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/TSUNG YIN TSAI/Primary Examiner, Art Unit 2656