Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Claim Objections
Claim 12 is objected to because of the following informalities:
In claim 12, line 1, “a report” should be “the report”.
Appropriate correction is required.
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
The text of those sections of Title 35, U.S. Code not included in this action can be found in a prior Office action.
Claims 1-4, 7-13, and 15-19 are rejected under 35 U.S.C. 103 as being unpatentable over Lay et al. (US 20190370965 A1) in view of Metzger et al. (US 20160292855 A1).
Regarding claim 1, Lay et al. teaches a system for detecting lesions on a prostate using multiparametric magnetic resonance imaging (see para [0032]; “Multi-parametric magnetic resonance imaging (mpMRI) has been demonstrated to be an accurate imaging technique to detect prostate cancer”), comprising: a magnetic resonance imaging (MRI) device for generating a plurality of MRI images of a prostate for a patient (see para [0006]; “An example system classifies individual pixels inside the prostate as potential sites of cancer using a combination of spatial, intensity and texture features extracted from three imaging sequences: T2W, ADC, and b-2000 images”, see also para [0089]; “The MR images are obtained from a 3.0 T whole-body MRI system. T2-weighted MR images of the entire prostate were obtained in axial plane at the scan resolution of 0.2734×0.2734×3.0 mm.sup.3; field of view 140 mm; image slice dimension 512×512. The center of the prostate is the focal point for the MRI scan”); and at least one computing device in operable communication with the MRI device (see claim 1; “the method comprising: receiving mpMRI data for a prostate; producing prostate cancer probability images”), the at least one computing device configured to detect lesions on the prostate of the patient by (see para [0032]; “Multi-parametric magnetic resonance imaging (mpMRI) has been demonstrated to be an accurate imaging technique to detect prostate cancer”): segmenting a plurality of MRI images to define (see para [0009]; “we describe both patch-based and holistic (image-to-image) deep learning methods for segmentation of the prostate”): a first zone of the prostate depicted in each of the plurality of images, and a second zone of the prostate depicted in each of the plurality of images, the second zone surrounding the first zone, wherein each of the first zone and the second zone includes anatomical data relating to the prostate (see para [0062]; “The top 5 most selected features by Random 
Forest were the signed distance to the transition zone, the T2W mean and median, the B2000 median and the ADC median. The distance map feature was twice as likely to be picked as T2W mean and is especially useful to differentiate between the transition zone and peripheral zone”, see also para [0069]; “Generally lesions found in either one of the zones have similar intensity characteristics, but the backgrounds surrounding the two different regions are visually distinct”); collapsing the plurality of MRI images into a single, combined image (see para [0043]; “produce a sequence-specific probability map…These probability maps can then be used as supplemental features to Random Forest along with image features to produce a final probability map”, see also para [0006]; “a combination of spatial, intensity and texture features extracted from three imaging sequences: T2W, ADC, and b-2000 images”, Note: producing probability images from T2W, ADC, and b-2000 images and using them as supplemental features to produce a final probability map, i.e., a “single combined image”); denoising of the single, combined image (see para [0043]; “The image filter in the second configuration subtracts the prostate intensity mean from the image and then sets resulting pixels to 0 when they have intensity less than or larger than 0 depending on sequence (T2W, ADC larger than 0, B2000 less than 0) This follows from the observation that lesions visible in T2W and ADC images have relatively lower intensity than normal prostate tissue and that lesions visible in B2000 tend to have relatively higher intensity”, Note: filtering for intensity normalization implies a denoising step); and identifying at least one positive region of the plurality of regions where the region image intensity value for the at least one positive region is equal to or greater than the first image intensity threshold or the second image intensity threshold (see para [0057]; “If the 90.sup.th percentile of the CAD probability scores 
in a cancer volume exceeds a probability threshold, then this is taken to be a true positive. In other words, if at least 10% of the cancer volume has relatively high probability, then the CAD is said to have detected the cancerous lesion”); and identifying a portion of the prostate including a detected lesion based on the identified at least one positive region of the plurality of regions, the identified portion of the prostate corresponding to: the first zone or the second zone, and an anatomic location of the prostate based on the anatomical data relating to the prostate (see para [0017]; “The red region in the “Annotation” column is the hand-drawn cancerous lesion annotation. The red regions in the “Probability Map” column denote higher predicted probability of cancer while green and blue denote low predicted probability of cancer… The red regions correspond to positive (cancer) examples, while the green regions correspond to negative (normal) examples. The blue region in the ‘Contours’ image represents a 3 mm band where neither positive nor negatives examples are sampled”). However, Lay et al. 
does not teach rewindowing of the single, combined image by: determining a first image intensity spectrum for the first zone of the prostate depicted in the single, combined image, determining a second image intensity spectrum for the second zone of the prostate depicted in the single, combined image, and defining: a first image intensity threshold for the first zone of the prostate based on the first image intensity spectrum, and a second image intensity threshold for the second zone of the prostate based on the second image intensity spectrum; dividing the single, combined image into a plurality of distinct regions, each of the plurality of regions associated with one of the first zone or the second zone of the prostate, concatenating each of the plurality of regions to identify individual region image intensity values, comparing the region image intensity value for each of the plurality of regions to the first image intensity threshold or the second image intensity threshold based upon the region's association with the first zone or the second zone of the prostate.
In the same field of endeavor, Metzger et al. teach rewindowing of the single, combined image by: determining a first image intensity spectrum for the first zone of the prostate depicted in the single, combined image, determining a second image intensity spectrum for the second zone of the prostate depicted in the single, combined image, and defining: a first image intensity threshold for the first zone of the prostate based on the first image intensity spectrum, and a second image intensity threshold for the second zone of the prostate based on the second image intensity spectrum (see para [0030]; “model 16 may, in some examples, be a set of coefficients for respective parameters and/or threshold CBS values. For each value of the parameter maps for imaged tissue, the parameter values may be plugged into the equation, and the resulting CBS value may be evaluated against the threshold CBS values.”, see also claim 4; “the specified threshold comprises determining locations of the CBS map that correspond to CBS values that are greater than a first CBS value and less than a second CBS value”, Note: the CBS distribution over the prostate constitutes the intensity spectrum, and the threshold CBS values constitute the claimed thresholds); dividing the single, combined image into a plurality of distinct regions, each of the plurality of regions associated with one of the first zone or the second zone of the prostate, and concatenating each of the plurality of regions to identify individual region image intensity values (see para [0005]; “generate at least one Composite Biomarker Score (CBS) for the imaged tissue of the patient… The processor is further configured to generate and output, based on the respective CBS for each voxel of the imaged tissue”, Note: a CBS is computed for each voxel; each voxel is a distinct region with a known, fixed dimension, and these voxel-regions collectively divide the combined image); and comparing the region image intensity value for each of the 
plurality of regions to the first image intensity threshold or the second image intensity threshold based upon the region's association with the first zone or the second zone of the prostate (see para [0030]; “the resulting CBS value may be evaluated against the threshold CBS values”, see also claim 3; “determining locations of the CBS map that correspond to CBS values that satisfy a specified threshold”). Accordingly, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the method for prostate cancer computer-aided diagnosis (CAD) systems using a Random Forest classifier to detect prostate cancer of Lay et al. in view of the predictive prostate cancer visualizations using quantitative multiparametric magnetic resonance imaging (mpMRI) models of Metzger et al. in order to provide techniques for developing and using mpMRI models for user-independent prediction of prostate cancer (see para [0030]).
Regarding claim 2, the rejection of claim 1 is incorporated herein.
Lay et al. in the combination further teach wherein the at least one computing device is configured to detect the lesions on the prostate of the patient further by generating: probabilities for respective risk categories associated with the detected lesion, or options for modifications for the report (see para [0017]; “The red regions in the “Probability Map” column denote higher predicted probability of cancer while green and blue denote low predicted probability of cancer”).
Metzger et al. in the combination further teach generating a report based on the identifying of the portion of the prostate including the detected lesion, the generated report including at least one of: visual data relating to the identified portion of the prostate including the detected lesion (see para [0005]; “The processor is further configured to generate and output, based on the respective CBS for each voxel of the imaged tissue, a visual indication of whether the corresponding imaged tissue is predicted to include cancer. The indication may, for example, comprise an overlay image for the medical imaging data for the imaged tissue, the overlay including regions of the predicted cancer”).
Regarding claim 3, the rejection of claim 2 is incorporated herein.
Metzger et al. in the combination further teach wherein the generated report further includes a set of anatomic coordinates corresponding to the location of the detected lesion on the prostate, the set of anatomic coordinates based on the anatomical data relating to the prostate (see para [0006]; “generate a respective Composite Biomarker Score (CBS) for each voxel of the imaged tissue”).
Regarding claim 4, the rejection of claim 1 is incorporated herein.
Metzger et al. in the combination further teach wherein the segmenting of the plurality of MRI images further includes: adjusting at least one of the first zone of the prostate or the second zone of the prostate based on user input (see para [0029]; “model generation module may modify or otherwise adjust histopathology data 24 to “fit” the corresponding medical imaging training data”).
Regarding claim 7, the rejection of claim 1 is incorporated herein.
Metzger et al. in the combination further teach wherein each of the plurality of distinct regions includes a predetermined dimension (see para [0005]; “output, based on the respective CBS for each voxel of the imaged tissue”, Note: voxels have a fixed size, i.e., a predetermined region dimension).
Regarding claim 8, the rejection of claim 1 is incorporated herein.
Metzger et al. in the combination further teach wherein a prostate-specific antigen test determines the threshold value of a threshold tuning (see para [0030]; “the model may represent an equation that, when applied to values of parameters in medical imaging data 18, results in a score (e.g., a Composite Biomarker Score or CBS) that indicates whether or not the corresponding tissue is likely to be cancer. For instance, model 16 may, in some examples, be a set of coefficients for respective parameters and/or threshold CBS values. For each value of the parameter maps for imaged tissue, the parameter values may be plugged into the equation, and the resulting CBS value may be evaluated against the threshold CBS values”).
Regarding claim 9, the rejection of claim 1 is incorporated herein.
Metzger et al. in the combination further teach wherein the denoising of the single, combined image further includes: in response to determining no region image intensity value for each of the plurality of regions is equal to or greater than the first image intensity threshold or the second image intensity threshold, adjusting at least one of: the first image intensity threshold for the first zone of the prostate based on the first image intensity spectrum, or the second image intensity threshold for the second zone of the prostate based on the second image intensity spectrum; and comparing the region image intensity value for each of the plurality of regions to the adjusted first image intensity threshold or the adjusted second image intensity threshold (see para [0030]; “For each value of the parameter maps for imaged tissue, the parameter values may be plugged into the equation, and the resulting CBS value may be evaluated against the threshold CBS values. If the threshold values are satisfied, analysis system 10 may indicate that the corresponding tissue is likely cancer”, see also claim 4; “wherein determining the locations of the CBS map that correspond to CBS values that satisfy the specified threshold comprises determining locations of the CBS map that correspond to CBS values that are greater than a first CBS value and less than a second CBS value”, Note: changing threshold when no suprathreshold region appears is an obvious optimization).
Regarding claim 10, claim 10 recites limitations substantially similar to those of claim 1, and the rejection of claim 1 applies equally here.
Regarding claim 11, the rejection of claim 10 is incorporated herein.
Lay et al. in the combination further teach probabilities for respective risk categories associated with the detected lesion, or options for modifications for the report (see para [0017]; “The red regions in the “Probability Map” column denote higher predicted probability of cancer while green and blue denote low predicted probability of cancer”).
Metzger et al. in the combination further teach generating a report based on the identifying of the portion of the prostate including the detected lesion, the generated report including at least one of: visual data relating to the identified portion of the prostate including the detected lesion (see para [0005]; “The processor is further configured to generate and output, based on the respective CBS for each voxel of the imaged tissue, a visual indication of whether the corresponding imaged tissue is predicted to include cancer. The indication may, for example, comprise an overlay image for the medical imaging data for the imaged tissue, the overlay including regions of the predicted cancer”).
Regarding claim 12, the rejection of claim 11 is incorporated herein.
Metzger et al. in the combination further teach wherein the generated report further includes a set of anatomic coordinates corresponding to the location of the detected lesion on the prostate, the set of anatomic coordinates based on the anatomical data relating to the prostate (see para [0006]; “generate a respective Composite Biomarker Score (CBS) for each voxel of the imaged tissue”).
Regarding claim 13, the rejection of claim 10 is incorporated herein.
Metzger et al. in the combination further teach wherein the segmenting of the plurality of MRI images further includes: adjusting at least one of the first zone of the prostate or the second zone of the prostate based on user input (see para [0029]; “model generation module may modify or otherwise adjust histopathology data 24 to “fit” the corresponding medical imaging training data”).
Regarding claim 15, the rejection of claim 10 is incorporated herein.
Metzger et al. in the combination further teach predetermining a dimension for each of the plurality of distinct regions (see para [0005]; “output, based on the respective CBS for each voxel of the imaged tissue”, Note: voxels have a fixed size, i.e., a predetermined region dimension).
Regarding claim 16, the rejection of claim 10 is incorporated herein.
Metzger et al. in the combination further teach wherein determining the threshold value of a threshold tuning is based upon a prostate-specific antigen test (see para [0030]; “the resulting CBS value may be evaluated against the threshold CBS values”, Note: the region image intensity value corresponds to the CBS).
Regarding claim 17, the rejection of claim 10 is incorporated herein.
Metzger et al. in the combination further teach wherein the denoising of the single, combined image further includes: in response to determining no region image intensity value for each of the plurality of regions is equal to or greater than the first image intensity threshold or the second image intensity threshold, adjusting at least one of: the first image intensity threshold for the first zone of the prostate based on the first image intensity spectrum, or the second image intensity threshold for the second zone of the prostate based on the second image intensity spectrum; and comparing the region image intensity value for each of the plurality of regions to the adjusted first image intensity threshold or the adjusted second image intensity threshold (see para [0030]; “For each value of the parameter maps for imaged tissue, the parameter values may be plugged into the equation, and the resulting CBS value may be evaluated against the threshold CBS values. If the threshold values are satisfied, analysis system 10 may indicate that the corresponding tissue is likely cancer”, see also claim 4; “wherein determining the locations of the CBS map that correspond to CBS values that satisfy the specified threshold comprises determining locations of the CBS map that correspond to CBS values that are greater than a first CBS value and less than a second CBS value”, Note: changing threshold when no suprathreshold region appears is an obvious optimization).
Regarding claim 18, claim 18 recites limitations substantially similar to those of claim 1, and the rejection of claim 1 applies equally here (see also para [0033]; “computer-implemented device 500 includes a processor 510 that is operable to execute program instructions or software, causing the computer to perform various methods or tasks” of Metzger et al.).
Regarding claim 19, the rejection of claim 18 is incorporated herein.
Lay et al. in the combination further teach wherein the program instructions executed by the processor cause the computing device to further generate: probabilities for respective risk categories associated with the detected lesion, or options for modifications for the report (see para [0017]; “The red regions in the “Probability Map” column denote higher predicted probability of cancer while green and blue denote low predicted probability of cancer”).
Metzger et al. in the combination further teach generate a report based on the identifying of the portion of the prostate including the detected lesion, the generated report including at least one of: visual data relating to the identified portion of the prostate including the detected lesion (see para [0005]; “The processor is further configured to generate and output, based on the respective CBS for each voxel of the imaged tissue, a visual indication of whether the corresponding imaged tissue is predicted to include cancer. The indication may, for example, comprise an overlay image for the medical imaging data for the imaged tissue, the overlay including regions of the predicted cancer”).
Claim 5 is rejected under 35 U.S.C. 103 as being unpatentable over Lay et al. in view of Metzger et al. as applied to claim 1 above, and further in view of Punchard et al. (US 8536870 B2).
Regarding claim 5, the rejection of claim 1 is incorporated herein. The combination of Lay et al. and Metzger et al. as a whole does not teach wherein the plurality of MRI images include: a low intensity MRI-image ranging from 0.7 T - 1.2 T, a mid-intensity MRI-image ranging from 1.3 T - 1.9 T; and a high-intensity MRI-image ranging from 2.0 T - 3.0 T.
In the same field of endeavor, Punchard et al. teach wherein the plurality of MRI images include: a low intensity MRI-image ranging from 0.7 T - 1.2 T, a mid-intensity MRI-image ranging from 1.3 T - 1.9 T; and a high-intensity MRI-image ranging from 2.0 T - 3.0 T (see col. 4, lines 32-35; “The range of magnetic field strengths typically used for clinical in-vivo imaging in superconductive magnets is from about 0.5 T to 3.0 T. Open structure magnets usually have a magnetic field strength in the range from about 0.2 T to 1.2 T”). Accordingly, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the method for prostate cancer computer-aided diagnosis (CAD) systems using a Random Forest classifier to detect prostate cancer of Lay et al. in view of the predictive prostate cancer visualizations using quantitative multiparametric magnetic resonance imaging (mpMRI) models of Metzger et al. and the method for correcting high-degree and high-order magnetic field inhomogeneities over a limited examination zone in a magnetic resonance of Punchard et al. in order to improve the quality of in-vivo magnetic resonance spectroscopy and imaging of any desired anatomic site (see col. 4, lines 32-35).
Claims 6 and 14 are rejected under 35 U.S.C. 103 as being unpatentable over Lay et al. and Metzger et al. in view of Punchard et al. as applied to claims 1 and 5 above, and further in view of Feldman et al. (US 20100329529 A1) and Ochiai et al. (NPL: “Diffusion-weighted whole-body imaging with background body signal suppression (DWIBS): features and potential applications in oncology”).
Regarding claim 6, the rejection of claim 5 is incorporated herein. The combination of Lay et al., Metzger et al. and Punchard et al. as a whole does not teach wherein the collapsing of the plurality of MRI images further includes: aligning each of the low intensity MRI-image, the mid-intensity MRI image, and the high-intensity MRI-image based on the defined first zone and the second zone in each of the low intensity MRI-image, the mid-intensity MRI image, and the high-intensity MRI-image, normalizing each of the aligned low intensity MRI-image, the mid-intensity MRI image, and the high-intensity MRI-image; segmenting the aligned low intensity MRI-image, the mid-intensity MRI image, and the high-intensity MRI-image, inverting a high-b value for each of the aligned low intensity MRI-image, the mid-intensity MRI image, and the high-intensity MRI-image; and inverting the segmented and aligned low intensity MRI-image, the mid-intensity MRI image, and the high-intensity MRI-image.
In the same field of endeavor, Feldman et al. teach wherein the collapsing of the plurality of MRI images further includes: aligning each of the low intensity MRI-image, the mid-intensity MRI image, and the high-intensity MRI-image based on the defined first zone and the second zone in each of the low intensity MRI-image, the mid-intensity MRI image, and the high-intensity MRI-image (see para [0264]; “BiasCorrector algorithm was used to correct each of the original 2D MR images, .zeta..sup.T2 and .zeta..sup.D,5, for bias field inhomogeneity. Intensity standardization was then used to correct for the non-linearity in MR image intensities on .zeta..sup.T2 alone to ensure that the T2-w intensities have the same tissue-specific meaning across images within the same study, as well as across different patient studies. All data was analyzed at the DCE-MRI resolution, with appropriate alignment being done as described”); normalizing each of the aligned low intensity MRI-image, the mid-intensity MRI image, and the high-intensity MRI-image (see para [0006]; “correcting bias field inhomogeneity and non-linear MR intensity artifacts, thereby creating a corrected T1-w or T2-w MR scene”); segmenting the aligned low intensity MRI-image, the mid-intensity MRI image, and the high-intensity MRI-image (see para [0006]; “an unsupervised method of segmenting regions on an in-vivo tissue (T1-w or T2-w or DCE) MRI ..correcting bias field inhomogeneity and non-linear MR intensity artifacts, thereby creating a corrected T1-w or T2-w MR scene; extracting image features from the T1-w or T2-w MR scene; embedding the extracted image features or inherent kinetic features or a combination thereof into a low dimensional space, … wherein the clustering is achieved by partitioning the features in the embedded space to disjointed regions, thereby creating classes and therefore segmenting the embedded space”). 
Accordingly, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the method for prostate cancer computer-aided diagnosis (CAD) systems using a Random Forest classifier to detect prostate cancer of Lay et al. in view of the predictive prostate cancer visualizations using quantitative multiparametric magnetic resonance imaging (mpMRI) models of Metzger et al., the method for correcting high-degree and high-order magnetic field inhomogeneities over a limited examination zone in a magnetic resonance of Punchard et al., and the computer-assisted diagnostics and classification of prostate cancer of Feldman et al. in order to increase specificity and sensitivity in the detection and classification of prostate cancer (see para [0264]). However, the combination of Lay et al., Metzger et al., Punchard et al., and Feldman et al. as a whole does not teach inverting a high-b value for each of the aligned low intensity MRI-image, the mid-intensity MRI image, and the high-intensity MRI-image; and inverting the segmented and aligned low intensity MRI-image, the mid-intensity MRI image, and the high-intensity MRI-image.
In the same field of endeavor, Ochiai et al. teaches inverting a high-b value for each of the aligned low intensity MRI-image, the mid-intensity MRI image, and the high-intensity MRI-image; and inverting the segmented and aligned low intensity MRI-image, the mid-intensity MRI image, and the high-intensity MRI-image (see Fig. 4; “Fig. 4 Coronal maximum intensity projection DWIBS image (inverted black-and-white gray scale)”, see also page 1941, right col., 1st para; “Inverting the gray scale of DWIBS images makes them resemble PET-like images (Figs. 4, 5). DWIBS can be performed on state-of-the-art MRI systems”). Accordingly, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the method for prostate cancer computer-aided diagnosis (CAD) systems using a Random Forest classifier to detect prostate cancer of Lay et al. and the predictive prostate cancer visualizations using quantitative multiparametric magnetic resonance imaging (mpMRI) models of Metzger et al. in view of the method for correcting high-degree and high-order magnetic field inhomogeneities over a limited examination zone in a magnetic resonance of Punchard et al. and the computer-assisted diagnostics and classification of prostate cancer of Feldman et al., and further in view of the diffusion-weighted whole-body imaging with background body signal suppression in oncology of Ochiai et al., in order to provide functional information and detection of malignant tumors (see Fig. 4).
Regarding claim 14, the rejection of claim 10 is incorporated herein.
Feldman et al. in the combination further teach wherein the collapsing of the plurality of MRI images further includes: aligning each of the low intensity MRI-image, the mid-intensity MRI image, and the high-intensity MRI-image based on the defined first zone and the second zone in each of the low intensity MRI-image, the mid-intensity MRI image, and the high-intensity MRI-image (see para [0264]; “BiasCorrector algorithm was used to correct each of the original 2D MR images, .zeta..sup.T2 and .zeta..sup.D,5, for bias field inhomogeneity. Intensity standardization was then used to correct for the non-linearity in MR image intensities on .zeta..sup.T2 alone to ensure that the T2-w intensities have the same tissue-specific meaning across images within the same study, as well as across different patient studies. All data was analyzed at the DCE-MRI resolution, with appropriate alignment being done as described”); normalizing each of the aligned low intensity MRI-image, the mid-intensity MRI image, and the high-intensity MRI-image (see para [0006]; “correcting bias field inhomogeneity and non-linear MR intensity artifacts, thereby creating a corrected T1-w or T2-w MR scene”); segmenting the aligned low intensity MRI-image, the mid-intensity MRI image, and the high-intensity MRI-image (see para [0006]; “an unsupervised method of segmenting regions on an in-vivo tissue (T1-w or T2-w or DCE) MRI ..correcting bias field inhomogeneity and non-linear MR intensity artifacts, thereby creating a corrected T1-w or T2-w MR scene; extracting image features from the T1-w or T2-w MR scene; embedding the extracted image features or inherent kinetic features or a combination thereof into a low dimensional space, … wherein the clustering is achieved by partitioning the features in the embedded space to disjointed regions, thereby creating classes and therefore segmenting the embedded space”).
In the same field of endeavor, Ochiai et al. teaches inverting a high-b value for each of the aligned low intensity MRI-image, the mid-intensity MRI image, and the high-intensity MRI-image; and inverting the segmented and aligned low intensity MRI-image, the mid-intensity MRI image, and the high-intensity MRI-image (see Fig. 4; “Fig. 4 Coronal maximum intensity projection DWIBS image (inverted black-and-white gray scale)”, see also page 1941, right col., 1st para; “Inverting the gray scale of DWIBS images makes them resemble PET-like images (Figs. 4, 5). DWIBS can be performed on state-of-the-art MRI systems”). Accordingly, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the method for prostate cancer computer-aided diagnosis (CAD) systems using a Random Forest classifier to detect prostate cancer of Lay et al. and the predictive prostate cancer visualizations using quantitative multiparametric magnetic resonance imaging (mpMRI) models of Metzger et al. in view of the method for correcting high-degree and high-order magnetic field inhomogeneities over a limited examination zone in a magnetic resonance of Punchard et al. and the computer-assisted diagnostics and classification of prostate cancer of Feldman et al., and further in view of the diffusion-weighted whole-body imaging with background body signal suppression in oncology of Ochiai et al., in order to provide functional information and detection of malignant tumors (see Fig. 4).
Conclusion
Any inquiry concerning this communication or earlier communications from the examiner should be directed to WINTA GEBRESLASSIE whose telephone number is (571)272-3475. The examiner can normally be reached Monday-Friday, 9:00-5:00.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Andrew Bee can be reached at 571-270-5180. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/WINTA GEBRESLASSIE/Examiner, Art Unit 2677