Prosecution Insights
Last updated: April 19, 2026
Application No. 18/912,570

METHOD FOR HARMONIZING MEDICAL IMAGES AND DEVICE FOR HARMONIZING MEDICAL IMAGES USING THE SAME

Non-Final OA: §102, §103, §112
Filed: Oct 10, 2024
Examiner: ANSARI, TAHMINA N
Art Unit: 2674
Tech Center: 2600 (Communications)
Assignee: Ontact Health Co. Ltd.
OA Round: 1 (Non-Final)
Grant Probability: 86% (Favorable)
OA Rounds: 1-2
To Grant: 2y 8m
With Interview: 99%

Examiner Intelligence

Grants 86% of resolved cases (above average)
Career Allow Rate: 86% (743 granted / 868 resolved; +23.6% vs TC avg)
Interview Lift: +17.9% for resolved cases with interview (strong)
Typical Timeline: 2y 8m avg prosecution; 33 currently pending
Career History: 901 total applications across all art units
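As a quick sanity check (not part of the source data), the headline figures above can be reproduced from the raw career counts; the snippet assumes the "+23.6% vs TC avg" delta is expressed in percentage points:

```python
# Reproduce the dashboard's headline allowance figures from raw counts.
granted = 743
resolved = 868

allow_rate = 100 * granted / resolved   # career allowance rate, percent
print(round(allow_rate, 1))             # 85.6 -> displayed as 86%

# Reading "+23.6% vs TC avg" as percentage points implies a TC average of:
tc_avg = allow_rate - 23.6
print(round(tc_avg, 1))                 # 62.0
```

So the displayed 86% is the rounded 743/868 rate, and the comparison baseline works out to roughly a 62% Tech Center average.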

Statute-Specific Performance

§101: 12.2% (-27.8% vs TC avg)
§103: 40.4% (+0.4% vs TC avg)
§102: 22.6% (-17.4% vs TC avg)
§112: 10.5% (-29.5% vs TC avg)
Note: Tech Center averages are estimates. Based on career data from 868 resolved cases.
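A small consistency check (my own sketch, again assuming the "vs TC avg" deltas are percentage points) shows that all four statute rows imply the same Tech Center average:

```python
# Per-statute rate and signed delta vs the Tech Center average,
# as shown in the table above.
stats = {"101": (12.2, -27.8), "103": (40.4, +0.4),
         "102": (22.6, -17.4), "112": (10.5, -29.5)}

for statute, (rate, delta) in stats.items():
    # rate - delta recovers the baseline the delta was measured against
    print(f"§{statute}: TC avg ≈ {rate - delta:.1f}%")
```

Every row lands on 40.0%, which suggests the dashboard compares each statute against a single flat TC-average estimate rather than per-statute baselines.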

Office Action

Rejection bases: §102, §103, §112
DETAILED ACTION

Notice of Pre-AIA or AIA Status

Claims 1-20 are pending in this application. The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA. In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

Priority

Acknowledgment is made of applicant's claim for foreign priority based on an application filed in the Republic of Korea on October 5, 2023. It is noted, however, that applicant has NOT filed a certified copy of the KR10-2023-0132743 application as required by 37 CFR 1.55. Acknowledgment is also made of applicant's claim for foreign priority based on an application filed in the Republic of Korea on October 4, 2024; applicant has filed a certified copy of the KR10-2024-0135141 application as required by 37 CFR 1.55.

Specification

The title of the invention is not descriptive. A new title is required that is clearly indicative of the invention to which the claims are directed.

35 U.S.C. § 112 Sixth Paragraph - Claim Interpretation

The following is a quotation of 35 U.S.C. 112(f):

(f) Element in Claim for a Combination. - An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.

The following is a quotation of pre-AIA 35 U.S.C. 112, sixth paragraph:

An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.

The claims in this application are given their broadest reasonable interpretation using the plain meaning of the claim language in light of the specification as it would be understood by one of ordinary skill in the art. The broadest reasonable interpretation of a claim element (also commonly referred to as a claim limitation) is limited by the description in the specification when 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is invoked.

As explained in MPEP § 2181, subsection I, claim limitations that meet the following three-prong test will be interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph:

(A) the claim limitation uses the term "means" or "step" or a term used as a substitute for "means" that is a generic placeholder (also called a nonce term or a non-structural term having no specific structural meaning) for performing the claimed function;

(B) the term "means" or "step" or the generic placeholder is modified by functional language, typically, but not always, linked by the transition word "for" (e.g., "means for") or another linking word or phrase, such as "configured to" or "so that"; and

(C) the term "means" or "step" or the generic placeholder is not modified by sufficient structure, material, or acts for performing the claimed function.

Use of the word "means" (or "step") in a claim with functional language creates a rebuttable presumption that the claim limitation is to be treated in accordance with 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. The presumption that the claim limitation is interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is rebutted when the claim limitation recites sufficient structure, material, or acts to entirely perform the recited function.

Absence of the word "means" (or "step") in a claim creates a rebuttable presumption that the claim limitation is not to be treated in accordance with 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. The presumption that the claim limitation is not interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is rebutted when the claim limitation recites function without reciting sufficient structure, material or acts to entirely perform the recited function.

Claim limitations in this application that use the word "means" (or "step") are being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, except as otherwise indicated in an Office action. Conversely, claim limitations in this application that do not use the word "means" (or "step") are not being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, except as otherwise indicated in an Office action.

This application includes one or more claim limitations that do not use the word "means," but are nonetheless being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, because the claim limitation(s) uses a generic placeholder that is coupled with functional language without reciting sufficient structure to perform the recited function and the generic placeholder is not preceded by a structural modifier. Such claim limitation(s) are: "unit" in claims 11-20.

Because this/these claim limitation(s) is/are being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, it/they is/are being interpreted to cover the corresponding structure described in the specification as performing the claimed function, and equivalents thereof. If applicant does not intend to have this/these limitation(s) interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, applicant may: (1) amend the claim limitation(s) to avoid it/them being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph (e.g., by reciting sufficient structure to perform the claimed function); or (2) present a sufficient showing that the claim limitation(s) recite(s) sufficient structure to perform the claimed function so as to avoid it/them being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph.

Claim Rejections - 35 USC § 112

The following is a quotation of the second paragraph of 35 U.S.C. 112:

The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.

Claims 1-20 are rejected under 35 U.S.C. 112, second paragraph, as being indefinite for the following reasons. Independent Claim 1, line 1, is directed to "A method for harmonizing a medical image…" and Claim 11, line 1, is directed to "A device for harmonizing medical images…", and the use of "harmonizing a medical image" or "harmonizing medical images" leads to indefiniteness. The following limitations are recited in a manner that leads to indefiniteness, as the words of the claim are not consistent with the broadest reasonable interpretation and the "plain meaning of a term" as described in MPEP § 2111.01. In accordance with the "plain meaning" of a claim, the limitations are to be interpreted in accordance with "the ordinary and customary meaning given to the term by those of ordinary skill in the art at the relevant time". In a cursory search it appears that the term "harmonizing" as used in the art is more narrowly directed to a resultant output from a Fourier transform, and it does not appear to be consistent with the manner in which the applicant is claiming this limitation or the overall description of it in the written disclosure.
The written disclosure uses "harmonizing" as a term for a "standardization" process or a filtering process as described in [0057]-[0059], and loosely described as a convolution filter in [0127] and [0136]. Applicant is requested to amend the claims to be more consistent with the plain meaning that is intended for the scope of the claim and to rely on specific features from the disclosure that would facilitate examination if applicant intends a different scope.

Claim Rejections - 35 USC § 102

The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:

A person shall be entitled to a patent unless -

(a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.

(a)(2) the claimed invention was described in a patent issued under section 151, or in an application for patent published or deemed published under section 122(b), in which the patent or application, as the case may be, names another inventor and was effectively filed before the effective filing date of the claimed invention.

Claims 1, 11 and 17 are rejected under 35 U.S.C. 102(a)(1)/(a)(2) as being anticipated by Dhaouadi et al. (US PGPub US2023/0267604A1, filed Feb. 24, 2023 with priority dating to Feb. 24, 2022), hereby referred to as "Dhaouadi".

Consider Claims 1, 11 and 17. Dhaouadi teaches:

1. (Currently Amended) A method for harmonizing a medical image implemented by a processor, comprising: / 11. (Currently Amended) A device for harmonizing medical images, comprising: / 17. (Currently Amended) A device for harmonizing medical images, comprising:

(Dhaouadi: abstract, A computer system that analyzes medical-imaging data to assess a risk for prostate cancer is described. The computer system may compute features (including intensity, texture and/or spatial features) based at least in part on the medical-imaging data. Then, using a pretrained predictive model, the computer system may determine cancer predictions on a voxel-by-voxel basis, based at least in part on the computed features. Note that the pretrained predictive model may include a boosted parallel random forests (BPRF) model with a boosted ensemble of bagging ensemble models, where a given bagging ensemble model includes an ensemble of random forests models. Next, the computer system may provide feedback based on the cancer predictions for the voxels. For a given voxel, the feedback may include a cancer prediction and a location. In some embodiments, for the given voxel, the feedback may include an aggressiveness of the predicted cancer and/or a recommended therapy. [0044]-[0049]; Figure 1)

1. receiving the medical image for an object; and / 11. a communication unit configured to receive a medical image for an object; / 11. and a processor functionally connected to the communication unit, wherein the processor is configured to / 17. a communication unit configured to receive a medical image for an object;

(Dhaouadi: [0048] Furthermore, memory modules 116 may access stored data or information in memory that local in computer system 100 and/or that is remotely located from computer system 100. Notably, in some embodiments, one or more of memory modules 116 may access stored measurement results in the local memory, such as MRI data for one or more individuals (which, for multiple individuals, may include cases and controls or disease and healthy populations). Alternatively or additionally, in other embodiments, one or more memory modules 116 may access, via one or more of communication modules 112, stored measurement results in the remote memory in computer 124, e.g., via network 120 and network 122. Note that network 122 may include: the Internet and/or an intranet. In some embodiments, the measurement results are received from one or more measurement systems 126 (such as MRI scanners) via network 120 and network 122 and one or more of communication modules 112. Thus, in some embodiments at least some of the measurement results may have been received previously and may be stored in memory, while in other embodiments at least some of the measurement results may be received in real-time from the one or more measurement systems 126. [0049] While FIG. 1 illustrates computer system 100 at a particular location, in other embodiments at least a portion of computer system 100 is implemented at more than one location. Thus, in some embodiments, computer system 100 is implemented in a centralized manner, while in other embodiments at least a portion of computer system 100 is implemented in a distributed manner.)

1. harmonizing the received medical image to acquire a filtered medical image by using a convolutional filter based on a predictive model trained to output a reconstructed image using the medical image as input, / 11. perform harmonization on the received medical image to acquire a filtered medical image by using a convolutional filter based on a predictive model trained to output a reconstructed image using the medical image as input, / 17. a processor functionally connected to the communication unit, wherein the processor is configured to perform harmonization on the received medical image to acquire a filtered medical image by using a convolutional filter based on a predictive model trained to output a reconstructed image using the medical image as input

(Dhaouadi: [0076] If all the series of the study pass quality filters, the image volumes for the T2W, DWI and ADC may be passed to the computer-aided detection and diagnosis operation in the analysis pipeline. This is shown in FIG. 6, which presents a drawing illustrating an example of computer-aided detection and diagnosis in an analysis pipeline. Notably, during the computer-aided detection and diagnosis, the computer system may segment the volume corresponding to the prostate organ, producing a prostate mask volume. Then, the computer system may segment the volume corresponding to the sub-region of the prostate called the central gland, producing a central gland mask volume. For example, the segmenting may be performed using two 3D convolutional neural networks. Notably, a first 3D convolutional neural network may detect the prostate at low resolution in a T2W series, and a second 3D convolutional neural network may then perform segmentation of a border of the prostate with higher accuracy (such as a variation on a deep residual network or ResNET). This segmentation of the border may identify a peripheral zone of the prostate and a transition (central) zone of the prostate.)

1. using the predictive model, generating a mask for a region of interest (ROI) in the filtered medical image, and / 11. and generate a mask for a region of interest in the filtered medical image / 17. wherein the convolutional filter corresponds to a first convolution layer of the predictive model,

(Dhaouadi: [0078] Next, the computer system may normalize image volume intensities to a predefined range (e.g., a minimum and a maximum intensity). Furthermore, the computer system may resize image volumes (e.g., normalized T2W, normalized DWI, normalized ADC, prostate mask, and central gland mask) to a predefined size. For example, the centroid of the prostate may be calculated and used to define a 140 mm FOV region (edge-to-edge) centered on the prostate organ. Note that the volumes may be resampled to this FOV to a voxel grid of 512×512 in the XY plane. The voxel spacing in the Z dimension may be determined by the T2W volume. [0079] Additionally, the computer system may calculate image features using the image volumes (e.g., normalized T2W, normalized DWI, normalized ADC, prostate mask, and central gland mask) using radiomics techniques, such as an intensity of a voxel or a function applied to the neighborhood of voxels surrounding a central voxel. Note that the image features may be calculated from individual image voxels (e.g., voxel intensity) and/or groups of voxels surrounding a reference voxel (e.g., mean voxel intensity, correlation, contrast, entropy, and/or another characteristic of the groups of voxels). Note that the calculated features may be stored as an image-feature data frame and/or the data frame may store the voxel index and image feature values for voxels corresponding to the prostate)

1. extracting features based on the region of interest or the mask, wherein the convolutional filter corresponds to a first convolutional layer of the predictive model. / 11. and extract features based on the region of interest or the mask, wherein the convolutional filter corresponds to a first convolution layer of the predictive model. / 17. wherein the predictive model is an artificial neural network model based on a masked autoencoder configured to perform self-supervised learning on unique features of the medical image.

(Dhaouadi: [0058] Then, the computer system may compute features (operation 212) associated with voxels corresponding to a prostate of the individual based at least in part on the medical-imaging data, where, for a given voxel, the features include: intensity features, texture features, and/or a spatial feature corresponding to a distance from a peripheral zone of the prostate to a transition zone of the prostate. Note that the intensity features may include radiomics features and the texture features may include Haralick texture features (which may be a subset of the radiomics features). For example, the Haralick texture features may include correlation. [equation image omitted] [0059] In some embodiments, there may be 64 radiomics features associated with each pixel or voxel associated with the prostate, including: 4 intensity features and 13 texture features (such as smooth versus grainy), and the spatial feature may include a Euclidean distance to the central gland of the prostate (such as a distance to a border of the transition zone). Additionally, for the DWI series, the image channel associated with the highest b-value may be used for feature extraction. [0061] Next, the computer system may provide the feedback (operation 216) based at least in part on the cancer predictions for the voxels, where, for the given voxel, the feedback includes: a cancer prediction and a location (which may be used to guide treatment, e.g., to provide a treatment recommendation). Moreover, the feedback may include an image indicating first regions of the prostate where the cancer predictions exceed a first threshold value. Furthermore, the image may indicate second regions of the prostate where the cancer predictions are less than the first threshold value and greater than a second threshold value. Additionally, the feedback may include an image with an at least partially transparent 3D rendering of the prostate and one or more color-coded regions in the 3D rendering corresponding to cancer predictions exceeding a threshold value. Note that, for the given voxel, the feedback may indicate an aggressiveness of predicted cancer based at least in part on the cancer predictions. In some embodiments, the feedback may include or correspond to a recommended therapy based at least in part on the cancer predictions.
[0089] While the preceding discussion illustrated the analysis techniques with a BPRF model, more generally the analysis techniques may use a predictive model that is pretrained or predetermined using a machine-learning technique (such as a supervised learning technique, an unsupervised learning technique a neural network) and a training dataset. For example, the predictive model may include a classifier or a regression model that was trained using: random forests, a support vector machine technique, a classification and regression tree technique, logistic regression, LASSO, linear regression, a neural network technique (such as deep learning, a convolutional neural network technique, an autoencoder neural network or another type of neural network technique), a boosting technique, a bagging technique, another ensemble learning technique another linear or nonlinear supervised-learning) Claim Rejections - 35 USC § 103 The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action: A patent may not be obtained though the invention is not identically disclosed or described as set forth in section 102 of this title, if the differences between the subject matter sought to be patented and the prior art are such that the subject matter as a whole would have been obvious at the time the invention was made to a person having ordinary skill in the art to which said subject matter pertains. Patentability shall not be negatived by the manner in which the invention was made. Claims 1-20 are rejected under 35 U.S.C. 103 as being unpatentable over Dhaouadi et al. (US PGPub US2023/0267604A1, filed Feb. 24, 2023 with priority dating to Feb. 24, 2022), hereby referred to as “Dhaouadi”, in view of Muehlberg et al. (US PGPub US2022/0101070A1, hereby referred to as “Muehlberg”). Consider Claims 1, 11 and 17. Dhaouadi teaches: 1. 
(Currently Amended) A method for harmonizing a medical image implemented by a processor, comprising: / 11. (Currently Amended) A device for harmonizing medical images, comprising: / 17. (Currently Amended) A device for harmonizing medical images, comprising: (Dhaouadi: abstract, A computer system that analyzes medical-imaging data to assess a risk for prostate cancer is described. The computer system may compute features (including intensity, texture and/or spatial features) based at least in part on the medical-imaging data. Then, using a pretrained predictive model, the computer system may determine cancer predictions on a voxel-by-voxel basis, based at least in part on the computed features. Note that the pretrained predictive model may include a boosted parallel random forests (BPRF) model with a boosted ensemble of bagging ensemble models, where a given bagging ensemble model includes an ensemble of random forests models. Next, the computer system may provide feedback based on the cancer predictions for the voxels. For a given voxel, the feedback may include a cancer prediction and a location. In some embodiments, for the given voxel, the feedback may include an aggressiveness of the predicted cancer and/or a recommended therapy.[0044]-[0049] Figure 1) 1. receiving the medical image for an object; and / 11. a communication unit configured to receive a medical image for an object; / 11. and a processor functionally connected to the communication unit, wherein the processor is configured to / 17. a communication unit configured to receive a medical image for an object; (Dhaouadi: [0048] Furthermore, memory modules 116 may access stored data or information in memory that local in computer system 100 and/or that is remotely located from computer system 100. 
Notably, in some embodiments, one or more of memory modules 116 may access stored measurement results in the local memory, such as MRI data for one or more individuals (which, for multiple individuals, may include cases and controls or disease and healthy populations). Alternatively or additionally, in other embodiments, one or more memory modules 116 may access, via one or more of communication modules 112, stored measurement results in the remote memory in computer 124, e.g., via network 120 and network 122. Note that network 122 may include: the Internet and/or an intranet. In some embodiments, the measurement results are received from one or more measurement systems 126 (such as MRI scanners) via network 120 and network 122 and one or more of communication modules 112. Thus, in some embodiments at least some of the measurement results may have been received previously and may be stored in memory, while in other embodiments at least some of the measurement results may be received in real-time from the one or more measurement systems 126. [0049] While FIG. 1 illustrates computer system 100 at a particular location, in other embodiments at least a portion of computer system 100 is implemented at more than one location. Thus, in some embodiments, computer system 100 is implemented in a centralized manner, while in other embodiments at least a portion of computer system 100 is implemented in a distributed manner.) 1. harmonizing the received medical image to acquire a filtered medical image by using a convolutional filter based on a predictive model trained to output a reconstructed image using the medical image as input, / 11. perform harmonization on the received medical image to acquire a filtered medical image by using a convolutional filter based on a predictive model trained to output a reconstructed image using the medical image as input, / 17. 
a processor functionally connected to the communication unit, wherein the processor is configured to perform harmonization on the received medical image to acquire a filtered medical image by using a convolutional filter based on a predictive model trained to output a reconstructed image using the medical image as input (Dhaouadi: [0076] If all the series of the study pass quality filters, the image volumes for the T2W, DWI and ADC may be passed to the computer-aided detection and diagnosis operation in the analysis pipeline. This is shown in FIG. 6 , which presents a drawing illustrating an example of computer-aided detection and diagnosis in an analysis pipeline. Notably, during the computer-aided detection and diagnosis, the computer system may segment the volume corresponding to the prostate organ, producing a prostate mask volume. Then, the computer system may segment the volume corresponding to the sub-region of the prostate called the central gland, producing a central gland mask volume. For example, the segmenting may be performed using two 3D convolutional neural networks. Notably, a first 3D convolutional neural network may detect the prostate at low resolution in a T2W series, and a second 3D convolutional neural network may then perform segmentation of a border of the prostate with higher accuracy (such as a variation on a deep residual network or ResNET). This segmentation of the border may identify a peripheral zone of the prostate and a transition (central) zone of the prostate.) 1. using the predictive model, generating a mask for a region of interest (ROI) in the filtered medical image, and / 11. and generate a mask for a region of interest in the filtered medical image / 17. ,wherein the convolutional filter corresponds to a first convolution layer of the predictive model, (Dhaouadi: [0078] Next, the computer system may normalize image volume intensities to a predefined range (e.g., a minimum and a maximum intensity). 
Furthermore, the computer system may resize image volumes (e.g., normalized T2W, normalized DWI, normalized ADC, prostate mask, and central gland mask) to a predefined size. For example, the centroid of the prostate may be calculated and used to define a 140 mm FOV region (edge-to-edge) centered on the prostate organ. Note that the volumes may be resampled to this FOV to a voxel grid of 512×512 in the XY plane. The voxel spacing in the Z dimension may be determined by the T2W volume. [0079] Additionally, the computer system may calculate image features using the image volumes (e.g., normalized T2W, normalized DWI, normalized ADC, prostate mask, and central gland mask) using radiomics techniques, such as an intensity of a voxel or a function applied to the neighborhood of voxels surrounding a central voxel. Note that the image features may be calculated from individual image voxels (e.g., voxel intensity) and/or groups of voxels surrounding a reference voxel (e.g., mean voxel intensity, correlation, contrast, entropy, and/or another characteristic of the groups of voxels). Note that the calculated features may be stored as an image-feature data frame and/or the data frame may store the voxel index and image feature values for voxels corresponding to the prostate) 1. extracting features based on the region of interest or the mask, wherein the convolutional filter corresponds to a first convolutional layer of the predictive model. / 11. and extract features based on the region of interest or the mask, wherein the convolutional filter corresponds to a first convolution layer of the predictive model. / 17. wherein the predictive model is an artificial neural network model based on a masked autoencoder configured to perform self-supervised learning on unique features of the medical image. 
(Dhaouadi: [0058] Then, the computer system may compute features (operation 212) associated with voxels corresponding to a prostate of the individual based at least in part on the medical-imaging data, where, for a given voxel, the features include: intensity features, texture features, and/or a spatial feature corresponding to a distance from a peripheral zone of the prostate to a transition zone of the prostate. Note that the intensity features may include radiomics features and the texture features may include Haralick texture features (which may be a subset of the radiomics features). For example, the Haralick texture features may include correlation PNG media_image1.png 497 458 media_image1.png Greyscale [0059] In some embodiments, there may be 64 radiomics features associated with each pixel or voxel associated with the prostate, including: 4 intensity features and 13 texture features (such as smooth versus grainy), and the spatial feature may include a Euclidean distance to the central gland of the prostate (such as a distance to a border of the transition zone). Additionally, for the DWI series, the image channel associated with the highest b-value may be used for feature extraction. [0061] Next, the computer system may provide the feedback (operation 216) based at least in part on the cancer predictions for the voxels, where, for the given voxel, the feedback includes: a cancer prediction and a location (which may be used to guide treatment, e.g., to provide a treatment recommendation). Moreover, the feedback may include an image indicating first regions of the prostate where the cancer predictions exceed a first threshold value. Furthermore, the image may indicate second regions of the prostate where the cancer predictions are less than the first threshold value and greater than a second threshold value. 
Additionally, the feedback may include an image with an at least partially transparent 3D rendering of the prostate and one or more color-coded regions in the 3D rendering corresponding to cancer predictions exceeding a threshold value. Note that, for the given voxel, the feedback may indicate an aggressiveness of predicted cancer based at least in part on the cancer predictions. In some embodiments, the feedback may include or correspond to a recommended therapy based at least in part on the cancer predictions. [0089] While the preceding discussion illustrated the analysis techniques with a BPRF model, more generally the analysis techniques may use a predictive model that is pretrained or predetermined using a machine-learning technique (such as a supervised learning technique, an unsupervised learning technique and/or a neural network) and a training dataset. For example, the predictive model may include a classifier or a regression model that was trained using: random forests, a support vector machine technique, a classification and regression tree technique, logistic regression, LASSO, linear regression, a neural network technique (such as deep learning, a convolutional neural network technique, an autoencoder neural network or another type of neural network technique), a boosting technique, a bagging technique, another ensemble learning technique and/or another linear or nonlinear supervised-learning technique.) Even if Dhaouadi does not teach: “harmonizing the received medical image to acquire a filtered medical image by using a convolutional filter based on a predictive model trained to output a reconstructed image using the medical image as input” Muehlberg teaches: 1. (Currently Amended) A method for harmonizing a medical image implemented by a processor, comprising: / 11. (Currently Amended) A device for harmonizing medical images, comprising: / 17. 
(Currently Amended) A device for harmonizing medical images, comprising: (Muehlberg: abstract, A computer-implemented method is for providing radiomics-related information. In an embodiment, the computer-implemented method includes receiving radiomics-related data; determining, based on the radiomics-related data and an assistance algorithm, a function for processing the radiomics-related data; calculating, based on the radiomics-related data and the function for processing the radiomics-related data, the radiomics-related information; and providing the radiomics-related information. [0043]-[0047], [0070]) 1. receiving the medical image for an object; and / 11. a communication unit configured to receive a medical image for an object; 11. and a processor functionally connected to the communication unit, wherein the processor is configured to / 17. a communication unit configured to receive a medical image for an object; (Muehlberg: [0194] A short tutorial for the imaging technology and the correct study design can be shown in the graphical user interface. Then, to evaluate the added value of e.g. multispectral imaging, the assistance algorithm can automatically compare branches of single-energy images with branches of multispectral images with explanation of the procedure and why it is done. This leads to a deeper understanding of the procedure. An easy-to-publish and high-quality study can be therefore provided.) 1. harmonizing the received medical image to acquire a filtered medical image by using a convolutional filter based on a predictive model trained to output a reconstructed image using the medical image as input, / 11. perform harmonization on the received medical image to acquire a filtered medical image by using a convolutional filter based on a predictive model trained to output a reconstructed image using the medical image as input, / 17. 
a processor functionally connected to the communication unit, wherein the processor is configured to perform harmonization on the received medical image to acquire a filtered medical image by using a convolutional filter based on a predictive model trained to output a reconstructed image using the medical image as input (Muehlberg: [0084]-[0099], [0089] Most steps involve a hybrid approach: both rules and the data at hand are used to determine whether a particular statistical method or sequence of methods is suited or not. [0090] The assistance algorithm can be further based on techniques from meta learning, i.e. knowledge about learning procedures. [0091] In another embodiment, a first user interaction element for a graphical user interface is generated based on the radiomics-related data and the assistance algorithm, [0092] wherein a first user input is received based on the first user interaction element, [0093] wherein the function for processing the radiomics-related data is determined further based on the first user input. [0094] The first input can be related, for example, to at least one of the radiomics-related data, the function for processing the radiomics-related data or to the radiomics-related information. [0095] In another embodiment, a first set of candidate functions for processing the radiomics-related data is calculated based on the radiomics-related data and the assistance algorithm, [0096] wherein the first user interaction element is indicative of each candidate function of the first set of candidate functions for processing the radiomics-related data, [0097] wherein a first function of the first set of candidate functions is determined based on the first user input, [0098] wherein the function for processing the radiomics-related data is determined further based on the first function.
[0099] The candidate function of a set of candidate functions, in particular of the first set of candidate functions, can differ from each other with respect to a class and/or with respect to parameters. Candidate function may be in a generalized form needing further parametrization before being applicable to the radiomics-related data. In particular, the function for processing the radiomics-related data can be a specific form of the first function.) 1. using the predictive model, generating a region of interest (ROI) in the filtered medical image, and / 11. and generate a region of interest in the filtered medical image / 17. wherein the convolutional filter corresponds to a first convolution layer of the predictive model, (Muehlberg: [0116] One example is that a multivariate model may have too large confidence intervals of the odd's ratios, which indicates overfitting. Therefore, the backtracking mechanism jumps back to select a more stringent multiple comparison correction or alter the feature selection to a “rule of 50” instead of a “rule of 10”. This modification can also be reflected in the publication draft. [0117] In another embodiment, the function for processing the radiomics-related data is configured for generating a statistical diagram based on the radiomics-related data, wherein the radiomics-related information comprises the statistical diagram. [0118] For example, the assistance algorithm can support clinical researchers doing statistical analysis, in particular not to make typical layman's errors when using automation tools. [0119] The function for processing the radiomics-related data can be configured for generating high-quality publication-ready figures (e.g., segmentations and boxplots). [0120] As the assistance algorithm can have access to the segmentations and results, it can automatically generate high-quality and paper-ready figures of the segmentations, the analyzed body region and the results. 
Visualizations may show univariate results by boxplots (quantiles, outliers) or Bland-Altman plots and multivariate models by the Receiver-Operating Characteristic (ROC) curve with confidence interval.) 1. extracting features based on the region of interest, wherein the convolutional filter corresponds to a first convolutional layer of the predictive model. / 11. and extract features based on the region of interest, wherein the convolutional filter corresponds to a first convolution layer of the predictive model./ 17. , wherein the predictive model is an artificial neural network model based on a masked autoencoder configured to perform self-supervised learning on unique features of the medical image. (Muehlberg: [0121] For survival analysis, Kaplan-Meier curves are automatically generated with a separation in predicted high- and low-risk curve. Also, a CONSORT diagram may be generated based on the Data Assessment done before. Statistics for the given data frame are extracted automatically, e.g. how many entries are missing for each respective feature analyzed. [0122] Also an appropriate data imputation method is selected for the data given. The semantic of the given label is derived automatically, e.g. a label named “survival” with continuous values induces a survival analysis with censored data. Automated, the data can be curated after best practices. [0123] In particular, the assistance algorithm is configured for curation and/or imputation of the radiomics-related data. Calculations can be based on the curated and/or imputated data. Muehlberg: [0171] Any of the algorithms mentioned herein, in particular the assistance algorithm and/or the function for processing the radiomics-related data, can be based on one or more of the following architectures: convolutional neural network, deep belief network, random forest, deep residual learning, deep reinforcement learning, recurrent neural network, Siamese network, generative adversarial network or auto-encoder. 
In particular, the trained machine learning algorithm can be embodied as a deep learning algorithm, in particular as a deep convolutional neural network.) It would have been obvious before the effective filing date of the claimed invention to one of ordinary skill in the art to modify Dhaouadi’s method and system for automated cancer detection and diagnosis using a pre-trained predictive model with the incorporation of Muehlberg’s radiomics-related data and assistance algorithm, as they are both directed towards machine learning medical diagnostic models. The determination of obviousness is predicated upon the following findings: One skilled in the art would have been motivated to modify Dhaouadi in order to incorporate the processing of radiomics-related data using a CNN-based architecture for the medical machine learning algorithm described by Muehlberg. Furthermore, the prior art collectively includes each element claimed (though not all in the same reference), and one of ordinary skill in the art could have combined the elements in the manner explained above using known engineering design, interface and/or programming techniques, without changing a “fundamental” operating principle of Dhaouadi, while the teaching of Muehlberg continues to perform the same function as originally taught prior to being combined, in order to produce the repeatable and predictable result of using a CNN-based architecture and ensuring the use of radiomics-related data. It is for at least the aforementioned reasons that the examiner has reached a conclusion of obviousness with respect to the claim in question.

Consider Claims 2 and 12. 2. / 12. (Canceled).

Consider Claims 3 and 13. The combination of Dhaouadi and Muehlberg teaches: 3. (Currently Amended) The method according to claim 1 [[2]], wherein the extracting of the features includes: overlaying the mask on the filtered medical image to acquire an overlaid medical image; and extracting features for the overlaid medical image. / 13.
(Currently Amended) The device according to claim 11[[12]], wherein the processor is further configured to overlay the mask on the filtered medical image to acquire an overlaid medical image and extract features for the overlaid medical image. (Dhaouadi: [0082] Moreover, the computer system may take the image volumes and may generate a report that summarizes the results of the computer-aided detection and diagnosis. This is shown in FIG. 7, which presents a drawing illustrating an example of computer-aided report generation in an analysis pipeline. Notably, the computer system may calculate the approximate volume and dimensions of the prostate organ using the input prostate mask volume. Then, the computer system may detect regions of interest (ROIs) corresponding to regions with a high likelihood of cancer using the color overlay volume. Furthermore, the computer system may calculate the estimated volume and dimensions of a given ROI and may display or include them in the report. Next, the computer system may display or include in the report the estimated sub-region (such as a peripheral zone or central gland) where an ROI occurs. For a given ROI, the computer system may use the ROI centroid to define 2D slice images for the T2W series, the ADC series, and the color overlay. In some embodiments, these three slice images may be displayed or included in one row of the report. Additionally, for the given ROI, the computer system may produce and display or include a 3D rendering of the prostate organ with the 3D rendering of the ROI in the report. If no ROIs are detected, then the computer system may indicate in the report that there were no ROIs. Note that the report may include slices of the color overlay corresponding to the mid-gland, apex, and base levels of the prostate.
[0090] As discussed previously, the outputs of the analysis techniques may include: a segmentation of the whole prostate region, a segmentation of the central gland sub-region of the prostate, a color map overlay of 2D axial T2W images that represents cancer predictions. This information is shown in FIG. 13, which presents drawings illustrating examples of an axial T2-weighted slice of the prostate before and after post-processing. In FIG. 13, region 1310 indicates normal tissue (which may be represented by the color green), region 1312 indicates a region with the highest probability index (>0.62) of cancer (which may be represented by the color red), and the remaining region (1314) may include a region with a scaled probability of cancer (which may be represented by a color spectrum, with cool colors corresponding to the lowest probability of cancer). As described further below, note that the cancer predictions provided by the analysis techniques may be represented by a metric (which is sometimes referred to as a ‘ProstatID index’). Table 2 summarizes an example of a mapping from a probability index (such as the ProstatID index) to color.)

Consider Claims 4 and 14. The combination of Dhaouadi and Muehlberg teaches: 4. (Currently Amended) The method according to claim 1[[2]], wherein the extracting of the features includes: performing discretization of a grayscale based on pixel intensity of the region of interest; and extracting radiomics features and statistical features for the region of interest based on the discretization result. / 14. (Currently Amended) The device according to claim 11[[12]], wherein the processor is further configured to perform discretization of a grayscale based on pixel intensity of the region of interest, and extract radiomics features and statistical features for the region of interest based on the discretization result.
(Dhaouadi: [0058] Then, the computer system may compute features (operation 212) associated with voxels corresponding to a prostate of the individual based at least in part on the medical-imaging data, where, for a given voxel, the features include: intensity features, texture features, and/or a spatial feature corresponding to a distance from a peripheral zone of the prostate to a transition zone of the prostate. Note that the intensity features may include radiomics features and the texture features may include Haralick texture features (which may be a subset of the radiomics features). For example, the Haralick texture features may include correlation. [equation image omitted] [0059] In some embodiments, there may be 64 radiomics features associated with each pixel or voxel associated with the prostate, including: 4 intensity features and 13 texture features (such as smooth versus grainy), and the spatial feature may include a Euclidean distance to the central gland of the prostate (such as a distance to a border of the transition zone). Additionally, for the DWI series, the image channel associated with the highest b-value may be used for feature extraction. Muehlberg: [0071] The term code, as used above, may include software, firmware, and/or microcode, and may refer to programs, routines, functions, classes, data structures, and/or objects. Shared processor hardware encompasses a single microprocessor that executes some or all code from multiple modules. Group processor hardware encompasses a microprocessor that, in combination with additional microprocessors, executes some or all code from one or more modules. References to multiple microprocessors encompass multiple microprocessors on discrete dies, multiple microprocessors on a single die, multiple cores of a single microprocessor, multiple threads of a single microprocessor, or a combination of the above.
[0117] In another embodiment, the function for processing the radiomics-related data is configured for generating a statistical diagram based on the radiomics-related data, wherein the radiomics-related information comprises the statistical diagram. [0118] For example, the assistance algorithm can support clinical researchers doing statistical analysis, in particular not to make typical layman's errors when using automation tools. [0119] The function for processing the radiomics-related data can be configured for generating high-quality publication-ready figures (e.g., segmentations and boxplots). [0125])

Consider Claims 5 and 15. The combination of Dhaouadi and Muehlberg teaches: 5. (Currently Amended) The method according to claim 1 [[2]], wherein the extracting of the features includes extracting handcrafted features of at least one of morphological features, texture features, and pixel histogram-based features based on the region of interest or the mask. / 15. (Currently Amended) The device according to claim 11[[12]], wherein the processor is further configured to extract handcrafted features of at least one of morphological features, texture features, and pixel histogram-based features based on the region of interest or the mask. (Dhaouadi: [0095] FIG. 16 presents a drawing illustrating an example of color-coded regions in a T2W image of the prostate corresponding to cancer predictions. Notably, FIG. 16 shows an example of a ProstatID color map for a patient with a Gleason 7 lesion in the transition zone of the prostate. In FIG. 16, region 1610 (which may have a color of red according to Table 4) may indicate a region highly suspicious of cancer. Moreover, region 1612 may have a color of orange/red, region 1614 may have a color of yellow, and region 1616 may have a color of green.
Note that the ProstatID index and the corresponding colors in the color map (e.g., in Tables 2 and 4) may have been determined from the ROC analysis of 150 evaluation cases, which is described further below. [image omitted] [0096] In some embodiments, the ProstatID index may be used to assist a physician in or to automatically assign a PI-RADS score. For example, Table 4 also provides computer-generated recommendations for PI-RADS scoring based at least in part on: the ProstatID color-coded index, morphology, the size of the primary lesion (as measured in 2D and/or 3D), and/or whether there is extraprostatic extension/invasive behavior or other complicating factors. Muehlberg: [0118] For example, the assistance algorithm can support clinical researchers doing statistical analysis, in particular not to make typical layman's errors when using automation tools. [0119] The function for processing the radiomics-related data can be configured for generating high-quality publication-ready figures (e.g., segmentations and boxplots). [0120] As the assistance algorithm can have access to the segmentations and results, it can automatically generate high-quality and paper-ready figures of the segmentations, the analyzed body region and the results. Visualizations may show univariate results by boxplots (quantiles, outliers) or Bland-Altman plots and multivariate models by the Receiver-Operating Characteristic (ROC) curve with confidence interval. [0121] For survival analysis, Kaplan-Meier curves are automatically generated with a separation in predicted high- and low-risk curve. Also, a CONSORT diagram may be generated based on the Data Assessment done before. Statistics for the given data frame are extracted automatically, e.g. how many entries are missing for each respective feature analyzed.)

Consider Claims 6 and 16. The combination of Dhaouadi and Muehlberg teaches: 6.
(Currently Amended) The method according to claim 1 [[2]], further comprising: after the extracting of the features, predicting whether a target disease occurs based on the features. / 16. (Currently Amended) The device according to claim 11[[12]], wherein the processor is further configured to predict whether a target disease occurs based on the features. (Dhaouadi: [0095] FIG. 16 presents a drawing illustrating an example of color-coded regions in a T2W image of the prostate corresponding to cancer predictions. Notably, FIG. 16 shows an example of a ProstatID color map for a patient with a Gleason 7 lesion in the transition zone of the prostate. In FIG. 16, region 1610 (which may have a color of red according to Table 4) may indicate a region highly suspicious of cancer. Moreover, region 1612 may have a color of orange/red, region 1614 may have a color of yellow, and region 1616 may have a color of green. Note that the ProstatID index and the corresponding colors in the color map (e.g., in Tables 2 and 4) may have been determined from the ROC analysis of 150 evaluation cases, which is described further below. [image omitted] [0096] In some embodiments, the ProstatID index may be used to assist a physician in or to automatically assign a PI-RADS score. For example, Table 4 also provides computer-generated recommendations for PI-RADS scoring based at least in part on: the ProstatID color-coded index, morphology, the size of the primary lesion (as measured in 2D and/or 3D), and/or whether there is extraprostatic extension/invasive behavior or other complicating factors. Muehlberg: [0118] For example, the assistance algorithm can support clinical researchers doing statistical analysis, in particular not to make typical layman's errors when using automation tools.
[0119] The function for processing the radiomics-related data can be configured for generating high-quality publication-ready figures (e.g., segmentations and boxplots). [0120] As the assistance algorithm can have access to the segmentations and results, it can automatically generate high-quality and paper-ready figures of the segmentations, the analyzed body region and the results. Visualizations may show univariate results by boxplots (quantiles, outliers) or Bland-Altman plots and multivariate models by the Receiver-Operating Characteristic (ROC) curve with confidence interval. [0121] For survival analysis, Kaplan-Meier curves are automatically generated with a separation in predicted high- and low-risk curve. Also, a CONSORT diagram may be generated based on the Data Assessment done before. Statistics for the given data frame are extracted automatically, e.g. how many entries are missing for each respective feature analyzed.)

Consider Claim 7. The combination of Dhaouadi and Muehlberg teaches: 7. (Original) The method according to claim 1, wherein the predictive model is an artificial neural network model based on a masked autoencoder configured to perform self-supervised learning of unique features of the medical image. (Dhaouadi: [0089] While the preceding discussion illustrated the analysis techniques with a BPRF model, more generally the analysis techniques may use a predictive model that is pretrained or predetermined using a machine-learning technique (such as a supervised learning technique, an unsupervised learning technique and/or a neural network) and a training dataset.
For example, the predictive model may include a classifier or a regression model that was trained using: random forests, a support vector machine technique, a classification and regression tree technique, logistic regression, LASSO, linear regression, a neural network technique (such as deep learning, a convolutional neural network technique, an autoencoder neural network or another type of neural network technique), a boosting technique, a bagging technique, another ensemble learning technique and/or another linear or nonlinear supervised-learning technique. Muehlberg: [0171] Any of the algorithms mentioned herein, in particular the assistance algorithm and/or the function for processing the radiomics-related data, can be based on one or more of the following architectures: convolutional neural network, deep belief network, random forest, deep residual learning, deep reinforcement learning, recurrent neural network, Siamese network, generative adversarial network or auto-encoder. In particular, the trained machine learning algorithm can be embodied as a deep learning algorithm, in particular as a deep convolutional neural network.)

Consider Claims 8 and 18. The combination of Dhaouadi and Muehlberg teaches: 8. (Original) The method according to claim 1, wherein the first convolutional layer is configured to perform a filtering function of reducing noise of the input medical image by training a feature map divided into patch units for the input medical image. / 18. (Original) The device according to claim 11, wherein the first convolutional layer is configured to perform a filtering function of reducing noise of the input medical image by training a feature map divided into patch units for the input medical image. (Dhaouadi: [0041] A computer system that analyzes medical-imaging data (e.g., MRI studies) to assess a risk for prostate cancer is described.
The computer system may compute features (including intensity features, texture features and a spatial feature) based at least in part on the medical-imaging data. Then, using a pretrained predictive model, the computer system may determine cancer predictions on a voxel-by-voxel basis, based at least in part on the computed features. Note that the pretrained predictive model may include a BPRF model with a boosted ensemble of bagging ensemble models (such as classifiers), where a given bagging ensemble model includes an ensemble of random forests models. Next, the computer system may provide feedback based on the cancer predictions for the voxels. For a given voxel, the feedback may include a cancer prediction and a location. In some embodiments, for the given voxel, the feedback may include an aggressiveness of the predicted cancer, information associated with disease progression (such as a disease stage) and/or a recommended therapy (e.g., based at least in part on the aggressiveness and/or the disease stage). [0051]-[0052] In general, the pretrained predictive model may include a machine-learning model or a neural network, which may include or combine one or more convolutional layers, one or more residual layers and one or more dense or fully connected layers, and where a given node in a given layer in the given neural network may include an activation function, such as: a rectified linear activation function or ReLU, a leaky ReLU, an exponential linear unit or ELU activation function, a parametric ReLU, a tanh activation function, and/or a sigmoid activation function. As described further below with reference to FIG. 10 , in some embodiments the pretrained predictive model may include a pretrained BPRF model, and the pretrained BPRF model may include a boosted ensemble of bagging ensemble models (such as classifiers), where a given bagging ensemble model includes an ensemble of random forests models. 
Moreover, the boosted ensemble may be based at least in part on an adaptive boosting technique and the bagging ensemble may be based at least in part on a Bayesian estimator technique. In some embodiments, the boosted ensemble may be computed sequentially and the bagging ensemble may be computed in parallel. Additionally, as described further below with reference to FIGS. 11-12, the pretrained predictive model may have an improved receiver operator characteristic (ROC) and an improved free-response receiver operator characteristic (FROC) relative to other models or model architectures. [0065]-[0066] Muehlberg: [0170]-[0171] Any of the algorithms mentioned herein, in particular the assistance algorithm and/or the function for processing the radiomics-related data, can be based on one or more of the following architectures: convolutional neural network, deep belief network, random forest, deep residual learning, deep reinforcement learning, recurrent neural network, Siamese network, generative adversarial network or auto-encoder. In particular, the trained machine learning algorithm can be embodied as a deep learning algorithm, in particular as a deep convolutional neural network.)

Consider Claims 9 and 19. The combination of Dhaouadi and Muehlberg teaches: 9. (Original) The method according to claim 1, wherein the medical image is at least one of an ultrasound image, an X-ray image, a computed tomography (CT) image, a magnetic resonance imaging (MRI) image, and an endoscopic image. / 19. (Original) The device according to claim 11, wherein the medical image is at least one of an ultrasound image, an X-ray image, a CT image, an MRI image, and an endoscopic image. (Dhaouadi: [0043] In the discussion that follows, the analysis techniques are used to analyze MRI data, such as T2W images, ADC images, and/or DWI images.
However, the analysis techniques may be used to analyze a wide variety of types of MR images (which may or may not involve MRI, e.g., free-induction-decay measurements), such as: MRS with one or more types of nuclei, MR spectral imaging (MRSI), MR elastography (MRE), MR thermometry (MRT), magnetic-field relaxometry and/or another MR technique (e.g., functional MRI, metabolic imaging, molecular imaging, blood-flow imaging, diffusion-tensor imaging, etc.). More generally, the analysis techniques may be used to analyze measurement results from a wide variety of invasive and non-invasive imaging techniques, such as: X-ray measurements (such as X-ray imaging, X-ray diffraction or computed tomography at one or more wavelengths between 0.01 and 10 nm), neutron measurements (neutron diffraction), electron measurements (such as electron microscopy or electron spin resonance), optical measurements (such as optical imaging or optical spectroscopy that determines a complex index of refraction at one or more visible wavelengths between 300 and 800 nm or ultraviolet wavelengths between 10 and 400 nm), infrared measurements (such as infrared imaging or infrared spectroscopy that determines a complex index of refraction at one or more wavelengths between 700 nm and 1 mm), ultrasound measurements (such as ultrasound imaging in an ultrasound band of wavelengths between 0.2 and 1.9 mm), proton measurements (such as proton scattering), positron emission spectroscopy, positron emission tomography (PET), impedance measurements (such as electrical impedance at DC and/or an AC frequency) and/or susceptibility measurements (such as magnetic susceptibility at DC and/or an AC frequency). [0044] Muehlberg: [0161] Herewith, in at least one embodiment, a medical imaging device is disclosed, the medical imaging device comprising a data processing system for providing radiomics-related information according to one or more of the disclosed embodiments. 
The medical imaging device may be, for example, a computed tomography (CT) device or a magnetic resonance imaging (MRI) device or a combination of different medical imaging modalities, for example, a PET-CT-imaging device. The medical imaging data can be acquired, for example, by the medical imaging device. The medical imaging data can comprise, for example, computed tomography medical imaging data and/or magnetic resonance medical imaging data.)

Consider Claims 10 and 20. The combination of Dhaouadi and Muehlberg teaches: 10. (Original) The method according to claim 1, wherein the medical image is a cardiac ultrasound image. / 20. (Original) The device according to claim 11, wherein the medical image is a cardiac ultrasound image. (Dhaouadi: [0043] In the discussion that follows, the analysis techniques are used to analyze MRI data, such as T2W images, ADC images, and/or DWI images. However, the analysis techniques may be used to analyze a wide variety of types of MR images (which may or may not involve MRI, e.g., free-induction-decay measurements), such as: MRS with one or more types of nuclei, MR spectral imaging (MRSI), MR elastography (MRE), MR thermometry (MRT), magnetic-field relaxometry and/or another MR technique (e.g., functional MRI, metabolic imaging, molecular imaging, blood-flow imaging, diffusion-tensor imaging, etc.).
More generally, the analysis techniques may be used to analyze measurement results from a wide variety of invasive and non-invasive imaging techniques, such as: X-ray measurements (such as X-ray imaging, X-ray diffraction or computed tomography at one or more wavelengths between 0.01 and 10 nm), neutron measurements (neutron diffraction), electron measurements (such as electron microscopy or electron spin resonance), optical measurements (such as optical imaging or optical spectroscopy that determines a complex index of refraction at one or more visible wavelengths between 300 and 800 nm or ultraviolet wavelengths between 10 and 400 nm), infrared measurements (such as infrared imaging or infrared spectroscopy that determines a complex index of refraction at one or more wavelengths between 700 nm and 1 mm), ultrasound measurements (such as ultrasound imaging in an ultrasound band of wavelengths between 0.2 and 1.9 mm), proton measurements (such as proton scattering), positron emission spectroscopy, positron emission tomography (PET), impedance measurements (such as electrical impedance at DC and/or an AC frequency) and/or susceptibility measurements (such as magnetic susceptibility at DC and/or an AC frequency). [0044] Muehlberg: [0161] Herewith, in at least one embodiment, a medical imaging device is disclosed, the medical imaging device comprising a data processing system for providing radiomics-related information according to one or more of the disclosed embodiments. The medical imaging device may be, for example, a computed tomography (CT) device or a magnetic resonance imaging (MRI) device or a combination of different medical imaging modalities, for example, a PET-CT-imaging device. The medical imaging data can be acquired, for example, by the medical imaging device. The medical imaging data can comprise, for example, computed tomography medical imaging data and/or magnetic resonance medical imaging data.) 
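Editor's note: for readers tracking the limitations of claims 1/11 mapped above, the claimed sequence is harmonize (a convolutional filter that is the first convolutional layer of the predictive model), then generate a region of interest, then extract features. The sketch below mirrors that sequence only in outline; the averaging kernel, the mean-threshold ROI, and the feature set are hypothetical stand-ins, not the applicant's or either reference's implementation.

```python
import numpy as np

def convolve2d(image, kernel):
    """Minimal 'valid' 2-D convolution, standing in for the claimed
    first convolutional layer (illustrative only)."""
    kh, kw = kernel.shape
    out = np.zeros((image.shape[0] - kh + 1, image.shape[1] - kw + 1))
    for r in range(out.shape[0]):
        for c in range(out.shape[1]):
            out[r, c] = (image[r:r + kh, c:c + kw] * kernel).sum()
    return out

rng = np.random.default_rng(0)
image = rng.normal(loc=100.0, scale=10.0, size=(16, 16))  # synthetic stand-in image

# 1. "Harmonize": apply a 3x3 averaging kernel as the convolutional filter.
kernel = np.full((3, 3), 1.0 / 9.0)
filtered = convolve2d(image, kernel)

# 2. Generate a crude region of interest: pixels above the filtered mean.
roi_mask = filtered > filtered.mean()

# 3. Extract simple intensity/statistical features over the ROI, including a
#    discretized-grayscale histogram (cf. the discretization of claims 4/14).
roi = filtered[roi_mask]
features = {
    "mean": roi.mean(),
    "std": roi.std(),
    "histogram": np.histogram(roi, bins=4)[0],
}
```

In a real model the kernel weights would be learned, and the ROI would come from the predictive model rather than a fixed threshold.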
Conclusion

The prior art made of record in form PTO-892 and not relied upon is considered pertinent to applicant's disclosure.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to TAHMINA ANSARI, whose telephone number is 571-270-3379. The examiner can normally be reached on IFP Flex, Monday through Friday, 9 to 5.

If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, O'NEAL MISTRY, can be reached at 313-446-4912. The fax phone numbers for the organization where this application or proceeding is assigned are 571-273-8300 for regular communications and 571-273-8300 for After Final communications.

TC 2600's customer service number is 571-272-2600. Any inquiry of a general nature or relating to the status of this application or proceeding should be directed to the receptionist, whose telephone number is 571-272-2600.

/TAHMINA N ANSARI/
Primary Examiner, Art Unit 2674
February 21, 2026

Prosecution Timeline

Oct 10, 2024
Application Filed
Dec 16, 2025
Response after Non-Final Action
Feb 21, 2026
Non-Final Rejection — §102, §103, §112 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12586249
PROCESSING APPARATUS, PROCESSING METHOD, AND STORAGE MEDIUM FOR CALIBRATING AN IMAGE CAPTURE APPARATUS
2y 5m to grant Granted Mar 24, 2026
Patent 12586354
TRAINING METHOD, APPARATUS AND NON-TRANSITORY COMPUTER READABLE MEDIUM FOR A MACHINE LEARNING MODEL
2y 5m to grant Granted Mar 24, 2026
Patent 12573083
COMPUTER-READABLE RECORDING MEDIUM STORING OBJECT DETECTION PROGRAM, DEVICE, AND MACHINE LEARNING MODEL GENERATION METHOD OF TRAINING OBJECT DETECTION MODEL TO DETECT CATEGORY AND POSITION OF OBJECT
2y 5m to grant Granted Mar 10, 2026
Patent 12548297
IMAGE PROCESSING METHOD AND APPARATUS, COMPUTER DEVICE, STORAGE MEDIUM, AND PROGRAM PRODUCT BASED ON FEATURE AND DISTRIBUTION CORRELATION
2y 5m to grant Granted Feb 10, 2026
Patent 12524504
METHOD AND DATA PROCESSING SYSTEM FOR PROVIDING EXPLANATORY RADIOMICS-RELATED INFORMATION
2y 5m to grant Granted Jan 13, 2026
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

1-2
Expected OA Rounds
86%
Grant Probability
99%
With Interview (+17.9%)
2y 8m
Median Time to Grant
Low
PTA Risk
Based on 868 cases resolved by this examiner. Grant probability is derived from the examiner's career allowance rate.
