DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 1-4 and 9-10 are rejected under 35 U.S.C. 103 as being unpatentable over Abdolell (WO 2020102914 A1) in view of Hartkens (EP 3680912 A1), and further in view of Demesmaeker (PGPUB: 20200027251 A1).
Regarding claim 1. Abdolell teaches a method for an image quality assessment system, the method comprising:
receiving a selection of a medical image from a user of the image quality assessment system (see paragraph 64, receiving a study input indicating the study selected by the user; accessing the database to retrieve image parameter feature scores and an overall image quality score for each image of the selected study; and displaying images of the selected study along with the image parameter feature scores and the overall image quality score for each image of the study);
generating an image quality score for the selected medical image, the image quality score generated using a trained machine learning (ML) model (see paragraph 141, the MRT workstation 106 may also display a plurality of image quality parameters for at least one image in a study, and a plurality of predicted image quality parameter scores. The scores are predicted in the sense that a predictive model is used to determine the score and whereby covariates are input into the predictive model and the predictive model predicts the probability of the event (which is the presence of an image quality error amongst the plurality of image quality errors));
displaying the selected medical image and the image quality score in a graphical user interface (GUI) on a display device of the image quality assessment system (see paragraph 64, an overall study quality score for the selected study on the graphical user interface on the display device);
receiving an adjusted image quality score of the medical image from the user via the GUI (see paragraph 213, one or more configurable predicted image quality parameter feature thresholds may be shown on a user interface, and may be adjusted by the user based on user input. The embodiment may further include an interface with two or more medical images, and the adjustment of the configurable predicted image quality parameter feature thresholds may adjust the display of the two or more medical images into a positive set where an image is predicted to have a particular image quality feature, and a negative set where an image is predicted not to have a particular image quality feature); and
using the image quality score to retrain the ML model (see Fig. 5 and 8, paragraphs 246-250, At act 802, a medical image is acquired such as a mammographic image by a medical image quality system, such as medical image quality system 100. At act 804, the medical image quality system may determine a predicted image quality and image quality parameter score. At act 806, the medical image quality system may provide feedback to the MRT at the MRT workstation 106. A predicted plurality of image and study quality parameter scores and indices (IQPS/SQPS, IQPI/SQPI, IQS/SQS, IQI/SQI) may also be provided as feedback to the MRT at the MRT workstation 106; At act 810, which is optional, the plurality of image quality parameter features and image quality scores may be used along with the mammographic image in order to determine an expert assessment, perhaps by the system administrator. The plurality of image quality parameter features and the image quality scores along with the expert assessment may then be used as new training data for model retraining to periodically update the predictive model at act 812 by following the training method of FIG. 5 using the new training data).
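As an illustrative aside, the feedback-and-retraining flow described above (predicted scores shown to the user, user-adjusted scores collected as new training examples, model periodically retrained) can be sketched as follows. This is not code from any cited reference; `model.fit` and the buffer size are hypothetical stand-ins for whatever training procedure and batching the predictive model actually uses.

```python
class RetrainingLoop:
    """Collect user-adjusted quality scores and periodically retrain the model.

    `model` is any object exposing a fit(X, y) training method (a stand-in
    for the predictive model of the cited references).
    """

    def __init__(self, model, batch_size=100):
        self.model = model
        self.batch_size = batch_size
        self.buffer = []  # accumulated (features, adjusted_score) pairs

    def record_feedback(self, features, adjusted_score):
        """Store one user-adjusted score; retrain when the buffer is full."""
        self.buffer.append((features, adjusted_score))
        if len(self.buffer) >= self.batch_size:
            X, y = zip(*self.buffer)
            self.model.fit(list(X), list(y))  # periodic model update
            self.buffer.clear()
```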
However, Abdolell does not expressly teach receiving an adjusted image quality score of the medical image from the user.
Hartkens teaches that the result of the image quality level assessment may be displayed by the assessment component to a user (e.g., an on-site technician) with an indication of the predicted satisfaction level, so that the user may know whether the physician in charge of the examination will prospectively be satisfied with the acquired medical image or not. When the physician will likely be satisfied (e.g., when the determined level of image quality exceeds a predetermined threshold), the acquired medical image may be provided to the physician for review. This may comprise transmitting the acquired medical image to a workstation of the physician (in particular, when the physician works from a remote location). On the other hand, when the physician will likely not be satisfied (e.g., when the determined level of image quality does not exceed the predetermined threshold), the acquisition of the medical image may be re-run with an adjusted configuration of the medical imaging device within the same examination, i.e., without the patient having to leave the scanner. When acquiring the medical image is performed as part of an examination, the method may thus further comprise repeating acquiring the medical image by the medical imaging device with a different configuration within the (same) examination (see paragraph 8). Upon receipt of the acquired medical image for review, the physician may rate the medical image using a feedback component provided in the medical imaging system to provide feedback on the quality level of the medical image as perceived by the physician. Such feedback may systematically be collected and incorporated into the collection of feedback of physicians for previously acquired medical images mentioned above (see paragraph 9).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Abdolell with Hartkens's teaching that, when the physician will likely not be satisfied (e.g., when the determined level of image quality does not exceed the predetermined threshold), the acquisition of the medical image may be re-run with an adjusted configuration of the medical imaging device within the same examination, i.e., without the patient having to leave the scanner, and that the physician may rate the acquired medical image to provide feedback on its quality level, as the claimed adjusted image quality score received from the user. Combining these prior art elements according to known methods and techniques, such as re-running the acquisition of the medical image with an adjusted configuration of the medical imaging device, would yield predictable results.
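For illustration only, the threshold-gated re-acquisition logic Hartkens describes (re-run acquisition with an adjusted configuration until the predicted quality meets the threshold, within the same examination) can be sketched as below. The function and parameter names are hypothetical; `acquire_image` and `score_image` stand in for the imaging-device interface and the trained quality model, neither of which is specified in code form by the reference.

```python
def acquire_until_satisfactory(acquire_image, score_image, threshold, max_attempts=3):
    """Re-run acquisition with an adjusted configuration until the predicted
    quality score meets the threshold, within the same examination.

    Returns (image, satisfied): the last acquired image and whether its
    predicted score reached the threshold.
    """
    image = None
    for attempt in range(max_attempts):
        config = {"attempt": attempt}       # adjusted configuration per retry
        image = acquire_image(config)       # acquire with current configuration
        if score_image(image) >= threshold: # physician likely satisfied
            return image, True
    return image, False                     # escalate / forward with warning
```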
However, the combination does not expressly teach the image quality score generated using a trained machine learning (ML) model by computing a plurality of low-level metrics that evaluate image quality in one or more slices of the medical image and inputting numerical results of the computed low-level metrics to the trained ML model.
Demesmaeker teaches that a quality scorer is configured to apply a trained neural network to the medical image to generate a quality score of the medical image (see paragraph 10); when the neural network gives a quality score to the image estimate at or above a threshold value, the reconstruction may stop further iterations and output the image. If the quality score is below the threshold, one or more further reconstruction iterations may be performed to increase image quality (see paragraph 36); the neural network may be trained by a machine using example images with associated image quality values. Once trained, the neural network is able to produce a quality score based on an input image that the neural network was not trained on (see paragraph 37); the example images may be two-dimensional or three-dimensional representations of the image data. For example, the example images may be a series of two-dimensional slices or patches oriented in one or more orientations taken from a three-dimensional imaging volume represented by the image data (see paragraph 39); the neural network is applied to the second medical image to generate a second quality score. The second quality score may be scored using the same scale as the first quality score to aid in comparing the image quality values. For example, both the first and second image quality scores may be scored on a scale of 0.0 to 5.0. Other scales may be used (see paragraph 66).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to further modify the combination with Demesmaeker's teachings that a quality scorer applies a trained neural network to the medical image to generate a quality score of the medical image; that when the neural network gives the image estimate a quality score at or above a threshold value, the reconstruction may stop further iterations and output the image, while a score below the threshold triggers one or more further reconstruction iterations to increase image quality; that the example images may be a series of two-dimensional slices or patches oriented in one or more orientations taken from a three-dimensional imaging volume represented by the image data; and that the first and second image quality scores may be scored on a scale of 0.0 to 5.0. These teachings provide the image quality score generated using a trained machine learning (ML) model by computing a plurality of low-level metrics that evaluate image quality in one or more slices of the medical image and inputting numerical results of the computed low-level metrics to the trained ML model. Combining these prior art elements according to known methods and techniques would yield predictable results.
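As an illustrative aside, the claimed pipeline (compute low-level metrics per slice, feed the numerical results to a trained model, score on a 0.0-5.0 scale as Demesmaeker mentions) can be sketched as follows. The metric set and the linear "model" are placeholders chosen for illustration, not the actual trained network of any cited reference.

```python
import numpy as np

def slice_metrics(sl):
    """Low-level metrics for one 2-D slice: mean intensity, intensity
    spread, and a crude sharpness proxy (mean gradient magnitude)."""
    gy, gx = np.gradient(sl.astype(float))
    return np.array([sl.mean(), sl.std(), np.hypot(gx, gy).mean()])

def score_volume(volume, weights, bias):
    """Score each slice from its metric vector via a linear stand-in model,
    then aggregate (mean) into one quality score clipped to 0.0-5.0."""
    per_slice = [float(np.clip(slice_metrics(s) @ weights + bias, 0.0, 5.0))
                 for s in volume]
    return sum(per_slice) / len(per_slice)
```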
Regarding claim 2. The combination teaches the method of claim 1, wherein generating the image quality score is performed without processing individual components of the medical image pixel-by-pixel (see Abdolell, paragraph 68, receiving at least one filter input from the user; accessing a database to retrieve images and a plurality of image parameter feature scores for each image, where the retrieved images satisfy the at least one filter input; displaying a given icon for a given image parameter feature that corresponds to the image parameter feature score on the graphical user interface on the display device; comparing the given image parameter feature score for the retrieved images to a threshold associated with the given image parameter feature to determine a given number of the retrieved images that satisfy the threshold; displaying the number of retrieved images on the graphical user interface on the display device).
Regarding claim 3. The combination teaches the method of claim 1, wherein the medical image is ingested at the image quality assessment system from a Digital Imaging and Communications in Medicine (DICOM) file (see Hartkens, paragraph 10, once selected, the set of acquisition parameters stored in association with the selected reference image may be applied by a configuration component to the medical imaging device to effectively configure the medical imaging device. If the configuration component is a separate component, the selected reference image may be transferred to the medical imaging device, e.g., using the digital imaging and communications in medicine (DICOM) standard, and, once transferred, the set of acquisition parameters stored in association with the selected reference image (e.g., stored together with the reference image in a DICOM file) may be applied to the medical imaging device).
Regarding claim 4. The combination teaches the method of claim 3, wherein ingesting the DICOM file further comprises extracting (see Abdolell, paragraph 172, the network structure of the CNN predictive model 5100 allows for features to be extracted from the images and used (e.g. combined with hand crafted features) to learn different quality metrics from the dataset.) and aggregating metadata of the DICOM file by level (see Abdolell, paragraph 205, a medical image is received at the processor and the medical image is associated with a plurality of image metadata. The plurality of image metadata may be DICOM metadata, or other data in other formats as explained previously. The medical image may be preprocessed to determine a plurality of image quality parameter features).
Regarding claim 9. The combination teaches the method of claim 1, wherein the image quality score for the selected medical image corresponds to a region of interest (ROI) of the selected medical image (see Hartkens, paragraph 14, the plurality of reference images may be categorized by at least one of medical imaging device types, standard image sets and/or physician specific image sets, body regions, and anatomical parts. The central repository may thus be organized in categories to provide support for different types of imaging devices and to facilitate the retrieval of reference images. Among the above-mentioned categories, the category "medical imaging device type" may indicate for a reference image a type of medical imaging device on which the reference image has been acquired, the category "standard image set" may indicate for a reference image that the reference image is representative of a generally recommended set of acquisition parameters (e.g., independently from personal preferences of a particular physician), the category "physician specific image set" may indicate for a reference image that the reference image is representative of a set of acquisition parameters preferred by a particular physician, the category "body region" may indicate for a reference image a body region for which the reference image is representative),
the ROI defined by boundaries superimposed on the selected medical image in the GUI, the boundaries repositionable by the user (see Hartkens, paragraph 11, each of the plurality of reference images may previously be selected as being representative of a desired level of image quality for a particular type of medical image. In other words, the plurality of reference images may correspond to images which are selected to be of "good" quality for particular types of medical images. The particular type of a medical image may relate to a particular body region or a particular anatomical part, for example. The plurality of reference images may be selected (or "exported") from a picture archiving and communication system (PACS), which may store a collection of previously acquired medical images. The selection of the plurality of reference images from the PACS may be performed by users with medical expertise, such as physicians (e.g., radiologists) or other medical personnel).
Regarding claim 10. The combination teaches the method of claim 9, wherein: in response to the user repositioning one or more of the boundaries to define a second portion of the selected medical image: displaying an adjusted image quality score in the GUI, the adjusted image quality score corresponding to the second portion of the medical image (see Abdolell, Fig. 15, paragraph 305, a user interface 1500 showing image quality parameter scores for image quality parameters 1502 for an individual image. User interface 1500 may be referred to as an image level view and may be generated by the GUI engine 233. A user may select one of the images from those in a study by selecting one of the thumbnail images 1504. Each thumbnail image represents a different mammographic view for a study on one patient taken at a particular time. When the GUI engine 233 receives a user input for selecting one of the thumbnail images, the GUI engine 233 displays a larger version of the selected image such as image 1505. The GUI engine 233 also then displays the scores for a plurality of IQPs 1502).
Claims 5-7 are rejected under 35 U.S.C. 103 as being unpatentable over Abdolell (WO 2020102914 A1) in view of Hartkens (EP 3680912 A1), in view of Demesmaeker (PGPUB: 20200027251 A1), and further in view of Boddington (PGPUB: US 20220265233 A1).
Regarding claim 5. The combination teaches the method of claim 1, wherein generating the image quality score for the medical image further comprises
associating one or more slices of the medical image with an anatomical region of the one or more anatomical regions (see Hartkens, paragraph 12, the medical images acquired by the medical imaging device may correspond to MR images, CT images, X-ray images, etc. accordingly. Acquisition parameters may generally be used to configure the medical imaging device for a particular acquisition to be performed. As a mere example, typical acquisition parameters applied to an MR scanner may comprise definitions of a repetition time, an echo time, a number of signal averages, a slice thickness, an acquisition plane and a field of view, for example).
However, the combination does not expressly teach automatically detecting one or more anatomical regions in the medical image.
Boddington teaches that module 7 is an image annotation module that includes image processing algorithms or advanced Deep learning-based techniques for detecting anatomical structures in a medical image and identifying contours or boundaries of anatomical objects in a medical image, such as bone or soft tissue boundaries. Anatomical Landmark detection stands for the identification of key elements of an anatomical body part that potentially have a high level of similarity with the same anatomical body part of other patients (see paragraph 96).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to further modify the combination with Boddington's advanced deep learning-based techniques for detecting anatomical structures in a medical image and identifying contours or boundaries of anatomical objects in a medical image, such as bone or soft tissue boundaries, as the claimed automatic detection of one or more anatomical regions in the medical image. Combining these prior art elements according to known methods and techniques, such as deep learning-based detection of anatomical structures and identification of contours or boundaries of anatomical objects, would yield predictable results.
Regarding claim 6. The combination teaches the method of claim 5, wherein generating the image quality score for the medical image further comprises generating a respective individual image quality score for each slice of the one or more slices based on the numeric results of the low-level metrics and aggregating the individual image quality scores to generate the image quality score (see Demesmaeker, Fig. 3 and 4, paragraphs 48, 50, and 71, the image data in the slices 309 may be reconstructed into one or more images 303; The patches 305 may be input for quality scoring by a neural network 307. Alternatively, the neural network 307 may accept the whole images 303 as input for quality scoring. The neural network may be taught by a machine based on a collection of example training images and associated image quality values; where the change in the image quality scores is below the threshold, the second medical image is output in act S321. A small change in the image quality score over reconstruction iterations may indicate that the reconstructed images are similar. Because further reconstruction iterations are unlikely to further improve the image quality significantly, the latest (e.g. the second) image may be output when the change in the quality score is below the threshold. Additionally or alternatively, the threshold may have a minimum number of iterations. For example, if the change in the quality score has been below the threshold for more than one iteration, the newest medical image may be output).
Regarding claim 7. The combination teaches the method of claim 6, wherein the plurality of low-level metrics (see Demesmaeker, Fig. 1, paragraph 36, the neural network forms all or part of the exit criteria evaluated in act S111. In some cases, the quality score of the estimated image is compared to a threshold. For example, when the neural network gives a quality score to the image estimate at or above a threshold value, the reconstruction may stop further iterations and output the image. If the quality score is below the threshold, one or more further reconstruction iterations may be performed to increase image quality. In some other cases, the change in the quality score over one or more previous iterations may form all or part of the exit criteria) include at least one of:
a signal-to-noise ratio (SNR); a total amount of noise; a noise power spectrum (NPS); and a contrast-to-noise ratio (CNR) (see Hartkens, paragraph 33, these measurements comprise assessments of the signal-to-noise ratio, contrast-to-noise ratio).
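For illustration, two of the listed low-level metrics can be computed from region-of-interest statistics as sketched below. The cited references do not give their exact formulations; these are the common textbook definitions (mean signal over noise standard deviation for SNR, absolute mean difference between two regions over noise standard deviation for CNR).

```python
import numpy as np

def snr(signal_roi, noise_roi):
    """Signal-to-noise ratio: mean signal over noise standard deviation."""
    return float(np.mean(signal_roi) / np.std(noise_roi))

def cnr(roi_a, roi_b, noise_roi):
    """Contrast-to-noise ratio: absolute difference of the two region means,
    normalized by the noise standard deviation."""
    return float(abs(np.mean(roi_a) - np.mean(roi_b)) / np.std(noise_roi))
```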
Response to Arguments
Applicant’s arguments with respect to claim(s) 1 have been considered but are moot because the new ground of rejection does not rely on any reference applied in the prior rejection of record for any teaching or matter specifically challenged in the argument.
Conclusion
Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to XIN JIA whose telephone number is (571)270-5536. The examiner can normally be reached 9:00 am - 7:30 pm.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Gregory Morse can be reached at (571)272-3838. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/XIN JIA/Primary Examiner, Art Unit 2663