DETAILED ACTION
Claims 3 and 15 have been cancelled.
Claims 21-25 have been added.
Claims 1, 4-9 and 16-25 are currently pending.
Response to Arguments
Applicant's arguments filed 3/3/26 have been fully considered but they are not persuasive.
The Applicant argues on page 14 of the response, in essence, that: However, Wang, like Favazza, simply does not suggest … acquiring a weight map in which weight coefficients are mapped, the weight map indicating a probability distribution of presence of metal artifacts; and generating a composite image, by using the weight map indicating the probability distribution of the presence of metal artifacts, to composite (i) the machine learning output image data and (ii) the tomographic image data of the tomographic image of the object under examination …
Wang discloses that a weighting mask may be used on the original image. The weighting mask may include weighting coefficients for various pixels in the original image. For example, for a pixel close to a metal artifact, a relatively large weighting coefficient (e.g., close to 1) may be chosen; for a pixel distant to a metal artifact, a relatively small weighting coefficient (e.g., transitioning from 1 to 0) may be chosen (paragraph 87). The weighting mask is used to fuse the original image with the corrected image.
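For illustration only, and not as a reproduction of Wang's implementation, the distance-dependent weighting described in paragraph 87 can be sketched as follows; the function name, the falloff and pixel-size parameters, the boolean metal_mask input, and the use of scipy's distance transform are all assumptions made for the example:

import numpy as np
from scipy.ndimage import distance_transform_edt

def fuse_with_weighting_mask(original, corrected, metal_mask, falloff_mm=10.0, pixel_mm=1.0):
    # Distance (in mm) from every pixel to the nearest metal pixel.
    dist_mm = distance_transform_edt(~metal_mask) * pixel_mm
    # Weighting coefficient: close to 1 adjacent to metal, transitioning toward 0
    # as the distance approaches the assumed falloff.
    w = np.clip(1.0 - dist_mm / falloff_mm, 0.0, 1.0)
    # Per-pixel fusion: rely more on the corrected image near metal and
    # keep the original image far from metal.
    return w * corrected + (1.0 - w) * original

Whether the corrected image or the original image receives the larger coefficient near metal is a design choice; the sketch only illustrates the idea of coefficients near 1 close to metal that transition toward 0 with distance.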
Claim Rejections - 35 USC § 112
The following is a quotation of 35 U.S.C. 112(b):
(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.
The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:
The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.
Claim 21 is rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or for applications subject to pre-AIA 35 U.S.C. 112, the applicant), regards as the invention.
Claim 21 recites the limitation "the weighted values" in line 5. There is insufficient antecedent basis for this limitation in the claim.
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 1, 4-7, 9, 16-19 and 21-25 are rejected under 35 U.S.C. 103 as being unpatentable over Favazza et al. US Publication 2024/0135603 (hereafter “Favazza”), Wang et al. WO Publication 2017/063569 (hereafter “Wang”) and Zhang et al. “Convolutional Neural Network Based Metal Artifact Reduction in X-Ray Computed Tomography,” in IEEE Transactions on Medical Imaging, vol. 37, no. 6, pp. 1370-1381, June 2018 (hereafter “Zhang”).
Referring to claim 1, Favazza discloses a medical image processing apparatus comprising:
a processor and a non-transitory storage medium storing one or more programs and data for executing the programs (paragraph 29, A trained neural network (or other suitable machine learning algorithm) is then accessed with the computer system, as indicated at step 104), the programs being instructions executable by the processor to configure said medical image processing apparatus to perform a method comprising:
obtaining tomographic image data by reconstructing a tomographic image from projection data of an object under examination including a metal, the reconstructed tomographic image including metal artifacts (paragraph 68-69, The method includes accessing CT image data with a computer system, as indicated at step 602. In general, the CT image data contain images of a subject that have been acquired with a CT imaging system. In addition to depicting the subject's anatomy, the images also depict the presence of metal objects and the corresponding metal object artifacts),
obtaining machine learning output by acquiring a machine learning output image that includes reduced metal artifacts and is output when the tomographic image data is input to a machine learning engine that has machine-learned to reduce metal artifacts in the tomographic image of the object under examination (paragraph 72, The CT image data are then input to the first trained neural network, generating output as anatomy-only image data and artifact containing metal object image data, as indicated at step 606. For example, the anatomy-only image data may include CT images that depict substantially only the subject's anatomy (i.e., with the metal object and corresponding artifacts removed or otherwise significantly reduced)); and
generating a composite image, based on the beam hardening correction image, to composite (i) the machine learning output image data and (ii) the tomographic image data of the tomographic image of the object under examination (paragraph 77, The anatomy-only image data output from the first neural network and the metal object-only image data output from the second neural network are then combined to generated anatomy and metal object containing image data, as indicated at step 612), and
wherein for the machine learning output image data and the tomographic image that are composited, an image quality of a region of the machine learning output image that is less affected by the metal artifacts than a corresponding region of the tomographic image that includes the metal artifacts is preserved relative to an image quality of a corresponding region in the tomographic image less affected by the metal artifacts, and an image quality of a counterpart region of the composite image corresponding to the region of the machine learning output image and the corresponding region in the tomographic image is preserved relative to an image quality of the corresponding region in the tomographic image which is composited with the machine learning output image (paragraph 77, The anatomy-only image data output from the first neural network and the metal object-only image data output from the second neural network are then combined to generated anatomy and metal object containing image data, as indicated at step 612).
Favazza does not disclose expressly compositing the images using a weight map.
Wang discloses acquiring a weight map based on a beam hardening correction image obtained by applying beam hardening correction to the tomographic image (paragraph 140, The streak artifact may result from beam hardening), weight coefficients of the weight map being mapped, and the weight map indicating a probability distribution of a presence of metal artifacts (paragraph 87, a weighing mask may be used on the original image. The weighting mask may include weighting coefficients for various pixels in the original image. For example, for a pixel close to a metal artifact, a relatively large weighting coefficient (e.g., close to 1) may be chosen; for a pixel distant to a metal artifact, a relatively small weighting coefficient (e.g., transitioning from 1 to 0) may be chosen); and
generating a composite image, by using the weight map based on the beam hardening correction image (paragraph 50, after metal object insertion, the projection data are also corrected for increased beam hardening. In one example, the beam hardening model can be predicted on parameters derived from the forward projections of the digital metal model and the original projections decoded from original CT raw data), to composite (i) the machine learning output image data and (ii) the tomographic image data of the tomographic image of the object under examination (paragraph 87, the compensation may be performed by fusing a high frequency part of the original image, and/or a high frequency part of the corrected image, and/or a low frequency part of the corrected image. In some embodiments, a weighing mask may be used on the original image).
At the time of the effective filing date of the claimed invention, it would have been obvious to a person of ordinary skill in the art to composite images using a weight map. The motivation for doing so would have been to improve the appearance of a combined image by accurately blending pixels from both images.
Favazza does not disclose expressly that an image quality of the region less affected by the metal artifacts is degraded.
Zhang discloses an image quality of a region of the machine learning output image that is less affected by the metal artifacts than a corresponding region of the tomographic image that includes the metal artifacts is degraded relative to an image quality of a corresponding region in the tomographic image less affected by the metal artifacts, and an image quality of a counterpart region of the composite image corresponding to the region of the machine learning output image and the corresponding region in the tomographic image is preserved relative to an image quality of the corresponding region in the tomographic image which is composited with the machine learning output image (page 1376, Due to the excellent image quality of the CNN image, a good CNN prior is generated, followed by a CNN-MAR image with superior image quality. It is clearly seen from Fig. 7(h) that the artifacts are almost removed completely and the tissue features in the vicinity of metals are faithfully preserved).
The Applicant’s Specification explains that, in the process taught by Zhang, degradation occurs in a region less affected by the metal artifacts (paragraph 5, However, in the above literature, although the metal artifacts are reduced, a degradation in image quality may be caused in a region less affected by the metal artifacts, for example, in a region away from the metal).
At the time of the effective filing date of the claimed invention, it would have been obvious to a person of ordinary skill in the art to utilize a process by which an image quality of the region less affected by the metal artifacts is degraded. The motivation for doing so would have been to improve the image quality of the region affected by metal artifacts by better removing the metal artifacts. Therefore, it would have been obvious to combine Wang and Zhang with Favazza to obtain the invention as specified in claim 1.
Referring to claims 4 and 16, Wang discloses wherein the weight map indicates a distribution of absolute values of differences between the tomographic image and a linear interpolation image that is obtained by applying a linear interpolation technique to the tomographic image (paragraph 95, In step 728, an interpolation may be performed based on the projection region of the artifact (s) in the projection domain. In some embodiments, the interpolation may be performed in the projection data of the original image, or in the difference between the projection data of the original image and that of the metal image. In some embodiments, the interpolation method may include linear interpolation).
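As a reading aid only, and under the assumption that such a weight map is simply a normalized absolute-difference image (the function name and the normalization constant are illustrative, not taken from the claims or from Wang), the recited distribution could be sketched as:

import numpy as np

def weight_map_from_abs_difference(tomo_image, linear_interp_image, scale_hu=500.0):
    # Large absolute differences (likely artifact regions) map to weights near 1;
    # small differences map to weights near 0. The normalization constant is assumed.
    diff = np.abs(tomo_image - linear_interp_image)
    return np.clip(diff / scale_hu, 0.0, 1.0)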
Referring to claims 5 and 17, Wang discloses wherein the weight coefficients become smaller with an increasing distance from a metal pixel extracted from the tomographic image (paragraph 87, In some embodiments, a weighing mask may be used on the original image. The weighting mask may include weighting coefficients for various pixels in the original image. For example, for a pixel close to a metal artifact, a relatively large weighting coefficient (e.g., close to 1) may be chosen; for a pixel distant to a metal artifact, a relatively small weighting coefficient (e.g., transitioning from 1 to 0) may be chosen).
Referring to claims 6 and 18, Wang discloses wherein the weight coefficient becomes larger as the metal pixel has a larger pixel value (paragraph 91, If the CT value of a pixel in the original image exceeds the segmentation threshold Tmetal, the pixel may be determined as a metal pixel in the metal image. Besides the metal pixels, CT values of other pixels in the metal image may be set as 0).
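Read together with claims 5 and 17, a hedged sketch of a weight map whose coefficients shrink with distance from extracted metal pixels and grow with the pixel value of the nearest metal pixel could look like the following; the threshold, falloff, and scaling constants are illustrative assumptions, not values from the claims or the cited art:

import numpy as np
from scipy.ndimage import distance_transform_edt

def weight_map_from_metal(tomo_hu, metal_threshold_hu=3000.0, falloff_mm=10.0,
                          pixel_mm=1.0, brightness_scale_hu=8000.0):
    # Extract metal pixels by thresholding the CT values (threshold is assumed;
    # the sketch assumes at least one metal pixel is present).
    metal = tomo_hu >= metal_threshold_hu
    # Distance to the nearest metal pixel and the index of that pixel.
    dist, idx = distance_transform_edt(~metal, return_indices=True)
    # Coefficients become smaller with increasing distance from a metal pixel ...
    decay = np.clip(1.0 - (dist * pixel_mm) / falloff_mm, 0.0, 1.0)
    # ... and larger where the nearest metal pixel has a larger pixel value.
    nearest_metal_hu = tomo_hu[tuple(idx)]
    brightness = np.clip(nearest_metal_hu / brightness_scale_hu, 0.0, 1.0)
    return decay * brightness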
Referring to claims 7 and 19, Favazza discloses wherein the machine learning output image and the tomographic image are composited together (paragraph 77, The anatomy-only image data output from the first neural network and the metal object-only image data output from the second neural network are then combined to generated anatomy and metal object containing image data, as indicated at step 612).
Wang discloses wherein the output image and the tomographic image are composited together by using a value obtained by multiplying the weight coefficient by an adjustment coefficient set in an adjustment coefficient setting portion (paragraph 102, Merely by way of example, for an original image including metal artifact(s), the weighting coefficients of the projection data of the pre-corrected image may be adjusted according to a weighting intensity).
Referring to claim 9, Favazza discloses a medical image processing method, comprising the steps of:
obtaining tomographic image data by reconstructing a tomographic image from projection data of an object under examination including a metal (paragraph 68-69, The method includes accessing CT image data with a computer system, as indicated at step 602. In general, the CT image data contain images of a subject that have been acquired with a CT imaging system. In addition to depicting the subject's anatomy, the images also depict the presence of metal objects and the corresponding metal object artifacts),
obtaining machine learning output by acquiring a machine learning output image that includes reduced metal artifacts and is output when the tomographic image data is input to a machine learning engine that has machine-learned to reduce metal artifacts (paragraph 72, The CT image data are then input to the first trained neural network, generating output as anatomy-only image data and artifact containing metal object image data, as indicated at step 606. For example, the anatomy-only image data may include CT images that depict substantially only the subject's anatomy (i.e., with the metal object and corresponding artifacts removed or otherwise significantly reduced)); and
generating a composite image to composite the machine learning output image data and the tomographic image data (paragraph 77, The anatomy-only image data output from the first neural network and the metal object-only image data output from the second neural network are then combined to generated anatomy and metal object containing image data, as indicated at step 612), and
wherein for the machine learning output image data and the tomographic image data that are composited, an image quality of a region of the machine learning output image that is less affected by the metal artifacts than a corresponding region of the tomographic image that includes the metal artifacts is preserved relative to an image quality of a corresponding region in the tomographic image less affected by the metal artifacts, and an image quality of a counterpart region of the composite image corresponding to the region of the machine learning output image and the corresponding region in the tomographic image is preserved relative to an image quality of the corresponding region in the tomographic image which is composited with the machine learning output image (paragraph 77, The anatomy-only image data output from the first neural network and the metal object-only image data output from the second neural network are then combined to generated anatomy and metal object containing image data, as indicated at step 612).
Favazza does not disclose expressly compositing the images using a weight map.
Wang discloses acquiring a weight map in which weight coefficients are mapped, the weight map indicating the presence of metal artifacts (paragraph 87, a weighing mask may be used on the original image. The weighting mask may include weighting coefficients for various pixels in the original image. For example, for a pixel close to a metal artifact, a relatively large weighting coefficient (e.g., close to 1) may be chosen; for a pixel distant to a metal artifact, a relatively small weighting coefficient (e.g., transitioning from 1 to 0) may be chosen), to composite the machine learning output image data and the tomographic image data (paragraph 87, the compensation may be performed by fusing a high frequency part of the original image, and/or a high frequency part of the corrected image, and/or a low frequency part of the corrected image. In some embodiments, a weighing mask may be used on the original image).
At the time of the effective filing date of the claimed invention, it would have been obvious to a person of ordinary skill in the art to composite images using a weight map. The motivation for doing so would have been to improve the appearance of a combined image by accurately blending pixels from both images.
Favazza does not disclose expressly that an image quality of the region less affected by the metal artifacts is degraded.
Zhang discloses an image quality of a region of the machine learning output image that is less affected by the metal artifacts than a corresponding region of the tomographic image that includes the metal artifacts is degraded relative to an image quality of a corresponding region in the tomographic image less affected by the metal artifacts, and an image quality of a counterpart region of the composite image corresponding to the region of the machine learning output image and the corresponding region in the tomographic image is preserved relative to an image quality of the corresponding region in the tomographic image which is composited with the machine learning output image (page 1376, Due to the excellent image quality of the CNN image, a good CNN prior is generated, followed by a CNN-MAR image with superior image quality. It is clearly seen from Fig. 7(h) that the artifacts are almost removed completely and the tissue features in the vicinity of metals are faithfully preserved).
The Applicant’s Specification explains that, in the process taught by Zhang, degradation occurs in a region less affected by the metal artifacts (paragraph 5, However, in the above literature, although the metal artifacts are reduced, a degradation in image quality may be caused in a region less affected by the metal artifacts, for example, in a region away from the metal).
At the time of the effective filing date of the claimed invention, it would have been obvious to a person of ordinary skill in the art to utilize a process by which an image quality of the region less affected by the metal artifacts is degraded. The motivation for doing so would have been to improve the image quality of the region affected by metal artifacts by better removing the metal artifacts. Therefore, it would have been obvious to combine Wang and Zhang with Favazza to obtain the invention as specified in claim 9.
Referring to claim 21, Wang discloses wherein each weight coefficient W is a real number between 0 and 1 (paragraph 87, In some embodiments, a weighing mask may be used on the original image. The weighting mask may include weighting coefficients for various pixels in the original image. For example, for a pixel close to a metal artifact, a relatively large weighting coefficient (e.g., close to 1) may be chosen; for a pixel distant to a metal artifact, a relatively small weighting coefficient (e.g., transitioning from 1 to 0) may be chosen), and
the composite image is generated by compositing the machine learning output image data and the tomographic image data, by applying per-pixel weighting by W and (1-W) and combining the weighted values (paragraph 87, the compensation may be performed by fusing a high frequency part of the original image, and/or a high frequency part of the corrected image, and/or a low frequency part of the corrected image. In some embodiments, a weighing mask may be used on the original image).
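As a reading aid for this limitation (and the parallel recitation in claim 24), a minimal sketch of the recited per-pixel weighting and combination, with illustrative names only, is:

import numpy as np

def composite_images(ml_output_image, tomo_image, weight_map):
    # Each weight coefficient W is treated as a real number in [0, 1].
    w = np.clip(weight_map, 0.0, 1.0)
    # Weight the machine learning output image by W and the tomographic image by
    # (1 - W), then combine the weighted values pixel by pixel.
    return w * ml_output_image + (1.0 - w) * tomo_image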
Referring to claim 22, Wang discloses wherein the weight map is acquired based on a beam hardening correction image that is obtained by applying a beam hardening correction to the tomographic image (paragraph 140, The streak artifact may result from beam hardening).
Referring to claim 23, Wang discloses wherein the weight map reflects a distribution of absolute values of differences between the tomographic image and a beam hardening correction image that is obtained by applying a beam hardening correction to the tomographic image (paragraph 87, a weighing mask may be used on the original image. The weighting mask may include weighting coefficients for various pixels in the original image. For example, for a pixel close to a metal artifact, a relatively large weighting coefficient (e.g., close to 1) may be chosen; for a pixel distant to a metal artifact, a relatively small weighting coefficient (e.g., transitioning from 1 to 0) may be chosen).
Referring to claim 24, Wang discloses wherein each weight coefficient W is a real number between 0 and 1, and for each pixel of the composite image (paragraph 87, In some embodiments, a weighing mask may be used on the original image. The weighting mask may include weighting coefficients for various pixels in the original image. For example, for a pixel close to a metal artifact, a relatively large weighting coefficient (e.g., close to 1) may be chosen; for a pixel distant to a metal artifact, a relatively small weighting coefficient (e.g., transitioning from 1 to 0) may be chosen),
(a) a pixel value of the machine learning output image is weighted by W,
(b) a pixel value of the tomographic image is weighted by (1-W), and
(c) the weighted pixel value of the machine learning output image and the weighted pixel value of the tomographic image are combined to obtain a pixel value of the composite image (paragraph 87, the compensation may be performed by fusing a high frequency part of the original image, and/or a high frequency part of the corrected image, and/or a low frequency part of the corrected image. In some embodiments, a weighing mask may be used on the original image).
Referring to claim 25, Wang discloses wherein the weight map reflects a distribution of absolute values of differences between the tomographic image and the beam hardening correction image (paragraph 87, a weighing mask may be used on the original image. The weighting mask may include weighting coefficients for various pixels in the original image. For example, for a pixel close to a metal artifact, a relatively large weighting coefficient (e.g., close to 1) may be chosen; for a pixel distant to a metal artifact, a relatively small weighting coefficient (e.g., transitioning from 1 to 0) may be chosen).
Allowable Subject Matter
Claims 8 and 20 are objected to as being dependent upon a rejected base claim, but would be allowable if rewritten in independent form including all of the limitations of the base claim and any intervening claims.
Conclusion
THIS ACTION IS MADE FINAL. Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to PETER K HUNTSINGER whose telephone number is (571)272-7435. The examiner can normally be reached Monday - Friday 8:30 - 5:00.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Benny Q Tieu can be reached at 571-272-7490. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/PETER K HUNTSINGER/Primary Examiner, Art Unit 2682