DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Response to Amendments
The amendments to claims 6 and 14 have been accepted and entered.
New claims 21 and 22 have been entered.
Claims 1-2, 5-10, 13-18, and 20-22 are pending in this application.
Response to Arguments
Applicant’s Remarks, filed 01/23/2026, with regard to the 35 U.S.C. § 103 rejection have been fully considered but are not persuasive.
Regarding claim 1, applicant argues:
Ghani's training pairs do not constitute images of any kind, as they comprise sinograms. This distinction is more important than it might appear, as Ghani's contribution lies in performing its data processing in sinogram space, rather than in image space. As stated on page 181, right column, "... this paper aims to reduce metal artifacts in CT images by applying deep learning (DL) directly in the projection-domain, prior to image formation." Ghani's technique is otherwise largely conventional. More fundamentally, however, the images corresponding to the 'reference completed sinograms' are not "free of both the foreign material and artefacts due to the foreign material" as claimed. Those sinograms are formed by forward projection of a simulated metal-free image into sinogram space, followed by deletion of data corresponding to "metallic object areas". This process means that the "corresponding sinograms without the metal objects" are incomplete, as all image data in the locations of the simulated metal are deleted. The result is not, therefore, an image that predicts an original image but with metal reduced, but rather an original image in which metal and subject data have been merely excised.
However, the examiner maintains that Souza in view of Ghani teaches one or more images free of both the foreign material and of artefacts. Ghani’s sinograms are explicitly used to generate artifact-free images, as shown in Section II (A). Additionally, Ghani specifically refers to the sinograms used in the embodiment of the reference as sinogram images in Section II (B), and further states that the sinograms used during training are generated from images in Section IV (A)(1). As such, Ghani’s teaching of sinograms in the context of imaging, in combination with Souza’s teaching of images, appropriately covers the scope of the predicted images as addressed in the 103 rejection below.
Applicant additionally argues that, since subject data is lost during the metal-contaminated data deletion process as taught by Ghani, Ghani fails to teach the removal of artefacts by the ML model. However, the examiner maintains that Ghani teaches this removal. Ghani teaches a process of utilizing the Deep-MAR framework to remove all the metal from a sinogram in Section II (A). This section explicitly recites the removal of metal-contaminated data, which inherently includes artefacts. The teaching of artefact removal is further shown by the very fact that the purpose of the experiment is to reduce artefacts, as noted numerous times in Section IV. Subject data loss during the metal-contaminated data deletion process does not imply that foreign objects, and the artefacts created by those foreign objects, are not still removed during this process.
Furthermore, applicant argues that "Ghani's 'reference completed sinograms' are formed by forward projection into sinogram space of a simulated metal-free image, not of a real image as claimed". However, the examiner maintains that Souza in view of Ghani teaches the predicted images of claim 1, which are generated from "at least the first simulated images", which in turn are generated "from one or more real or simulated images of the foreign material, and from one or more real images of one or more subjects that are free of the foreign material and of artefacts due to the foreign material". As shown below in the 103 rejection of claim 1, Souza teaches inputting predicted images that are based on a simulated image (which is in turn based on real images). Therefore, it is not necessary that Ghani teach the exact claim language, as Souza in view of Ghani teaches the pipeline as claimed. It should further be noted that Ghani additionally teaches generating the predicted images using a real-data training set (see Section IV (A)(2) and Section III (B)).
Lastly, applicant’s remarks recite that “Ghani does not input into the GAN images free of both foreign material and artefacts, but rather sinograms from which foreign material and subject data have been excised. No mention is made in the claims of inputting images free of both foreign material and artefacts due to the foreign material”. As previously mentioned, all metal-affected data is removed from the sinogram images, as shown in Section I. The examiner maintains that the scope of “images free of both foreign material and artefacts” can be interpreted broadly enough to read on sinogram images from which all metal-affected data (which includes both the metal itself and the artefacts caused by the metal) has been removed.
See the updated rejection of independent claims 1 and 9 in the 103 rejection below.
Regarding new claims 21 and 22, see the 103 Rejection of the claims below and the inclusion of Al-Hashmy et al. (U.S. Publication No. 20220018811 A1) which is used to teach the subject matter of the aforementioned claims.
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 1-2, 5-10, 13-18, and 20 are rejected under 35 U.S.C. 103 as being unpatentable over Souza et al. (U.S. Publication No. 2022/0240879 A1), hereinafter Souza, in view of Ghani et al. (“Fast Enhanced CT Metal Artifact Reduction Using Data Domain Deep Learning”), hereinafter Ghani.
Regarding claim 1, Souza teaches a method for training a machine learning model for reducing or removing at least one foreign material or artefacts due to the foreign material from an image (Souza, see para. [0014]; FIG. 5), the method comprising:
generating one or more first simulated images from one or more real or simulated images of the foreign material (Souza “generating a first simulated image based on the image and the CAD model” para. [0014]; the CAD model represents the real image of the foreign material), and from one or more real images of one or more subjects that are free of the foreign material and of artefacts due to the foreign material (Souza teaches that real images “with and without surgical implants or surgical instruments may be used” para. [0048]), such that the generated simulated images include the foreign material and artefacts due to the foreign material (Souza “the first simulated image depicting the surgical implant and the anatomical portion of the subject; modifying the simulated image to include simulated artifacts from metal, beam hardening, and scatter, to yield a second simulated image corresponding to the first simulated image” para. [0014]);
generating one or more predicted images employing at least the first simulated images with a machine learning network that implements a machine learning model (Souza “providing… the first simulated image to the neural network as an example output” para. [0014]); and
training or updating the machine learning model with the machine learning network by reducing or minimizing a difference between the one or more predicted images and ground truth data comprising one or more real or simulated images (Souza “the FBP reconstruction of the up-sampled sinogram as well as the further-processed versions of the simulated images may be provided to the neural network as example inputs, and the FBP reconstructions of normal-dose sinograms as well as the original simulated images (showing the profile of the implant, without any simulated artifacts) are provided to the neural network as example “ground truth” outputs” para. [0070]);
the ground truth data comprises one or more real images of one or more subjects that are free of the foreign material and of artefacts due to the foreign material (Souza “The output may then be compared to a “ground truth” image depicting a desired image output” … “cadaver images of the lumbar and thoracic spine with and without surgical implants or surgical instruments may be used” para. [0048]), and
the machine learning model is configured to reduce or remove artefacts due to a foreign material from an image (Souza, “remove, from the initial reconstruction and using a second neural network, one or more artifacts in the initial reconstruction to yield a final output volume” para. [0022]).
Souza further discloses the cited “discriminator network” as “a neural network (which may be, for example, a convolutional neural network, a recurrent neural network, or a generative adversarial neural network)” para. [0068], and teaches that the network is “configured to discriminate between the one or more predicted images and one or more real images” in para. [0068] through the following citation: “a neural network (which may be, for example, a convolutional neural network, a recurrent neural network, or a generative adversarial neural network) may be provided a plurality of corresponding normal-dose sinograms as training labels or ‘ground truth’ data”. The act of taking the normal-dose sinograms, or “real images”, as ground truth data and generating up-sampled sinograms, or “predicted images”, as output using a GAN implies the use of a discriminator that discriminates between the real and generated/predicted images.
However, Souza fails to teach wherein the one or more predicted images are free of both the foreign material and artefacts due to the foreign material, the machine learning model is configured to reduce or remove a foreign material from an image, and the method comprises optimizing the trained machine learning model with a discriminator network configured to discriminate between the one or more predicted images and one or more real images free of both the foreign material and of artefacts due to the foreign material, including the machine learning network fine-tuning itself in response to the output of the discriminator network and improving the trained machine learning model (emphasis added).
However, Ghani teaches wherein the one or more predicted images are free of both the foreign material and artefacts due to the foreign material (Ghani “The setup for simulated training dataset generation is illustrated in Figure 4. A metal-free scene is created and then multiple metallic object areas are defined with corresponding sinogram data deletions to create multiple training pairs. These training pairs consist of sinograms with data deletions corresponding to metal object locations (incomplete sinograms) and the corresponding sinograms without the metal objects (reference completed sinograms)” section III. A. “Simulated Training Data Generation”. Here, Ghani’s sinograms are explicitly used to generate artifact-free images, as shown in Section II (A). Additionally, Ghani specifically refers to the sinograms used in the embodiment of the reference as sinogram images in Section II (B), and further states that the sinograms used during training are generated from images in Section IV (A)(1). As such, Ghani’s teaching of sinograms in the context of imaging, in combination with Souza’s teaching of images, appropriately covers the scope of the predicted images),
the machine learning model is configured to reduce or remove a foreign material from an image (Ghani teaches “a framework we term Deep-MAR to reduce metal artifacts in CT images using adversarial deep learning to perform completion of missing projection data in the sinogram domain (i.e. sinogram completion). The Deep-MAR framework is motivated by and focused on reducing the effects of metal in checkpoint security imagery” in Section II, wherein the process involves “identification and suppression of metal-contaminated projection data” and “the metal-contaminated projection data is then deleted from the original sinogram” as shown in Section II (A)), and
the method comprises optimizing the trained machine learning model with a discriminator network configured to discriminate between the one or more predicted images and one or more real images free of both the foreign material and of artefacts due to the foreign material (Ghani teaches that both the simulated training data and the real data (taught in sections III. A & B) are input into the GAN network, which include images free of both foreign material and artefacts; Ghani discloses that the “discriminator network classifies the full sinogram as real or fake” and “The discriminator network learns to distinguish between ground truth and generator completed sinogram” in section II. B. Learning-Based Sinogram Projection Completion; here, the ground truth is made up of “incomplete-data and true complete-data sinograms”, wherein the “real images” are the true complete-data sinograms and the “generator completed sinograms” are interpreted as equivalent to the predicted images), including the machine learning network fine-tuning itself in response to the output of the discriminator network and improving the trained machine learning model (Ghani teaches that the “trained network [] is later fine-tuned using a smaller amount of real data. This strategy allows us to achieve good performance on real data even with a small real dataset” section III. C. Real Data Transfer Learning. See also the loss function as taught in Section II (B), wherein “the presence of this loss forces the D network to improve its discrimination ability”).
Souza and Ghani are both considered to be analogous to the claimed invention because they are in the same field of reducing noise from objects in medical scans/images. Therefore, it would have been obvious to someone of ordinary skill in the art before the effective filing date of the claimed invention to have modified the teachings of Souza to incorporate the teachings of Ghani and include “wherein the one or more predicted images are free of both the foreign material and artefacts due to the foreign material, the machine learning model is configured to reduce or remove a foreign material from an image, and the method comprises optimizing the trained machine learning model with a discriminator network configured to discriminate between the one or more predicted images and one or more real images free of both the foreign material and of artefacts due to the foreign material, including the machine learning network fine-tuning itself in response to the output of the discriminator network and improving the trained machine learning model”. The motivation for doing so would have been to “improve CT image reconstruction and analysis”, as suggested by Ghani in section I. A. Contributions, and “to improve [the model’s] discrimination ability and [] force[] the generator network G to become better and better at completing sinograms”, as suggested in section II. B. Learning-Based Sinogram Projection Completion. Therefore, it would have been obvious to combine Souza with Ghani to obtain the invention specified in claim 1.
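For illustration only (this is not code from Souza or Ghani, and the function name, array dimensions, and metal-trace location below are hypothetical), the training-pair construction Ghani describes in Section III (A) can be sketched as follows: a metal-free "reference completed sinogram" is paired with a copy in which the entries along a simulated metal trace have been deleted, yielding the "incomplete sinogram":

```python
import numpy as np

rng = np.random.default_rng(0)

def make_training_pair(n_angles=8, n_bins=16):
    # "Reference completed sinogram": metal-free projection data
    # (random values stand in for a forward-projected scene).
    reference = rng.random((n_angles, n_bins))
    # Hypothetical metal trace: a band of detector bins affected at
    # every projection angle.
    mask = np.zeros_like(reference, dtype=bool)
    mask[:, 6:9] = True
    # "Incomplete sinogram": metal-contaminated entries deleted
    # (zeroed), with the mask recording which entries were removed.
    incomplete = np.where(mask, 0.0, reference)
    return incomplete, reference, mask

incomplete, reference, mask = make_training_pair()
assert np.all(incomplete[mask] == 0.0)                 # deleted data
assert np.all(incomplete[~mask] == reference[~mask])   # rest untouched
```

In the adversarial setup that Ghani describes, such (incomplete, reference) pairs would then train a generator to complete the deleted region while a discriminator classifies the completed sinogram as real or fake.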
Regarding claim 2, Souza and Ghani teach the method of claim 1,
wherein the one or more real or simulated images of the foreign material include artefacts due to the foreign material (Souza “modifying the simulated image to include simulated artifacts from metal, beam hardening, and scatter, to yield a second simulated image” para. [0014]; here, the simulated images have foreign material).
Regarding claim 5, Souza and Ghani teach the method of claim 1, further comprising:
a) generating one or more second simulated images from the one or more real or simulated images of the foreign material, and from the one or more real images of the one or more subjects that are free of the foreign material and of artefacts due to the foreign material, such that the generated second simulated images include the foreign material (Souza “modifying the simulated image to include simulated artifacts from metal, beam hardening, and scatter, to yield a second simulated image” para. [0014]);
wherein the one or more predicted images are free of artefacts due to the foreign material (Souza teaches that “the output image slice is a corrected image slice in which one or more artifacts (whether resulting from the presence of metal in the image, beam hardening, scatter, and/or any other cause) have been reduced or removed” para. [0082]; “While the first simulated image comprises a “clean” representation of the implant or instrument (e.g., without artifacts)” para. [0092]), the ground truth data comprises the second simulated images (Souza teaches that “the output may then be compared to a “ground truth” image depicting a desired image output” wherein the ground truth data may be “cadaver images augmented with simulated surgical implants or surgical instruments (e.g., based on a computer-aided design (CAD) model of a surgical implant or surgical instrument, respectively) may be further augmented to include simulated artifacts due to metal (e.g., of the implant or instrument), sparse sampling, beam hardening, and scattering” para. [0048]. In para. [0014] the second simulated images are those defined as modified to include the simulated artifacts. Therefore, it is inferred that the described ground truth data may be the second simulated images.), and the machine learning model is configured to reduce or remove artefacts due to the foreign material from an image (Souza “remove, from the initial reconstruction and using a second neural network, one or more artifacts in the initial reconstruction to yield a final output volume” para. [0022]); and/or
b) generating one or more second simulated images from the one or more real or simulated images of the foreign material, and from the one or more real images of the one or more subjects that are free of the foreign material and of artefacts due to the foreign material, such that the generated second simulated images include artefacts due to the foreign material (Souza “modifying the simulated image to include simulated artifacts from metal, beam hardening, and scatter, to yield a second simulated image” para. [0014]);
wherein the one or more predicted images are free of the foreign material (Souza teaches that “the output image slice is a corrected image slice in which one or more artifacts (whether resulting from the presence of metal in the image, beam hardening, scatter, and/or any other cause) have been reduced or removed” para. [0082]), the ground truth data comprises the second simulated images (Souza teaches that “the output may then be compared to a “ground truth” image depicting a desired image output” wherein the ground truth data may be “cadaver images augmented with simulated surgical implants or surgical instruments (e.g., based on a computer-aided design (CAD) model of a surgical implant or surgical instrument, respectively) may be further augmented to include simulated artifacts due to metal (e.g., of the implant or instrument), sparse sampling, beam hardening, and scattering” para. [0048]. In para. [0014] the second simulated images are those defined as modified to include the simulated artifacts. Therefore, it is inferred that the described ground truth data may be the second simulated images.), and the machine learning model is configured to reduce or remove the foreign material from an image (Souza “remove, from the initial reconstruction and using a second neural network, one or more artifacts in the initial reconstruction to yield a final output volume” para. [0022]).
Regarding claim 6, Souza and Ghani teach the method as claimed in claim 1,
wherein the foreign material is titanium alloy, cobalt- chromium alloy, steel, stainless steel, dental amalgam, silver or other metal (Souza teaches “accurately training a neural network to remove artifacts (including sparse sampling artifacts, artifacts from metal, beam hardening, and scattering)” para. [0051]).
Regarding claim 7, Souza and Ghani teach the method as claimed in claim 1,
wherein the machine learning model is configured to reduce or remove a plurality of foreign materials and/or artefacts due to the foreign materials from an image (Souza teaches “accurately training a neural network to remove artifacts (including sparse sampling artifacts, artifacts from metal, beam hardening, and scattering)” para. [0051]).
Regarding claim 8, Souza and Ghani teach the method as claimed in claim 1, further comprising annotating or labelling the one or more first simulated images (Souza discloses that “the input training data for the first-stage (sinogram domain) neural network consists of sparsely-sampled sinograms with an 87.5% reduction in dose. Normal dose sinograms are used as training labels”… “Metal artifact reduced versions of 3D reconstructions with normal dose and augmented images with simulated implants or instruments may be used as training labels” para. [0045]; the fact that these images are used for training labels implies they are labelled accordingly).
Regarding claim 9, Souza teaches a system for training a machine learning model for reducing or removing at least one foreign material or artefacts due to the foreign material from an image, the system comprising:
an image simulator (Souza teaches “the representation of the implant or instrument may be an actual image obtained at least in part using an imaging device of the same imaging modality used to obtain the image received in the step 604” para. [0090]) configured to generate one or more first simulated images from one or more real or simulated images of the foreign material (Souza “generating a first simulated image based on the image and the CAD model” para. [0014]; the CAD model represents the real image of the foreign material), and from one or more real images of one or more subjects that are free of the foreign material and of artefacts due to the foreign material (Souza teaches that real images “with and without surgical implants or surgical instruments may be used” para. [0048]), such that the generated simulated images include the foreign material and artefacts due to the foreign material (Souza “the first simulated image depicting the surgical implant and the anatomical portion of the subject; modifying the simulated image to include simulated artifacts from metal, beam hardening, and scatter, to yield a second simulated image corresponding to the first simulated image” para. [0014]);
a machine learning network configured to generate one or more predicted images employing at least the first simulated images, the machine learning network implementing a machine learning model (“providing… the first simulated image to the neural network as an example output” para. [0014]);
wherein the machine learning network is configured to reduce or minimize a difference between the one or more predicted images and ground truth data comprising one or more real or simulated images (Souza “the FBP reconstruction of the up-sampled sinogram as well as the further-processed versions of the simulated images may be provided to the neural network as example inputs, and the FBP reconstructions of normal-dose sinograms as well as the original simulated images (showing the profile of the implant, without any simulated artifacts) are provided to the neural network as example “ground truth” outputs” para. [0070]);
the ground truth data comprises one or more real images of one or more subjects that are free of the foreign material and of artefacts due to the foreign material (Souza “The output may then be compared to a “ground truth” image depicting a desired image output” … “cadaver images of the lumbar and thoracic spine with and without surgical implants or surgical instruments may be used” para. [0048]), such that
the machine learning model is configured to reduce or remove artefacts due to a foreign material from an image (Souza “remove, from the initial reconstruction and using a second neural network, one or more artifacts in the initial reconstruction to yield a final output volume” para. [0022]).
Souza further discloses the cited “discriminator network” as “a neural network (which may be, for example, a convolutional neural network, a recurrent neural network, or a generative adversarial neural network)” para. [0068], and teaches that the network is “configured to discriminate between the one or more predicted images and one or more real images” in para. [0068] through the following citation: “a neural network (which may be, for example, a convolutional neural network, a recurrent neural network, or a generative adversarial neural network) may be provided a plurality of corresponding normal-dose sinograms as training labels or ‘ground truth’ data”. The act of taking the normal-dose sinograms, or “real images”, as ground truth data and generating up-sampled sinograms, or “predicted images”, as output using a GAN implies the use of a discriminator that discriminates between the real and generated/predicted images.
However, Souza fails to teach the one or more predicted images are free of both the foreign material and artefacts due to the foreign material, the machine learning model is configured to reduce or remove a foreign material from an image, and the system is configured to optimize the trained machine learning model with a discriminator network configured to discriminate between the one or more predicted images and one or more real images free of both the foreign material and of artefacts due to the foreign material, wherein the machine learning network is configured to fine-tune itself in response to the output of the discriminator network and improving the trained machine learning model (emphasis added).
However, Ghani teaches wherein the one or more predicted images are free of both the foreign material and artefacts due to the foreign material (Ghani “The setup for simulated training dataset generation is illustrated in Figure 4. A metal-free scene is created and then multiple metallic object areas are defined with corresponding sinogram data deletions to create multiple training pairs. These training pairs consist of sinograms with data deletions corresponding to metal object locations (incomplete sinograms) and the corresponding sinograms without the metal objects (reference completed sinograms)” section III. A. “Simulated Training Data Generation”. Here, Ghani’s sinograms are explicitly used to generate artifact-free images, as shown in Section II (A). Additionally, Ghani specifically refers to the sinograms used in the embodiment of the reference as sinogram images in Section II (B), and further states that the sinograms used during training are generated from images in Section IV (A)(1). As such, Ghani’s teaching of sinograms in the context of imaging, in combination with Souza’s teaching of images, appropriately covers the scope of the predicted images),
the machine learning model is configured to reduce or remove a foreign material from an image (Ghani teaches “a framework we term Deep-MAR to reduce metal artifacts in CT images using adversarial deep learning to perform completion of missing projection data in the sinogram domain (i.e. sinogram completion). The Deep-MAR framework is motivated by and focused on reducing the effects of metal in checkpoint security imagery” in Section II, wherein the process involves “identification and suppression of metal-contaminated projection data” and “the metal-contaminated projection data is then deleted from the original sinogram” as shown in Section II (A)), and
the system is configured to optimize the trained machine learning model with a discriminator network configured to discriminate between the one or more predicted images and one or more real images free of both the foreign material and of artefacts due to the foreign material (Ghani teaches that both the simulated training data and the real data (taught in sections III. A & B) are input into the GAN network, which include images free of both foreign material and artefacts; Ghani discloses that the “discriminator network classifies the full sinogram as real or fake” and “The discriminator network learns to distinguish between ground truth and generator completed sinogram” in section II. B. Learning-Based Sinogram Projection Completion; here, the ground truth is made up of “incomplete-data and true complete-data sinograms”, wherein the “real images” are the true complete-data sinograms and the “generator completed sinograms” are interpreted as equivalent to the predicted images), wherein the machine learning network is configured to fine-tune itself in response to the output of the discriminator network and improve the trained machine learning model (Ghani teaches that the “trained network [] is later fine-tuned using a smaller amount of real data. This strategy allows us to achieve good performance on real data even with a small real dataset” section III. C. Real Data Transfer Learning. See also the loss function as taught in Section II (B), wherein “the presence of this loss forces the D network to improve its discrimination ability”).
Souza and Ghani are both considered to be analogous to the claimed invention because they are in the same field of reducing noise from objects in medical scans/images. Therefore, it would have been obvious to someone of ordinary skill in the art before the effective filing date of the claimed invention to have modified the teachings of Souza to incorporate the teachings of Ghani and include “the one or more predicted images are free of both the foreign material and artefacts due to the foreign material, the machine learning model is configured to reduce or remove a foreign material from an image, and the system is configured to optimize the trained machine learning model with a discriminator network configured to discriminate between the one or more predicted images and one or more real images free of both the foreign material and of artefacts due to the foreign material, wherein the machine learning network is configured to fine-tune itself in response to the output of the discriminator network and improving the trained machine learning model”. The motivation for doing so would have been to “improve CT image reconstruction and analysis”, as suggested by Ghani in section I. A. Contributions, and “to improve [the model’s] discrimination ability and [] force[] the generator network G to become better and better at completing sinograms”, as suggested in section II. B. Learning-Based Sinogram Projection Completion. Therefore, it would have been obvious to combine Souza with Ghani to obtain the invention specified in claim 9.
Regarding claim 10, Souza and Ghani teach the system of claim 9,
wherein the one or more real or simulated images of the foreign material include artefacts due to the foreign material (Souza “modifying the simulated image to include simulated artifacts from metal, beam hardening, and scatter, to yield a second simulated image” Souza para. [0014]; here, the simulated images have foreign material).
Regarding claim 13, Souza and Ghani teach the system of claim 9, further configured:
a) to generate one or more second simulated images from the one or more real or simulated images of the foreign material, and from the one or more real images of the one or more subjects that are free of the foreign material and of artefacts due to the foreign material, such that the generated second simulated images include the foreign material (Souza “modifying the simulated image to include simulated artifacts from metal, beam hardening, and scatter, to yield a second simulated image” Souza para. [0014]);
wherein the one or more predicted images are free of artefacts due to the foreign material (Souza teaches that “the output image slice is a corrected image slice in which one or more artifacts (whether resulting from the presence of metal in the image, beam hardening, scatter, and/or any other cause) have been reduced or removed” para. [0082]; “While the first simulated image comprises a “clean” representation of the implant or instrument (e.g., without artifacts)” para. [0092]), the ground truth data comprises the second simulated images (Souza teaches that “the output may then be compared to a “ground truth” image depicting a desired image output” wherein the ground truth data may be “cadaver images augmented with simulated surgical implants or surgical instruments (e.g., based on a computer-aided design (CAD) model of a surgical implant or surgical instrument, respectively) may be further augmented to include simulated artifacts due to metal (e.g., of the implant or instrument), sparse sampling, beam hardening, and scattering” para. [0048]. In para. [0014] the second simulated images are those defined as modified to include the simulated artifacts. Therefore, it is inferred that the described ground truth data may be the second simulated images.), and the machine learning model is configured to reduce or remove artefacts due to the foreign material from an image (Souza “remove, from the initial reconstruction and using a second neural network, one or more artifacts in the initial reconstruction to yield a final output volume” para. [0022]); and/or
b) to generate one or more second simulated images from the one or more real or simulated images of the foreign material, and from the one or more real images of the one or more subjects that are free of the foreign material and of artefacts due to the foreign material, such that the generated second simulated images include artefacts due to the foreign material (Souza “modifying the simulated image to include simulated artifacts from metal, beam hardening, and scatter, to yield a second simulated image” para. [0014]);
wherein the one or more predicted images are free of the foreign material (Souza teaches that “the output image slice is a corrected image slice in which one or more artifacts (whether resulting from the presence of metal in the image, beam hardening, scatter, and/or any other cause) have been reduced or removed” para. [0082]), the ground truth data comprises the second simulated images (Souza teaches that “the output may then be compared to a “ground truth” image depicting a desired image output” wherein the ground truth data may be “cadaver images augmented with simulated surgical implants or surgical instruments (e.g., based on a computer-aided design (CAD) model of a surgical implant or surgical instrument, respectively) may be further augmented to include simulated artifacts due to metal (e.g., of the implant or instrument), sparse sampling, beam hardening, and scattering” para. [0048]. In para. [0014] the second simulated images are those defined as modified to include the simulated artifacts. Therefore, it is inferred that the described ground truth data may be the second simulated images.), and the machine learning model is configured to reduce or remove the foreign material from an image (Souza “remove, from the initial reconstruction and using a second neural network, one or more artifacts in the initial reconstruction to yield a final output volume” para. [0022]).
Regarding claim 14, Souza and Ghani teach the system as claimed in claim 9,
wherein the foreign material is titanium alloy, cobalt-chromium alloy, steel, stainless steel, dental amalgam, silver or other metal (Souza teaches “accurately training a neural network to remove artifacts (including sparse sampling artifacts, artifacts from metal, beam hardening, and scattering)” para. [0051]).
Regarding claim 15, Souza and Ghani teach the system as claimed in claim 9,
wherein the machine learning model is configured to reduce or remove a plurality of foreign materials and/or artefacts due to the foreign materials from an image (Souza teaches “accurately training a neural network to remove artifacts (including sparse sampling artifacts, artifacts from metal, beam hardening, and scattering)” para. [0051]).
Regarding claim 16, Souza and Ghani teach the system as claimed in claim 9, further comprising annotating or labelling the one or more first simulated images (Souza discloses that “the input training data for the first-stage (sinogram domain) neural network consists of sparsely-sampled sinograms with an 87.5% reduction in dose. Normal dose sinograms are used as training labels”… “Metal artifact reduced versions of 3D reconstructions with normal dose and augmented images with simulated implants or instruments may be used as training labels” para. [0045]; the fact that these images are used for training labels implies they are labelled accordingly).
Regarding claim 17, Souza and Ghani teach a method for reducing or removing at least one foreign material or artefacts due to the foreign material from an image, the method comprising:
reducing or removing from an image of a subject at least one foreign material or artefact due to the foreign material, or both the at least one foreign material and the artefact due to the foreign material, using a machine learning model trained according to the method of claim 1 (Souza “remove, from the initial reconstruction and using a second neural network, one or more artifacts in the initial reconstruction to yield a final output volume” para. [0022]; “accurately training a neural network to remove artifacts (including sparse sampling artifacts, artifacts from metal, beam hardening, and scattering)” para. [0051]). Please see the discussion of claim 1 above for a discussion of those limitations.
Regarding claim 18, Souza and Ghani teach a system for reducing or removing at least one foreign material or artefacts due to the foreign material from an image (Souza, “system” see para. [0022]; FIG. 1), the system being configured to reduce or remove from an image of a subject at least one foreign material or artefact due to the foreign material, or both the at least one foreign material and the artefact due to the foreign material, using a machine learning model trained according to the method of claim 1 (Souza “remove, from the initial reconstruction and using a second neural network, one or more artifacts in the initial reconstruction to yield a final output volume” para. [0022]). Please see the discussion of claim 1 above for a discussion of those limitations.
Regarding claim 20, Souza and Ghani teach a non-transitory computer-readable medium, comprising a computer program comprising program code configured, when executed by one or more computing devices, to implement the method of claim 1 (Souza teaches “the functions may be stored as one or more instructions or code on a computer-readable medium and executed by a hardware-based processing unit” … “Computer-readable media may include non-transitory computer-readable media, which corresponds to a tangible medium such as data storage media (e.g., RAM, ROM, EEPROM, flash memory, or any other medium that can be used to store desired program code in the form of instructions or data structures and that can be accessed by a computer)” para. [0040]).
Claims 21 and 22 are rejected under 35 U.S.C. 103 as being unpatentable over Souza et al. (U.S. Publication No. 2022/0240879 A1), hereinafter Souza, in view of Ghani et al. (“Fast Enhanced CT Metal Artifact Reduction Using Data Domain Deep Learning”), hereinafter Ghani, and Al-Hashmy et al. (U.S. Publication No. 2022/0018811 A1), hereinafter Al-Hashmy.
Regarding claim 21, Souza and Ghani teach the method as claimed in claim 1.
Souza and Ghani fail to teach wherein the foreign material is a ceramic, glass, polymeric, a composite, a glass-ceramic, or a biomaterial.
However, Al-Hashmy teaches wherein the foreign material is a ceramic, glass, polymeric, a composite, a glass-ceramic, or a biomaterial (Al-Hashmy teaches a method of using ML techniques to denoise ultrasound scans wherein “the solution can analyze the UT scan images and detect or predict aberrations in the areas under observation, whether it be in metallic or nonmetallic assets, including, for example, assets containing composite materials, such as, for example, glass fiber-based composites, epoxy resin-based composites, or fiberglass-reinforced plastic (FRP) composites” in para. [0038]).
Souza, Ghani, and Al-Hashmy are all considered to be analogous to the claimed invention because they are in the same field of reducing noise from objects in medical scans/images. Therefore, it would have been obvious to someone of ordinary skill in the art before the effective filing date of the claimed invention to have modified the teachings of Souza (as modified by Ghani) to incorporate the teachings of Al-Hashmy and include “wherein the foreign material is a ceramic, glass, polymeric, a composite, a glass-ceramic, or a biomaterial”. The motivation for doing so would have been to “satisf[y] an urgent and unmet need for a mechanism that can effectively, efficiently and accurately predict damage or failure in assets, regardless of whether the assets are made of a metallic or nonmetallic material, such as, for example, a composite material”, as suggested by Al-Hashmy in para. [0038]. Therefore, it would have been obvious to combine Souza and Ghani with Al-Hashmy to obtain the invention specified in claim 21.
Regarding claim 22, Souza and Ghani teach the system as claimed in claim 9.
Souza and Ghani fail to teach wherein the foreign material is a ceramic, glass, polymeric, a composite, a glass-ceramic, or a biomaterial.
However, Al-Hashmy teaches wherein the foreign material is a ceramic, glass, polymeric, a composite, a glass-ceramic, or a biomaterial (Al-Hashmy teaches a method of using ML techniques to denoise ultrasound scans wherein “the solution can analyze the UT scan images and detect or predict aberrations in the areas under observation, whether it be in metallic or nonmetallic assets, including, for example, assets containing composite materials, such as, for example, glass fiber-based composites, epoxy resin-based composites, or fiberglass-reinforced plastic (FRP) composites” in para. [0038]).
Souza, Ghani, and Al-Hashmy are all considered to be analogous to the claimed invention because they are in the same field of reducing noise from objects in medical scans/images. Therefore, it would have been obvious to someone of ordinary skill in the art before the effective filing date of the claimed invention to have modified the teachings of Souza (as modified by Ghani) to incorporate the teachings of Al-Hashmy and include “wherein the foreign material is a ceramic, glass, polymeric, a composite, a glass-ceramic, or a biomaterial”. The motivation for doing so would have been to “satisf[y] an urgent and unmet need for a mechanism that can effectively, efficiently and accurately predict damage or failure in assets, regardless of whether the assets are made of a metallic or nonmetallic material, such as, for example, a composite material”, as suggested by Al-Hashmy in para. [0038]. Therefore, it would have been obvious to combine Souza and Ghani with Al-Hashmy to obtain the invention specified in claim 22.
Conclusion
Applicant's amendment necessitated the new ground(s) of rejection presented in this Office
action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the
extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from
the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH
shortened statutory period, then the shortened statutory period will expire on the date the advisory
action is mailed, and any extension fee pursuant to 37 CFR 1.136(a) will be calculated from the mailing
date of the advisory action. In no event, however, will the statutory period for reply expire later than
SIX MONTHS from the date of this final action.
Contact
Any inquiry concerning this communication or earlier communications from the examiner
should be directed to KYLA G ALLEN whose telephone number is (703)756-5315. The examiner can
normally be reached M-F 7:30am - 4:30pm EST.
Examiner interviews are available via telephone, in-person, and video conferencing using a
USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use
the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor,
John Villecco can be reached on (571) 272-7319. The fax phone number for the organization where this
application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from
Patent Center. Unpublished application information in Patent Center is available to registered users. To
file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit
https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and
https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional
questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like
assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or
571-272-1000.
/Kyla Guan-Ping Tiao Allen/
Examiner, Art Unit 2661
/JOHN VILLECCO/Supervisory Patent Examiner, Art Unit 2661