Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Claim Rejections - 35 USC § 112
The following is a quotation of 35 U.S.C. 112(b):
(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.
The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:
The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.
Claims 1-20 are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or for applications subject to pre-AIA 35 U.S.C. 112, the applicant) regards as the invention. Claim 1 recites the limitation "the weight decision section" in line 10. There is insufficient antecedent basis for this limitation in the claim, because a "weight decision section" is not introduced earlier in the claim. It is suggested to change "the weight decision section" to "a weight decision section" to clarify the claim. The same remarks apply to claims 16 and 17.
Claim Interpretation
This application includes one or more claim limitations that use the word “means” or “step” but are nonetheless not being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph because the claim limitation(s) recite(s) sufficient structure, materials, or acts to entirely perform the recited function. Such claim limitation(s) is/are:
-- “an imaging unit that collects a nuclear magnetic …”; “an image processing unit that processes the image reconstructed …”, in claim 16.
Because this/these claim limitation(s) is/are not being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, it/they is/are not being interpreted to cover only the corresponding structure, material, or acts described in the specification as performing the claimed function, and equivalents thereof.
If applicant intends to have this/these limitation(s) interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, applicant may: (1) amend the claim limitation(s) to remove the structure, materials, or acts that performs the claimed function; or (2) present a sufficient showing that the claim limitation(s) does/do not recite sufficient structure, materials, or acts to perform the claimed function.
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 1-3, 16-17, and 19-20 are rejected under 35 U.S.C. 103 as being unpatentable over Noguchi et al. (US-PGPUB 20190137589) in view of Yamanaka et al. (US-PGPUB 20170116730), and further in view of Makii et al. (US-PGPUB 20090245684).
In regards to claim 1, Noguchi discloses an image processing device, (Fig. 1, Par. 0022, "MRI device shown in Fig. 1"), that generates and presents a third medical image, (Fig. 2, Par. 0022, corrected image is implicitly generated by image correction unit 202), by using a first medical image acquired by a medical imaging apparatus, (Fig. 2, Par. 0022, observation unit 100 observes a test object and outputs observation data; and from Par. 0026, conversion unit 200 converts the observation data observed with the observation unit 100 into an image by Fourier transformation, [i.e., the image generated by the conversion unit 200 by Fourier transformation corresponds to the first medical image]), and a second medical image obtained by performing processing of reducing noise and artifacts with respect to the first medical image, (see at least: Fig. 2, and Par. 0026, noise reduction unit 201 reduces a noise in the observation data converted into the image, [i.e., obtaining the second medical image by reducing noise of the observation data converted into the image, the "first medical image"]), the image processing device, (MR device of Fig. 1), comprising one or more processors configured to:
take a difference for each pixel between the first medical image and the second medical image and generate a difference image, (see at least: Par. 0037, the image correction unit 202 is configured of a separating unit 400, a correction level calculator 401, and an image correction unit 402; further, from Par. 0042, correction level calculator 401 calculates the correction levels at each position in the input image by performing a difference value between the input image and the noise reduced image, [i.e., taking a difference for each pixel between the first medical image, "input image", and the second medical image, "noise reduced image", and generating a difference image, implicit by performing a difference value between the input image and the noise reduced image]).
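For illustration only, the per-pixel difference operation mapped above can be sketched as follows (a minimal numpy sketch; the function and variable names are hypothetical and do not reproduce Noguchi's implementation):

import numpy as np

def difference_image(first_image: np.ndarray, second_image: np.ndarray) -> np.ndarray:
    # Per-pixel difference between the original (first) image and its
    # noise-reduced (second) counterpart; float promotion avoids
    # unsigned-integer wraparound. Names are hypothetical.
    return first_image.astype(np.float64) - second_image.astype(np.float64)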
Noguchi does not expressly disclose calculate a weighting value for each pixel by using the difference image; receive a user instruction including a change of the weighting value for each pixel and decide on a final weighting value; and use the weighting value decided on by the weight decision section to combine the first medical image and the second medical image through weighted averaging for each pixel.
However, Yamanaka discloses calculating a weighting value for each pixel by using the difference image, (see at least: Fig. 7, implicitly determining a weighting value for each pixel based on the signal value difference between pixels using a straight line representing a relation between a signal value of a pixel (or a signal value difference between pixels) and a weight coefficient, [i.e., calculating a weighting value for each pixel, "implicitly determining a weighting value for each pixel using a straight line", by using the difference image, "implicit by the signal value difference between pixels"]); and
receiving a user instruction including a change of the weighting value for each pixel and deciding on a final weighting value, (see at least: Par. 0098, the control unit 21 displays, on the display unit 25, a straight line representing a relation between the signal value difference between the pixels and the weight coefficient, as illustrated in FIG. 7; and the control unit 21 adjusts the weight coefficient for each pixel within the designated region in accordance with adjustment operation by the user for a slope and an intercept (bias) of the displayed straight line (adjustment operation by means of the operation unit 24 (input device)); and from Par. 0096, in a case where a plurality of regions is designated, the weight coefficient can be adjusted for each of the designated regions [by the user interface], [i.e., receiving a user instruction, "implicit by the user's input device 24", including a change of the weighting value for each pixel, "adjusting the weight coefficient for each pixel within the designated region in accordance with adjustment operation by the user"]; consequently, the most suitable weight coefficient can be set for each region, [i.e., deciding on a final weighting value, "implicit by setting the most suitable weight coefficient, by the control unit 21"]).
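For illustration only, the straight-line relation between the per-pixel signal difference and the weight coefficient, with a user-adjustable slope and intercept, might be sketched as follows (the clamp to [0, 1] is an assumption, not Yamanaka's disclosure):

import numpy as np

def weight_from_difference(diff: np.ndarray, slope: float, intercept: float) -> np.ndarray:
    # Straight-line mapping w = slope * |diff| + intercept, where slope
    # and intercept are adjusted by the user; the [0, 1] clamp is an
    # assumption for use as a weighting value.
    return np.clip(slope * np.abs(diff) + intercept, 0.0, 1.0)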
Noguchi and Yamanaka are combinable because they are both concerned with medical image processing. Therefore, it would have been obvious to a person of ordinary skill in the art to modify Noguchi to use the control unit 21, as taught by Yamanaka, in order to adjust the weight coefficient for each pixel within the designated region in accordance with adjustment operation by the user, (Yamanaka, Par. 0098).
The combined teaching of Noguchi and Yamanaka as a whole does not expressly disclose using the weighting value decided on by the weight decision section to combine the first medical image and the second medical image through weighted averaging for each pixel.
Makii discloses using the weighting value decided on by the weight decision section to combine the first medical image and the second medical image through weighted averaging for each pixel, (see at least: Par. 0295, a user can perform the operation of changing the weighting coefficients of the image data elements; and from Par. 0300, the CPU 31 further performs, in addition to reflecting the changed weighting coefficient, [i.e., using the weighting value decided on by the weight decision section], a combination process using weighted averages of the image data elements within the combination range to generate a preview image, [i.e., implicitly combining the first image and the second image]).
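For illustration only, per-pixel weighted averaging of the two images with a weight map w in [0, 1] might be sketched as follows (a minimal sketch of the claimed combination step, not Makii's code):

import numpy as np

def weighted_combine(first_image: np.ndarray, second_image: np.ndarray, w: np.ndarray) -> np.ndarray:
    # Per-pixel weighted average: w selects the noise-reduced (second)
    # image, while (1 - w) retains the original (first) image.
    return (1.0 - w) * first_image + w * second_image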
Noguchi, Yamanaka, and Makii are combinable because they are all concerned with weighting-based image processing. Therefore, it would have been obvious to a person of ordinary skill in the art to modify the combined teaching of Noguchi and Yamanaka to use the CPU 31, as taught by Makii, in order to perform a combination process using weighted averages of the image data elements to generate a preview image, (Makii, Par. 0300).
In regards to claim 2, the combined teaching of Noguchi, Yamanaka, and Makii as a whole discloses the limitations of claim 1.
Noguchi further discloses wherein the one or more processors comprise: a noise reduction section that generates the second medical image in which noise and artifacts are reduced with respect to the first medical image, (see at least: Par. 0026, the noise reduction unit 201 reduces a noise in the observation data converted into the image, [i.e., the noise reduction unit 201 corresponds to the noise reduction section]).
In regards to claim 3, the combined teaching of Noguchi, Yamanaka, and Makii as a whole discloses the limitations of claim 1.
Noguchi further discloses wherein the one or more processors comprise: a region specification section that uses the difference image to specify a region where noise and artifacts have occurred in the first medical image, (see at least: Fig. 5, Par. 0043, correction level map 503 uses the luminance, with which the correction level is 0 for a region falling below a predetermined threshold and the correction level is 1 for a region exceeding the predetermined threshold in the broad luminance component 502, as an evaluation index, [i.e., using the difference image to specify a region where noise and artifacts have occurred in the first medical image, "implicit by detecting the region (with correction level 0) falling below the predetermined threshold"]).
Regarding claim 16, claim 16 recites substantially similar limitations as set forth in claim 1. As such, claim 16 is rejected for at least a similar rationale.
The Examiner further acknowledges the following additional limitation(s): "a medical imaging apparatus comprising: an imaging unit that collects a medical image of a subject; and an image processing unit that processes the medical image acquired by the imaging unit, where the image processing unit includes a noise reduction section that generates a second medical image in which noise and artifacts are reduced with respect to an original image acquired by the imaging unit."
However, Noguchi discloses the medical imaging apparatus, (see at least: Par. 0022, MRI device of Fig. 1), comprising an imaging unit that collects a medical image of a subject, (see at least: Fig. 1, and Par. 0022, implicit by using the MRI device that acquires medical image(s)); and an image processing unit that processes the medical image acquired by the imaging unit, (see at least: Par. 0023, implicit by the central processing unit (CPU) 108 that processes the acquired image), where the image processing unit includes a noise reduction section that generates a second medical image in which noise and artifacts are reduced with respect to an original image acquired by the imaging unit, (see at least: Par. 0026, the noise reduction unit 201 reduces a noise in the observation data converted into the image, [i.e., the noise reduction unit 201 corresponds to the noise reduction section]).
Regarding claim 17, claim 17 recites substantially similar limitations as set forth in claim 1. As such, claim 17 is rejected for at least a similar rationale.
The Examiner further acknowledges the following additional limitation(s): "an image processing method ….". However, Noguchi discloses the image processing method, (see at least: Par. 0021, "an image acquisition method of a diagnostic imaging device that performs noise reduction processing …").
In regards to claim 19, the combined teaching of Noguchi, Yamanaka, and Makii as a whole discloses the limitations of claim 17.
Yamanaka further discloses wherein receiving the user instruction is receiving any one of the weighting value, a weight which is a function of the weighting value and the weight coefficient, or the weight coefficient, (see at least: Par. 0098, the control unit 21 displays, on the display unit 25, a straight line representing a relation between the signal value difference between the pixels and the weight coefficient, as illustrated in FIG. 7, [i.e., receiving the user instruction is implicitly receiving the weight coefficient]).
In regards to claim 20, the combined teaching of Noguchi, Yamanaka, and Makii as a whole discloses the limitations of claim 17.
Makii further discloses before receiving the user instruction, generating a provisional composite image by weighted averaging the first medical image and the second medical image for each pixel using the calculated weighting value, (see at least: Par. 0169, combination processing unit 53 may further perform a combination process on combination-use image data of a plurality of frames using weighted averages so as to generate combined-image data representing a still image, [i.e., generating a combination or composite image using the combination processing unit 53 with the weighted averages, implicitly before receiving the user instruction]); and displaying the provisional composite image and a display bar of a weight, wherein the user instruction is received through an operation of the display bar, (see at least: Par. 0363, when the user selects a coefficient template, as shown in FIG. 13, while the selection image list and the weight bars are displayed, the combination processing unit 53 of the CPU 31 displays, as a preview image, a combined image obtained using weighted averages with the weighting coefficients applied, [i.e., implicitly receiving the user's instruction through the weight bars]).
Claims 6 and 18 are rejected under 35 U.S.C. 103 as being unpatentable over Noguchi, Yamanaka, and Makii, as applied to claims 3 and 17 above, and further in view of Mercuriev et al. (US-PGPUB 20130216117).
In regards to claim 6, the combined teaching of Noguchi, Yamanaka, and Makii as a whole discloses the limitations of claim 3.
The combined teaching of Noguchi, Yamanaka, and Makii does not expressly disclose wherein the region specification section specifies the region where noise and artifacts have occurred based on a threshold value for a difference of pixel values calculated by the one or more processors.
However, Mercuriev discloses wherein the region specification section specifies the region where noise and artifacts have occurred based on a threshold value for a difference of pixel values calculated by the one or more processors, (see at least: Par. 0105-0106, the Euclidean distance between pixel blocks of the images is calculated to determine pixel similarity (intensity similarity), where the pixel similarity of frames is calculated based on the following threshold function (14), [i.e., the pixel similarity between the de-noised and original images is computed for each pixel, which implies that pixel differences other than zero specify a region in the first image where noise and artifacts have occurred]).
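For illustration only, threshold-based region specification on a difference of pixel values might be sketched as follows (the threshold value and the use of the absolute difference are assumptions, not Mercuriev's threshold function (14)):

import numpy as np

def noise_region_mask(diff: np.ndarray, threshold: float) -> np.ndarray:
    # Boolean mask that is True where the absolute per-pixel difference
    # meets or exceeds the threshold, i.e., where noise and artifacts
    # are taken to have occurred.
    return np.abs(diff) >= threshold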
Noguchi, Yamanaka, Makii, and Mercuriev are combinable because they are all concerned with medical image processing. Therefore, it would have been obvious to a person of ordinary skill in the art to modify the combined teaching of Noguchi, Yamanaka, and Makii to calculate the Euclidean distance, as taught by Mercuriev, in order to determine pixel similarity based on a threshold, (Mercuriev, Par. 0106).
In regards to claim 18, the combined teaching of Noguchi, Yamanaka, and Makii as a whole discloses the limitations of claim 17.
The combined teaching of Noguchi, Yamanaka, and Makii does not expressly disclose wherein, in the calculation of the weighting value, a weight coefficient α with respect to a pixel value is calculated using the difference image, and the weight coefficient α is used as an exponent of a fixed weight W to decide on the weighting value for each pixel, which is denoted by W^α.
However, Mercuriev discloses wherein, in the calculation of the weighting value, a weight coefficient α with respect to a pixel value is calculated using the difference image, and the weight coefficient α is used as an exponent of a fixed weight W to decide on the weighting value for each pixel, which is denoted by W^α, (see at least: Par. 0106, a fixed weight is implicitly used with a weight coefficient calculated using the difference image, as shown in eq. (6)).
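For illustration only, the claimed W^α weighting might be sketched as follows (the normalization used to obtain α from the difference image is a hypothetical placeholder and is not Mercuriev's eq. (6)):

import numpy as np

def exponent_weighting(diff: np.ndarray, W: float = 0.5) -> np.ndarray:
    # Hypothetical per-pixel alpha in [0, 1] derived from the difference
    # image; the fixed weight W (0 < W < 1) is raised to that exponent.
    alpha = np.abs(diff) / (np.abs(diff).max() + 1e-12)
    return W ** alpha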
Noguchi, Yamanaka, Makii, and Mercuriev are combinable because they are all concerned with medical image processing. Therefore, it would have been obvious to a person of ordinary skill in the art to modify the combined teaching of Noguchi, Yamanaka, and Makii to use the fixed weight with a weight coefficient calculated using the difference image, in order to suppress noise in digital medical images, (Mercuriev, Par. 0002).
Claim 7 is rejected under 35 U.S.C. 103 as being unpatentable over Noguchi, Yamanaka, and Makii, as applied to claim 3 above, and further in view of Takemoto et al. (US-PGPUB 20200098149).
In regards to claim 7, the combined teaching of Noguchi, Yamanaka, and Makii as a whole discloses the limitations of claim 3.
The combined teaching of Noguchi, Yamanaka, and Makii as a whole does not expressly disclose a UI unit that receives a designation of a user, wherein the region specification section specifies a region designated by the user via the UI unit as the region where noise and artifacts have occurred.
However, Takemoto discloses a UI unit, (150 in Fig. 1), that receives a designation of a user, wherein the region specification section, (1000 in Fig. 1), specifies a region designated by the user via the UI unit as the region where noise and artifacts have occurred, (see at least: Par. 0035, the user is able to adjust the object region by designating noise regions, such as regions of the object that were not determined to be the object and regions outside the object that were determined to be the object; and from Par. 0040, the user is able to designate a region of the object by carrying out a drag operation of the mouse, which is the input device 150, [i.e., a UI unit, "150 in Fig. 1", that receives a designation of a user, "the input device 150 implicitly receives the user input designating noise regions", wherein the region specification section, "1000 in Fig. 1", specifies a region designated by the user via the UI unit as the region where noise and artifacts have occurred, "the processing apparatus 1000 implicitly specifies the region designated by the user, by using the extraction color acquisition unit 1030"]).
Noguchi, Yamanaka, Makii, and Takemoto are combinable because they are all concerned with reducing noise in medical image(s). Therefore, it would have been obvious to a person of ordinary skill in the art to modify the combined teaching of Noguchi, Yamanaka, and Makii to use the input device 150 for designating noise regions by the user, as taught by Takemoto, in order to extract the object region from the image, (Takemoto, Par. 0001).
Claims 8-9 are rejected under 35 U.S.C. 103 as being unpatentable over Noguchi, Yamanaka, and Makii, as applied to claim 1 above, and further in view of Nakamura et al. (US-PGPUB 20200202486).
In regards to claim 8, the combined teaching of Noguchi, Yamanaka, and Makii as a whole discloses the limitations of claim 1.
The combined teaching of Noguchi, Yamanaka, and Makii as a whole does not expressly disclose wherein the one or more processors comprise: a display controller that controls an image to be displayed on a display device, and the display controller displays a composite image generated by the one or more processors, and the difference image or a part of the difference image, on the display device.
However, Nakamura discloses a display controller that controls an image to be displayed on a display device, and the display controller displays a composite image generated by the one or more processors, and the difference image or a part of the difference image, on the display device, (see at least: Fig. 2, and Par. 0051-0052, combining processing for generating a composite image in which the difference image is superimposed on at least one of the first three-dimensional image V1 or the second three-dimensional image V2, where the display controller 26 implicitly controls the display device 14 for displaying the composite image so that first information indicating a tomographic image generation range and second information indicating a difference image generation range in at least one of the first three-dimensional image V1 or the second three-dimensional image V2 are displayed in the composite image, [i.e., a display controller, "26 in Fig. 2", controls an image to be displayed on a display device, "the display controller 26 implicitly controls the display device 14", and the display controller displays a composite image generated by the one or more processors, "displaying the composite image on the display unit 14", and the difference image or a part of the difference image, "implicit by the difference image generation range", on the display device, "14 in Fig. 2"]).
Noguchi, Yamanaka, Makii, and Nakamura are combinable because they are all concerned with processing medical image(s). Therefore, it would have been obvious to a person of ordinary skill in the art to modify the combined teaching of Noguchi, Yamanaka, and Makii to use the display controller 26, as taught by Nakamura, in order to easily recognize the generation range of a difference image in a three-dimensional image, (Nakamura, Par. 0028).
In regards to claim 9, the combined teaching of Noguchi, Yamanaka, Makii, and Nakamura as a whole discloses the limitations of claim 8.
Furthermore, Nakamura discloses wherein the display controller displays the composite image in a first color, and superimposes and displays the difference image or the part of the difference image in a second color that is different from the first color, on the display device, (see at least: Par. 0063-0064, combining unit 24 generates a color image by assigning a preset color to the difference image Vsub, and generates the composite image Vg by superimposing the color image on the first 3D image V1, which is a monochrome image, [i.e., displaying the composite image in a first color and superimposing and displaying the difference image or the part of the difference image in a second color that is different from the first color, "implicit by superimposing the pre-assigned color difference image on the monochrome first 3D image"]).
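For illustration only, superimposing the difference image in a preset second color on a monochrome base image might be sketched as follows (the red tint and the threshold-based masking rule are hypothetical choices, not Nakamura's combining unit 24):

import numpy as np

def color_overlay(base_gray: np.ndarray, diff: np.ndarray, threshold: float) -> np.ndarray:
    # Stack the monochrome base into RGB, then tint above-threshold
    # difference pixels in a preset (hypothetical) second color.
    rgb = np.stack([base_gray] * 3, axis=-1).astype(np.float64)
    rgb[np.abs(diff) >= threshold] = [255.0, 0.0, 0.0]
    return rgb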
Claim 10 is rejected under 35 U.S.C. 103 as being unpatentable over Noguchi, Yamanaka, Makii, and Nakamura, as applied to claim 8 above, and further in view of Takahashi et al. (US-PGPUB 20090322863).
In regards to claim 10, the combined teaching of Noguchi, Yamanaka, Makii, and Nakamura as a whole discloses the limitations of claim 8.
The combined teaching of Noguchi, Yamanaka, Makii, and Nakamura as a whole does not expressly disclose wherein the display controller displays a pixel of the difference image whose pixel value is equal to or greater than a predetermined threshold value on the display device.
However, Takahashi discloses wherein the display controller displays a pixel of the difference image whose pixel value is equal to or greater than a predetermined threshold value on the display device, (see at least: Par. 0030, when the difference between distance information about a pixel of the imaging device and distance information about pixels in the vicinity of the pixel is greater than or equal to a predetermined threshold value, the display unit may display the pixel in such a manner that the difference is emphasized, [i.e., the display controller displays a pixel of the difference image whose pixel value is equal to or greater than a predetermined threshold value on the display device, "implicit by displaying the pixel in such a manner that the difference is emphasized for pixel(s) greater than or equal to a predetermined threshold value"]).
Noguchi, Yamanaka, Makii, Nakamura, and Takahashi are combinable because they are all concerned with processing image(s). Therefore, it would have been obvious to a person of ordinary skill in the art to modify the combined teaching of Noguchi, Yamanaka, Makii, and Nakamura to compare the pixel difference to the threshold, as taught by Takahashi, in order to display the pixel in such a manner that the difference is emphasized, (Takahashi, Par. 0030).
Allowable Subject Matter
Claims 4-5 and 11-15 are objected to as being dependent upon a rejected base claim, but would be allowable if rewritten in independent form including all of the limitations of the base claim and any intervening claims.
With respect to claim 4, the prior art of record, alone or in reasonable combination, does not teach or suggest the following underlined limitation(s), in consideration of the claim as a whole:
“wherein the one or more processors vary a conditional expression used to calculate the weighting value between the region specified by the region specification section and the other region”
The relevant prior art of record, Noguchi et al. (US-PGPUB 20190137589), discloses an image processing device, (Fig. 1, Par. 0022, "MRI device shown in Fig. 1"), that generates and presents a third medical image, (Fig. 2, Par. 0022, corrected image is implicitly generated by image correction unit 202), by using a first medical image acquired by a medical imaging apparatus, (Fig. 2, Par. 0022, observation unit 100 observes a test object and outputs observation data; and from Par. 0026, conversion unit 200 converts the observation data observed with the observation unit 100 into an image by Fourier transformation, [i.e., the image generated by the conversion unit 200 by Fourier transformation corresponds to the first medical image]), and a second medical image obtained by performing processing of reducing noise and artifacts with respect to the first medical image, (see at least: Fig. 2, and Par. 0026, noise reduction unit 201 reduces a noise in the observation data converted into the image, [i.e., obtaining the second medical image by reducing noise of the observation data converted into the image, the "first medical image"]), the image processing device, (MR device of Fig. 1), comprising one or more processors configured to:
take a difference for each pixel between the first medical image and the second medical image and generate a difference image, (see at least: Par. 0037, the image correction unit 202 is configured of a separating unit 400, a correction level calculator 401, and an image correction unit 402; further, from Par. 0042, correction level calculator 401 calculates the correction levels at each position in the input image by performing a difference value between the input image and the noise reduced image, [i.e., taking a difference for each pixel between the first medical image, "input image", and the second medical image, "noise reduced image", and generating a difference image, implicit by performing a difference value between the input image and the noise reduced image]).
However, Noguchi fails to teach or suggest, either alone or in combination with the other cited references, varying a conditional expression used to calculate the weighting value between the region specified by the region specification section and the other region.
A further prior art of record, Yamanaka et al. (US-PGPUB 20170116730), discloses calculating a weighting value for each pixel by using the difference image, (see at least: Fig. 7, implicitly determining a weighting value for each pixel based on the signal value difference between pixels using a straight line representing a relation between a signal value of a pixel (or a signal value difference between pixels) and a weight coefficient, [i.e., calculating a weighting value for each pixel, "implicitly determining a weighting value for each pixel using a straight line", by using the difference image, "implicit by the signal value difference between pixels"]); and
receiving a user instruction including a change of the weighting value for each pixel and deciding on a final weighting value, (see at least: Par. 0098, the control unit 21 displays, on the display unit 25, a straight line representing a relation between the signal value difference between the pixels and the weight coefficient, as illustrated in FIG. 7; and the control unit 21 adjusts the weight coefficient for each pixel within the designated region in accordance with adjustment operation by the user for a slope and an intercept (bias) of the displayed straight line (adjustment operation by means of the operation unit 24 (input device)); and from Par. 0096, in a case where a plurality of regions is designated, the weight coefficient can be adjusted for each of the designated regions [by the user interface], [i.e., receiving a user instruction, "implicit by the user's input device 24", including a change of the weighting value for each pixel, "adjusting the weight coefficient for each pixel within the designated region in accordance with adjustment operation by the user"]; consequently, the most suitable weight coefficient can be set for each region, [i.e., deciding on a final weighting value, "implicit by setting the most suitable weight coefficient, by the control unit 21"]). However, Yamanaka fails to teach or suggest, either alone or in combination with the other cited references, varying a conditional expression used to calculate the weighting value between the region specified by the region specification section and the other region.
With respect to claim 5, the prior art of record, alone or in reasonable combination, does not teach or suggest the following underlined limitation(s), in consideration of the claim as a whole:
“wherein the one or more processors calculate the weighting value such that a weight of a pixel of the second medical image is greater than a weight of a pixel of the first medical image for the region specified by the region specification section, and a weight of a pixel of the first medical image is greater than a weight of a pixel of the second medical image for a region other than the region specified by the region specification section.”
The prior art of record, Noguchi et al. (US-PGPUB 20190137589) and Yamanaka et al. (US-PGPUB 20170116730), stated above with respect to claim 4, apply also to claim 5, but none of them, either alone or in combination, teach or suggest the above underlined limitation(s) of claim 5.
With respect to claim 11, the prior art of record, alone or in reasonable combination, does not teach or suggest the following underlined limitation(s), in consideration of the claim as a whole:
“wherein the one or more processors calculate a weight coefficient α with respect to a pixel value by using the difference image and calculates the weighting value for each pixel by using a fixed weight W (W = 0 to 1) and the weight coefficient”
The prior art of record, Noguchi et al. (US-PGPUB 20190137589) and Yamanaka et al. (US-PGPUB 20170116730), stated above with respect to claim 4, apply also to claim 11, but fail to teach or suggest, either alone or in combination, the above underlined limitation(s) of claim 11.
The prior art of record, Mercuriev et al. (US-PGPUB 20130216117), discloses specifying the region where noise and artifacts have occurred based on a threshold value for a difference of pixel values calculated by the one or more processors, (see at least: Par. 0105-0106, the Euclidean distance between pixel blocks of the images is calculated to determine pixel similarity (intensity similarity), where the pixel similarity of frames is calculated based on the following threshold function (14), [i.e., the pixel similarity between the de-noised and original images is computed for each pixel, which implies that pixel differences other than zero specify a region in the first image where noise and artifacts have occurred]). Mercuriev further discloses that a fixed weight e is used with a weight coefficient calculated using the difference image, (Par. 0116, eq. (16)); but fails to teach or suggest, either alone or in combination with the other cited references, calculating a weight coefficient α with respect to a pixel value by using the difference image and calculating the weighting value for each pixel by using a fixed weight W (W = 0 to 1) and the weight coefficient.
Another prior art of record, Fukuda et al. (US-PGPUB 20210183062), discloses a generation unit 92 for generating image data of a difference image by multiplying a value, obtained by subtracting image data (pixel value) of the second radiographic image for each corresponding pixel from image data (each pixel value) of the first radiographic image, by a weight coefficient, (Par. 0051); but fails to teach or suggest, either alone or in combination with the other cited references, wherein the one or more processors calculate a weight coefficient α with respect to a pixel value by using the difference image and calculate the weighting value for each pixel by using a fixed weight W (W = 0 to 1) and the weight coefficient.
Regarding claims 12-14, these claims are also allowable at least in view of their dependency from claim 11.
With respect to claim 15, the prior art of record, alone or in reasonable combination, does not teach or suggest the following underlined limitation(s), in consideration of the claim as a whole:
“wherein the one or more processors includes a processor that receives the difference image as an input and that outputs a noise pattern, and calculates the weighting value for each pixel by selecting, based on a correspondence between various predetermined noise patterns and calculation algorithms for weighting values, a calculation algorithm corresponding to the noise pattern output by the processor”
The prior art of record, Noguchi et al. (US-PGPUB 20190137589) and Yamanaka et al. (US-PGPUB 20170116730), stated above with respect to claim 4, apply also to claim 15, but none of them, either alone or in combination, teach or suggest the above underlined limitation(s) of claim 15.
A further prior art of record, Lee et al. (US-PGPUB 20230326099), discloses receiving the difference image as an input and outputting a noise pattern, (see at least: Par. 0114, the noise pattern map NP may be a pattern map that is generated based on the noise detected by comparing an image reconstructed by applying a general artificial neural network model to a sub-sampled magnetic resonance image with a fully sampled magnetic resonance image, [i.e., receiving the difference image, "difference between the reconstructed image and the fully sampled magnetic resonance image", as an input, and outputting a noise pattern]).
However, Lee fails to teach or suggest, either alone or in combination with the other cited references, calculating the weighting value for each pixel by selecting, based on a correspondence between various predetermined noise patterns and calculation algorithms for weighting values, a calculation algorithm corresponding to the noise pattern output by the processor.
Contact Information
Any inquiry concerning this communication or earlier communications from the examiner should be directed to AMARA ABDI whose telephone number is (571)272-0273. The examiner can normally be reached 9:00am-5:30pm.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Vu Le can be reached at (571) 272-7332. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/AMARA ABDI/Primary Examiner, Art Unit 2668 01/31/2026