DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Priority
Receipt is acknowledged that this application is a continuation of International Application No. PCT/CN2022/118439, filed on September 13, 2022. Applicant’s claim of priority to and the benefit of CN202111070115.4, filed on September 13, 2021, is also acknowledged. Copies of the certified papers required by 37 CFR 1.55 have been received. Priority is acknowledged under 35 U.S.C. 119(a)-(d) or (f).
Information Disclosure Statement
The information disclosure statements (“IDS”) filed on 06/03/2024 and 11/27/2024 have been reviewed, and the listed references have been considered.
Status of Claims
Claims 1-18, 24, and 26 are pending. Claims 19-23 and 25 are cancelled.
Drawings
The 5-page drawings have been considered and placed on record in the file.
Specification
The title of the invention is not descriptive. A new title is required that is clearly indicative of the invention to which the claims are directed.
Claim Objections
Claims 1 and 14 are objected to because of the following informalities:
Claim 1 recites “Obtaining one ore more…”; this should read “[[O]]obtaining one or more…”
Claim 14 recites:
“Obtaining a first training set…” should be “[[O]]obtaining a first training set…”
“Training the initial image…” should be “[[T]]training the initial image…”
Appropriate corrections are required.
Claim Interpretation
Claims 1, 24, and 26 recite “…simultaneously perform decomposition processing and at least one of noise reduction processing or artifact removal processing on the one or more initial material density images…”. As recited, the limitation states that the decomposition processing and the noise reduction or artifact removal processing occur simultaneously. However, paragraph [0103] of the specification discloses, “In some embodiments, the M material density images may be input into the neural network unit respectively or at the same time to obtain corresponding M material decomposition images”. In light of the specification, the claimed limitation is interpreted to mean that the M material density images can be processed by the image processing model at the same time. Therefore, for examination purposes, the examiner has interpreted the limitation in light of the specification; Applicant is requested to modify the language of the claims accordingly.
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.
This application currently names joint inventors. In considering patentability of the claims the examiner presumes that the subject matter of the various claims was commonly owned as of the effective filing date of the claimed invention(s) absent any evidence to the contrary. Applicant is advised of the obligation under 37 CFR 1.56 to point out the inventor and effective filing dates of each claim that was not commonly owned as of the effective filing date of the later invention in order for the examiner to consider the applicability of 35 U.S.C. 102(b)(2)(C) for any potential 35 U.S.C. 102(a)(2) prior art against the later invention.
Claims 1-10, 13-16, 24 and 26 are rejected under 35 U.S.C. 103 as being unpatentable over Li et al. ("An Improved Iterative Neural Network for High-Quality Image-Domain Material Decomposition in Dual-Energy CT" - Published 2020 from IDS) in view of Cao et al. (CN111062882A - Translation from Espacenet).
Regarding claim 1, Li teaches “A method implemented on …
Obtaining one or more initial material density images (Li Figure 1 and page 3 left hand column paragraph 2 "The ith image refining module of BCD-Net takes {x_m^(i-1) ∈ R^N : m = 1, 2}, decomposed material images at the (i-1)th iteration, and outputs refined material images"); and
[Li Figure 1 reproduced here — media_image1.png, greyscale]
inputting the one or more initial material density images into a trained image processing model to obtain one or more target material density images (Li page 3 left hand column paragraph 2 "The ith image refining module of BCD-Net takes {x_m^(i-1) ∈ R^N : m = 1, 2}, decomposed material images at the (i-1)th iteration, and outputs refined material images");
wherein the trained image processing model is configured to simultaneously perform decomposition processing and at least one of noise reduction processing or artifact removal processing (Li page 3 right hand column paragraph 2 "The proposed CNN in (1) and Fig. 1 consists of an individual encoding-decoding architecture for each material image, and crossover architectures between different material images. […] The crossover architecture is expected to be useful to remove noise and artifacts in material images") on the one or more initial material density images (Li Figure 1 and page 2 right hand column paragraph 5 "Each iteration of BCD-Net for DECT material decomposition consists of an MBID module and an image refining module").”
However, Li does not explicitly teach “at least one machine each of which has at least one processor and at least one storage device for image processing”.
Cao teaches “A method implemented on at least one machine each of which has at least one processor and at least one storage device for image processing (Cao paragraph [0096] "The components of the electronic device 12 may include, but are not limited to: one or more processors or processing units 16, system memory 28, and bus 18 connecting different system components (including system memory 28 and processing unit 16)")”.
It would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention of the instant application to combine the image processing method using decomposition and noise reduction/artifact removal as taught by Li with a system for image processing using decomposition and noise reduction containing processors and storage devices as taught by Cao.
The suggestion/motivation for doing so would have been that there is a need in the field of medical imaging for a system that effectively obtains material density information: "In existing technologies, when processing projection data from energy spectrum CT, the projection data is usually directly decomposed into matrix material, and then the result of matrix material decomposition is denoised. However, the results of matrix material decomposition are highly correlated with noise and artifacts, which limits the processing effectiveness of energy spectrum projection data," as noted by the Cao disclosure in paragraph [0005].
Therefore, it would have been obvious to combine the disclosure of Li with
the Cao disclosure to obtain the invention as specified in claim 1 as there is a
reasonable expectation of success and/or because doing so merely combines prior art
elements according to known methods to yield predictable results.
Regarding claim 2, the combination of Li and Cao teaches “The method of claim 1, wherein the trained image processing model includes a neural network unit for performing the at least one of the noise reduction processing or the artifact removal processing on the one or more initial material density images (Li page 3 right hand column paragraph 2 "The proposed CNN in (1) and Fig. 1 consists of an individual encoding-decoding architecture for each material image, and crossover architectures between different material images. When n = m, the filters in (1) form the individual encoding decoding architecture that captures individual properties of the mth material, whereas when n ≠ m, these comprise the crossover architecture that exchanges information between two material images. The crossover architecture is expected to be useful to remove noise and artifacts in material images").”
Regarding claim 3, the combination of Li and Cao teaches “The method of claim 2, wherein the neural network unit includes an image conversion model generated based on convolutional neural network (Li page 3 right hand column paragraph 1 "The first box in Fig. 1 shows the architecture of iteration-wise distinct cross-material CNNs. We will train distinct cross-material CNNs at each iteration to maximize the refinement performance" and page 3 left hand column paragraph 2 "Θ^(i) denotes a set of parameters of image refining module at the ith iteration").”
Regarding claim 4, the combination of Li and Cao teaches “The method of claim 2, wherein the trained image processing model further includes an iterative decomposition unit for iteratively decomposing and updating the one or more initial material density images (Li Figure 1 and page 3 left hand column paragraph 2 "The ith image refining module of BCD-Net takes {x_m^(i-1) ∈ R^N : m = 1, 2}, decomposed material images at the (i-1)th iteration, and outputs refined material images").”
Regarding claim 5, the combination of Li and Cao teaches “The method of claim 4, wherein the iterative decomposition unit includes an objective function used to obtain variate values when a sum of a data fidelity term and a data penalty term reaches a minimum value (Cao Equation (from the non-translated version, page 2) and paragraph [0052] "Wherein, γ_1 is the regularization coefficient of the first substance, γ_2 is the regularization coefficient of the second substance, u_x1 is the gradient of the first substance along the x-direction, u_y1 is the gradient of the first substance along the y-direction, u_x2 is the gradient of the second substance along the x-direction, u_y2 is the gradient of the second substance along the y-direction, λ_h is the high-energy data fidelity coefficient, γ_l is the low-energy data fidelity coefficient, and E is the objective function" and [0054] "the objective function in the target decomposition algorithm can be iterated multiple times to make the objective function converge" and paragraph [0067] "a convergence condition can be preset. The convergence condition can be a minimum value").”
[Cao Equation reproduced here — media_image2.png, greyscale]
The proposed combination, as well as the motivation for combining the Li and Cao references presented in the rejection of claim 1, applies to claim 5. Finally, the method recited in claim 5 is met by Li and Cao.
Regarding claim 6, the combination of Li and Cao teaches “The method of claim 5, wherein the data penalty term is determined based on the neural network unit (Li page 7 paragraph 2 "For individual material BCD-Nets, we set initial thresholding parameters before applying the exponential function as log(0.88) and log(0.8) for water and bone, respectively; for cross-material BCD-Nets, we set them as log(0.88). The regularization parameter β balances data-fit term and the prior estimate from image refining module").”
Regarding claim 7, the combination of Li and Cao teaches “The method of claim 4, wherein for one or more iterations of the at least one of the noise reduction processing or the artifact removal processing (Li page 3 right hand column paragraph 2 "The proposed CNN in (1) and Fig. 1 consists of an individual encoding-decoding architecture for each material image, and crossover architectures between different material images. […] The crossover architecture is expected to be useful to remove noise and artifacts in material images"), an input of the neural network unit in an initial iteration of the one or more iterations includes the one or more initial material density images (Li Figure 1 and page 3 left hand column paragraph 2 "The ith image refining module of BCD-Net takes {x_m^(i-1) ∈ R^N : m = 1, 2}, decomposed material images at the (i-1)th iteration, and outputs refined material images"); and
an input of the neural network unit in an m-th iteration of the one or more iterations includes one or more updated material density images determined by the iterative decomposition unit in an (m-1)th iteration of the one or more iterations, m being a positive integer greater than 1 (Li Figure 1 and page 3 left hand column paragraph 2 "The ith image refining module of BCD-Net takes {x_m^(i-1) ∈ R^N : m = 1, 2}, decomposed material images at the (i-1)th iteration, and outputs refined material images").”
Regarding claim 8, the combination of Li and Cao teaches “The method of claim 4, wherein for one or more iterations of the decomposition processing, an input of the iterative decomposition unit in an nth iteration of the one or more iterations includes one or more material decomposition images output by the neural network unit in the nth iteration, n being a positive integer (Li Figure 1 and page 2 right hand column paragraph 5 "Each iteration of BCD-Net for DECT material decomposition consists of an MBID module and an image refining module" and page 5 right hand column paragraph 1 "These refined images are then fed into the MBID module").”
Regarding claim 9, the combination of Li and Cao teaches “The method of claim 1, wherein the obtaining one or more initial material density images comprises: obtaining one or more images to be processed (Cao paragraph [0040] "S110, acquire the energy spectrum projection data of the target object"); and
determining the one or more initial material density images based on the one or more images to be processed (Li Figure 1 and page 3 left hand column paragraph 2 "The ith image refining module of BCD-Net takes {x_m^(i-1) ∈ R^N : m = 1, 2}, decomposed material images at the (i-1)th iteration, and outputs refined material images").”
The proposed combination, as well as the motivation for combining the Li and Cao references presented in the rejection of claim 1, applies to claim 9. Finally, the method recited in claim 9 is met by Li and Cao.
Regarding claim 10, the combination of Li and Cao teaches “The method of claim 1, wherein the trained image processing model is further configured to simultaneously perform the decomposition processing and the at least one of the noise reduction processing or the artifact removal processing (Li page 3 right hand column paragraph 2 "The proposed CNN in (1) and Fig. 1 consists of an individual encoding-decoding architecture for each material image, and crossover architectures between different material images. […] The crossover architecture is expected to be useful to remove noise and artifacts in material images") on the one or more initial material density images using an iterative operation (Li Figure 1 and page 2 right hand column paragraph 5 "Each iteration of BCD-Net for DECT material decomposition consists of an MBID module and an image refining module").”
Regarding claim 13, the combination of Li and Cao teaches “The method of claim 1, wherein the trained image processing model is obtained according to a process including: updating model parameters of an initial image conversion model (Li page 5 Algorithm 1 and right hand column paragraph 1 "Once we obtain the learned filters and thresholding values, we apply them to refine material images"); and
iteratively adjusting the model parameters of the initial image conversion model to obtain the trained image processing model (Li Algorithm 1 and page 5 left hand column paragraph 4 "The training process at the ith iteration requires L input output image pairs" and right hand column paragraph 1 "Once we obtain the learned filters and thresholding values, we apply them to refine material images").”
Regarding claim 14, the combination of Li and Cao teaches “The method of claim 13, wherein the updating model parameters of an initial image conversion model, including: Obtaining a first training set, the first training set including one or more sample pairs, each of the one or more sample pairs including a sample initial material density image and a corresponding label material density image (Li Algorithm 1 and page 5 left hand column paragraph 4 "The training process at the ith iteration requires L input output image pairs. Input labels are decomposed material images via MBID module, {X_{l,m}^(i-1) : l = 1, …, L}, and output labels are high-quality reference material images {X_{l,m} : l = 1, …, L}"); and
Training the initial image conversion model using the first training set to update the model parameters of the initial image conversion model (Li Algorithm 1 and page 5 left hand column paragraph 4 "The training process at the ith iteration requires L input output image pairs" and right hand column paragraph 1 "Once we obtain the learned filters and thresholding values, we apply them to refine material images. These refined images are then fed into the MBID module").”
Regarding claim 15, the combination of Li and Cao teaches “The method of claim 13, wherein the iteratively adjusting the model parameters of the initial image conversion model, including: inputting the one or more sample initial material density images into an updated image conversion model to obtain one or more sample material decomposition images (Li Algorithm 1 and page 5 left hand column paragraph 4 "The training process at the ith iteration requires L input output image pairs. Input labels are decomposed material images via MBID module, {X_{l,m}^(i-1) : l = 1, …, L}, and output labels are high-quality reference material images {X_{l,m} : l = 1, …, L}");
updating the one or more sample initial material density images using the iterative decomposition unit based on the one or more sample material decomposition images (Li page 5 right hand column paragraph 1 "Once we obtain the learned filters and thresholding values, we apply them to refine material images. These refined images are then fed into the MBID module"); and
determining a second training set based on one or more updated sample initial material density images and one or more corresponding label material density images, and further training the updated image conversion model using the second training set to further adjust the model parameters of the initial image conversion model (Li Algorithm 1 and page 5 right hand column paragraph 1 "Once we obtain the learned filters and thresholding values, we apply them to refine material images. These refined images are then fed into the MBID module").”
Regarding claim 16, the combination of Li and Cao teaches “The method of claim 15, wherein the iteratively adjusting the model parameters of the initial image conversion model to obtain the trained image processing model further including:
taking the one or more updated sample initial material density images as inputs of the updated image conversion model, and iteratively adjusting the model parameters of the initial image conversion model until a second iteration termination condition is satisfied to obtain the trained image processing model (Li Algorithm 1 and page 5 left hand column paragraph 4 "The training process at the ith iteration requires L input output image pairs").”
Claim 24 recites a computer readable medium including computer executable instructions corresponding to the steps of the method recited in claim 1. Therefore, the recited instructions of the computer readable medium of claim 24 are mapped to the proposed combination in the same manner as the corresponding steps of the method claim 1. Additionally, the rationale and motivation to combine Li and Cao presented in rejection of claim 1, apply to this claim. The combination of Li and Cao teaches “A non-transitory computer readable medium storing instructions, the instructions, when executed by at least one processor, causing the at least one processor to implement a method (Cao paragraph [0099] "disk drive for reading and writing to a removable non-volatile disk (e.g., a "floppy disk") and an optical disc drive for reading and writing to a removable non-volatile optical disc (e.g., a CD-ROM, DVD-ROM, or other optical media) may be provided […] The memory 28 may include at least one program product having a set of program modules (e.g., an energy spectrum projection data acquisition module 51, a target decomposition algorithm generation module 52, and a decomposition data determination module 53 of an energy spectrum projection data processing device) configured to perform the functions of various embodiments of the present invention")”.
Claim 26 recites an apparatus with elements corresponding to the steps of the method recited in claim 1. Therefore, the recited elements of claim 26 are mapped to the proposed combination in the same manner as the corresponding steps of method claim 1. Additionally, the rationale and motivation to combine the Li and Cao references, presented in the rejection of claim 1, apply to this claim. The combination of Li and Cao teaches “a scanner configured to obtain one or more images to be processed (Cao paragraph [0004] "Projection data under different energy spectra can be obtained through energy spectrum CT scanning"); and
an image processing device configured to perform operations (Cao paragraph [0096] " The components of the electronic device 12 may include, but are not limited to: one or more processors or processing units 16, system memory 28, and bus 18 connecting different system components (including system memory 28 and processing unit 16)").”
Claims 11-12 and 17-18 are rejected under 35 U.S.C. 103 as being unpatentable over Li and Cao in view of Wang et al. (US 2020/0302265 A1).
Regarding claim 11, the combination of Li and Cao teaches “The method of claim 10, wherein the one or more target material density images include material density images output by the trained image processing model when a first iteration termination condition is satisfied (Li Figure 1 and page 3 left hand column paragraph 2 "The ith image refining module of BCD-Net takes {x_m^(i-1) ∈ R^N : m = 1, 2}, decomposed material images at the (i-1)th iteration, and outputs refined material images {z_m^(i) ∈ R^N : m = 1, 2}, for i = 1, …, I, where N is the number of pixels of each material image, and I is the number of BCD-Net iterations")”; and the first iteration termination condition includes that a count of iteration times of the iterative operation reaches a first preset count (Li Figure 1 and page 3 left hand column paragraph 2 "I is the number of BCD-Net iterations").
However, the combination of Li and Cao does not teach “a value of a first loss function in the iterative operation is less than or equal to a first preset loss function threshold.”
Wang teaches “a value of a first loss function in the iterative operation is less than or equal to a first preset loss function threshold (Wang paragraph [0162] "iteration threshold may be a quantity of iteration times that is preset by the training device, for example, 10000 times or 20000 times. The loss threshold may be preset by the training device. If a difference between a real result and an image processing result output by the convolutional neural network is less than the loss threshold, training ends").”
It would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention of the instant application to combine the image processing method using decomposition and noise reduction/artifact removal as taught by Li and Cao with early stopping of iterations based on a loss function reaching a threshold as taught by Wang.
The suggestion/motivation for doing so would have been that it is well known in neural networks to utilize loss thresholds, and in the field of image processing the benefits of using such methods are known: "A final study objective of the computer vision is to make a computer observe and understand the world through vision in a way that human beings do, and have a capability of automatically adapting to an environment," as noted by the Wang disclosure in paragraph [0002].
Therefore, it would have been obvious to combine the disclosure of Li and Cao with the Wang disclosure to obtain the invention as specified in claim 11 as there is a
reasonable expectation of success and/or because doing so merely combines prior art
elements according to known methods to yield predictable results.
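For clarity of the record, the two termination conditions at issue in this rejection (a preset iteration count, as in Li, and a loss function falling to or below a preset threshold, as in Wang) can be sketched as follows. This is an illustrative sketch only; the function and parameter names are hypothetical and are not code from any cited reference.

```python
def run_iterations(step, max_iters=10, loss_threshold=1e-3):
    """Run `step` until a termination condition is satisfied.

    `step(i)` performs the i-th iteration and returns its loss value.
    Returns (iterations_run, final_loss, reason), where `reason` records
    which termination condition ended the iterative operation.
    """
    loss = float("inf")
    for i in range(1, max_iters + 1):
        loss = step(i)
        if loss <= loss_threshold:
            # Wang-style condition: loss at or below a preset threshold.
            return i, loss, "loss_threshold"
    # Li-style condition: preset count of iteration times reached.
    return max_iters, loss, "iteration_count"

# Example with a hypothetical loss that halves each iteration.
iters, loss, reason = run_iterations(lambda i: 1.0 / 2**i)
```

Under this sketch, either condition alone suffices to terminate the iterative operation, which is consistent with the "and/or" phrasing of claim 17.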
Regarding claim 12, the combination of Li, Cao, and Wang teaches “The method of claim 11, wherein the first loss function includes one or more differences between two groups of material decomposition images determined by the image processing model in two adjacent iterations respectively, or one or more differences between two groups of updated material density images determined by the trained image processing model in two adjacent iterations (Wang paragraph [0105] "In a convolutional neural network training process, an output of a convolutional neural network is expected to be as close to a really wanted value as possible. Therefore, a current predicted value of a network may be compared with a really wanted target value, and then a weight vector of each layer of neural network may be updated based on a difference between the current predicted value of the network and the really wanted target value" and paragraph [0107] "an error loss is produced in a feed-forward process from inputting to outputting of a signal, and the parameter in the original convolutional neural network is updated through backpropagation of error loss information, so as to converge the error loss").”
The proposed combination, as well as the motivation for combining the Li, Cao, and Wang references presented in the rejection of claim 11, applies to claim 12. Finally, the method recited in claim 12 is met by Li, Cao, and Wang.
Regarding claim 17, the combination of Li, Cao, and Wang teaches “The method of claim 16, wherein the second iteration termination condition includes that a count of iteration times reaches a second preset count (Li Algorithm 1 and page 5 left hand column paragraph 4 "The training process at the ith iteration requires L input output image pairs"), and/or a value of a second loss function is less than or equal to a second preset loss function threshold (Wang paragraph [0162] "iteration threshold may be a quantity of iteration times that is preset by the training device, for example, 10000 times or 20000 times. The loss threshold may be preset by the training device. If a difference between a real result and an image processing result output by the convolutional neural network is less than the loss threshold, training ends").”
The proposed combination, as well as the motivation for combining the Li, Cao, and Wang references presented in the rejection of claim 11, applies to claim 17. Finally, the method recited in claim 17 is met by Li, Cao, and Wang.
Regarding claim 18, the combination of Li, Cao, and Wang teaches “The method of claim 17, wherein the second loss function includes one or more differences between two groups of sample material decomposition images obtained in two adjacent iterations respectively in an image processing model training process, or one or more differences between two groups of updated sample initial material density images obtained in two adjacent iterations respectively in the image processing model training process (Wang paragraph [0105] "In a convolutional neural network training process, an output of a convolutional neural network is expected to be as close to a really wanted value as possible. Therefore, a current predicted value of a network may be compared with a really wanted target value, and then a weight vector of each layer of neural network may be updated based on a difference between the current predicted value of the network and the really wanted target value" and paragraph [0107] "an error loss is produced in a feed-forward process from inputting to outputting of a signal, and the parameter in the original convolutional neural network is updated through backpropagation of error loss information, so as to converge the error loss").”
The proposed combination, as well as the motivation for combining the Li, Cao, and Wang references presented in the rejection of claim 11, applies to claim 18. Finally, the method recited in claim 18 is met by Li, Cao, and Wang.
Conclusion
Any inquiry concerning this communication or earlier communications from the examiner should be directed to JASPREET KAUR whose telephone number is (571)272-5534. The examiner can normally be reached Monday - Friday, 7:30 am - 4:00 pm PST.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Amandeep Saini can be reached at (571)272-3382. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/JASPREET KAUR/Examiner, Art Unit 2662
/AMANDEEP SAINI/Supervisory Patent Examiner, Art Unit 2662