DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Priority
Receipt is acknowledged of this application as a continuation of International Application No. PCT/CN2023/070642, filed on January 05, 2023. Applicant’s claim of priority to and the benefit of CN202211067037.7, filed on September 01, 2022, and CN202210008392.0, filed on January 05, 2022, is also acknowledged. Copies of the certified papers required by 37 CFR 1.55 have been received. Priority is acknowledged under 35 U.S.C. 119(a)-(d) or (f).
Information Disclosure Statement
The information disclosure statements (“IDS”) filed on 05/06/2024, 02/27/2025, and 09/03/2025 have been reviewed and the listed references have been considered.
Drawings
The drawings are objected to as failing to comply with 37 CFR 1.84(p)(5) because they include the following reference character(s) not mentioned in the description: 910 and 930 shown in Figure 9, and reference 1000 shown in Figure 10. Corrected drawing sheets in compliance with 37 CFR 1.121(d), or amendment to the specification to add the reference character(s) in the description in compliance with 37 CFR 1.121(b) are required in reply to the Office action to avoid abandonment of the application. Any amended replacement drawing sheet should include all of the figures appearing on the immediate prior version of the sheet, even if only one figure is being amended. Each drawing sheet submitted after the filing date of an application must be labeled in the top margin as either “Replacement Sheet” or “New Sheet” pursuant to 37 CFR 1.121(d). If the changes are not accepted by the examiner, the applicant will be notified and informed of any required corrective action in the next Office action. The objection to the drawings will not be held in abeyance.
Status of Claims
Claims 1-7, 9-12, 15-16, 21-24, 31-32, and 35 are pending. Claims 8, 13-14, 17-20, 33-34, and 36-40 are cancelled.
Claim Rejections - 35 USC § 102
The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:
A person shall be entitled to a patent unless –
(a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.
Claims 1-2, 4-5, and 21-22 are rejected under 35 U.S.C. 102(a)(1) as being anticipated by Hu et al. (US 2021/0350591 A1).
Regarding claim 1, Hu teaches “A method implemented on at least one machine (Hu paragraph [0004] "a computer-implemented method is disclosed") each of which has at least one processor and at least one storage device (Hu paragraph [0038] "The system 30 is a representative device and can include a processor subsystem 72, an input/output subsystem 74, a memory subsystem 76, a communications interface 78, and a system bus 80") for motion correction (Hu paragraph [0035] "whole-body motion field calculated from deformable registration is used in a direct parametric reconstruction to calculate one or more parameters for generating parametric diagnostic images") of a positron emission computed tomography (PET) scanned image (Hu paragraph [0036] "The first imaging modality 12 may include any suitable modality, such as, for example, a computed-tomography (CT) modality, a positron-emission tomography (PET) modality, a single-photon emission computerized tomography (SPECT) modality, etc"), comprising:
obtaining scanned images of a scanned object generated at a plurality of time points (Hu paragraph [0056] "At step 202, a set of nuclear imaging data is received. The set of nuclear imaging data may include any suitable nuclear imaging data, such as, for example, list mode nuclear imaging data generated by one or more imaging modalities 12, 14. The list mode nuclear imaging data may include a set of frames and may include continuous-bed motion (CBM) sinograms, multi-bed sinograms, or single-bed sinograms"); and
determining a parametric image by performing a correction processing on the scanned images, wherein the correction processing is configured to correct an influence of a motion of the scanned object on the scanned images (Hu paragraph [0035] "whole-body motion field calculated from deformable registration is used in a direct parametric reconstruction to calculate one or more parameters for generating parametric diagnostic images […] One or more parametric images are generated for diagnostic and/or clinical purposes").”
Regarding claim 2, Hu teaches “The method of claim 1, wherein the determining a parametric image by performing a correction processing on the scanned images includes: determining an initial time-activity curve of at least one voxel of the scanned images based on the scanned images (Hu Figure 6 and paragraph [0065] "At step 304, the blood input function, Cp(t) and an integral of the blood input function, ∫Cp(t), is calculated for each frame or axial slice in the acquired data. Each frame specific blood input function, Cp(t), is represented as a curve over time");
Hu Figure 6
determining a corrected time-activity curve based on the scanned images (Hu paragraph [0075] "at step 314, the image parameters, e.g., Ki, DV, slope, intercept, etc., are updated based on a linear/nonlinear fit of the updated emission frame images calculated at step 312") or the initial time-activity curve (Hu paragraph [0067] "An iterative loop 308 is applied to each voxel in an image independently. For each voxel, at step 310, a frame emission image is calculated using a predetermined equation based on the initialized parameters (or parameters from a prior iteration as discussed below), such as, for example, a Patlak equation using Ki and DV. In some embodiments, the target parameters, e.g., Ki and DV, are calculated based on the activity, x(t) at each voxel, where x(t) = Ki·∫₀ᵗ CP(s) ds + DV·CP(s), where DV and Ki are parameters in a voxel space and CP is the blood input function"); and
determining the parametric image based on the corrected time-activity curve (Hu paragraph [0077] "At step 320, the calculated parameters for each parametric image, e.g., the Ki and DV image volumes, are output. The output parameters may be used to construct a set of parametric diagnostic images for use in diagnostic, treatment planning, and/or clinical activities" and paragraph [0079] "FIG. 11A illustrates a metabolic uptake rate (Ki) sagittal image 508a reconstructed without motion correction and FIG.11B illustrates a metabolic uptake rate (Ki) sagittal image 508b reconstructed using a whole-body motion field parametric reconstruction process, such as the process illustrated in FIG. 5").”
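For illustration only (this sketch is not part of the Hu disclosure; the function name `patlak_fit` and the synthetic input function are assumptions): the Patlak relationship cited from Hu paragraph [0067] is linear in the parameters Ki and DV once the blood input function Cp(t) and its running integral are known, so a per-voxel fit can be sketched as an ordinary least-squares problem.

```python
# Hypothetical sketch of a per-voxel Patlak fit, assuming a sampled blood
# input function Cp(t). Not taken from the Hu reference, which performs the
# fit inside a direct parametric reconstruction rather than post hoc.
import numpy as np

def patlak_fit(tac, cp, t):
    """Fit x(t) = Ki * integral_0^t Cp(s) ds + DV * Cp(t) for one voxel.

    tac : measured time-activity curve of the voxel (one value per frame)
    cp  : blood input function Cp(t) sampled at the same time points
    t   : frame time points
    """
    # Running (trapezoidal) integral of the input function, per Hu [0065].
    int_cp = np.concatenate(([0.0], np.cumsum(0.5 * (cp[1:] + cp[:-1]) * np.diff(t))))
    # Linear model: x(t) = Ki * intCp(t) + DV * Cp(t).
    design = np.column_stack([int_cp, cp])
    ki, dv = np.linalg.lstsq(design, tac, rcond=None)[0]
    return ki, dv

# Synthetic check: build a TAC from known parameters and recover them.
t = np.linspace(0.0, 60.0, 25)
cp = t * np.exp(-t / 10.0)  # toy input function, arbitrary shape
int_cp = np.concatenate(([0.0], np.cumsum(0.5 * (cp[1:] + cp[:-1]) * np.diff(t))))
tac = 0.05 * int_cp + 0.3 * cp  # true Ki = 0.05, true DV = 0.3
ki, dv = patlak_fit(tac, cp, t)
```

Under these noise-free assumptions the least-squares fit recovers Ki and DV exactly; in the Hu process the same parameters are instead updated iteratively within the reconstruction loop.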
Regarding claim 4, Hu teaches “The method of claim 1, wherein the determining a parametric image by performing a correction processing on the scanned images includes: determining an initial time-activity curve of at least one voxel of the scanned images based on the scanned images (Hu Figure 6 and paragraph [0065] "At step 304, the blood input function, Cp(t) and an integral of the blood input function, ∫Cp(t), is calculated for each frame or axial slice in the acquired data. Each frame specific blood input function, Cp(t), is represented as a curve over time"); and
determining the parametric image by inputting an input function (Hu paragraph [0067] "An iterative loop 308 is applied to each voxel in an image independently. For each voxel, at step 310, a frame emission image is calculated using a predetermined equation based on the initialized parameters (or parameters from a prior iteration as discussed below), such as, for example, a Patlak equation using Ki and DV. In some embodiments, the target parameters, e.g., Ki and DV, are calculated based on the activity, x(t) at each voxel, where x(t) = Ki·∫₀ᵗ CP(s) ds + DV·CP(s), where DV and Ki are parameters in a voxel space and CP is the blood input function") and the initial time-activity curve (Hu Figure 6 and paragraph [0065] "At step 304, the blood input function, Cp(t) and an integral of the blood input function, ∫Cp(t), is calculated for each frame or axial slice in the acquired data. Each frame specific blood input function, Cp(t), is represented as a curve over time") into a machine learning model (Hu paragraph [0073] "At step 410, the forward-warped correction image (i.e., the voxel values of the forward-warped correction image) is divided by forward-warped normalization factors. The normalization factors may be scanner specific and may be determined using one or more iterative processes, such as, for example, by applying an iterative machine-learning process").”
Regarding claim 5, Hu teaches “The method of claim 4, wherein the determining the parametric image by inputting an input function and the initial time-activity curve into a machine learning model includes: determining kinetic parameters (Hu paragraph [0075] "at step 314, the image parameters, e.g., Ki, DV, slope, intercept, etc., are updated based on a linear/nonlinear fit of the updated emission frame images calculated at step 312. A linear/nonlinear fit may be performed between each of the updated emission frame images, between each of the updated emission frame images and a reference image, and/or between any other suitable set of emission frame images. The linear/nonlinear fit can be an iterative process") by inputting the input function (Hu paragraph [0067] "An iterative loop 308 is applied to each voxel in an image independently. For each voxel, at step 310, a frame emission image is calculated using a predetermined equation based on the initialized parameters (or parameters from a prior iteration as discussed below), such as, for example, a Patlak equation using Ki and DV. In some embodiments, the target parameters, e.g., Ki and DV, are calculated based on the activity, x(t) at each voxel, where x(t) = Ki·∫₀ᵗ CP(s) ds + DV·CP(s), where DV and Ki are parameters in a voxel space and CP is the blood input function") and the initial time-activity curve (Hu Figure 6 and paragraph [0065] "At step 304, the blood input function, Cp(t) and an integral of the blood input function, ∫Cp(t), is calculated for each frame or axial slice in the acquired data. Each frame specific blood input function, Cp(t), is represented as a curve over time") into the machine learning model (Hu paragraph [0073] "At step 410, the forward-warped correction image (i.e., the voxel values of the forward-warped correction image) is divided by forward-warped normalization factors. The normalization factors may be scanner specific and may be determined using one or more iterative processes, such as, for example, by applying an iterative machine-learning process"); and
determining the parametric image based on the kinetic parameters (Hu paragraph [0077] "At step 320, the calculated parameters for each parametric image, e.g., the Ki and DV image volumes, are output. The output parameters may be used to construct a set of parametric diagnostic images for use in diagnostic, treatment planning, and/or clinical activities" and paragraph [0079] "FIG. 11A illustrates a metabolic uptake rate (Ki) sagittal image 508a reconstructed without motion correction and FIG.11B illustrates a metabolic uptake rate (Ki) sagittal image 508b reconstructed using a whole-body motion field parametric reconstruction process, such as the process illustrated in FIG. 5").”
Regarding claim 21, Hu teaches “A method implemented on at least one machine (Hu paragraph [0004] "a computer-implemented method is disclosed") each of which has at least one processor and at least one storage device (Hu paragraph [0038] "The system 30 is a representative device and can include a processor subsystem 72, an input/output subsystem 74, a memory subsystem 76, a communications interface 78, and a system bus 80") for motion correction (Hu paragraph [0035] "whole-body motion field calculated from deformable registration is used in a direct parametric reconstruction to calculate one or more parameters for generating parametric diagnostic images") of a positron emission computed tomography (PET) image (Hu paragraph [0036] "The first imaging modality 12 may include any suitable modality, such as, for example, a computed-tomography (CT) modality, a positron-emission tomography (PET) modality, a single-photon emission computerized tomography (SPECT) modality, etc"), comprising:
determining an initial time-activity curve of at least one voxel of scanned images based on the scanned images (Hu Figure 6 and paragraph [0065] "At step 304, the blood input function, Cp(t) and an integral of the blood input function, ∫Cp(t), is calculated for each frame or axial slice in the acquired data. Each frame specific blood input function, Cp(t), is represented as a curve over time"); and
Hu Figure 6
at least one of: determining a corrected time-activity curve based on the scanned images (Hu paragraph [0075] "at step 314, the image parameters, e.g., Ki, DV, slope, intercept, etc., are updated based on a linear/nonlinear fit of the updated emission frame images calculated at step 312") or the initial time-activity curve, and determining a parametric image based on the corrected time-activity curve (Hu paragraph [0067] "An iterative loop 308 is applied to each voxel in an image independently. For each voxel, at step 310, a frame emission image is calculated using a predetermined equation based on the initialized parameters (or parameters from a prior iteration as discussed below), such as, for example, a Patlak equation using Ki and DV. In some embodiments, the target parameters, e.g., Ki and DV, are calculated based on the activity, x(t) at each voxel, where x(t) = Ki·∫₀ᵗ CP(s) ds + DV·CP(s), where DV and Ki are parameters in a voxel space and CP is the blood input function"); or
determining the parametric image by inputting an input function (Hu paragraph [0067] "An iterative loop 308 is applied to each voxel in an image independently. For each voxel, at step 310, a frame emission image is calculated using a predetermined equation based on the initialized parameters (or parameters from a prior iteration as discussed below), such as, for example, a Patlak equation using Ki and DV. In some embodiments, the target parameters, e.g., Ki and DV, are calculated based on the activity, x(t) at each voxel, where x(t) = Ki·∫₀ᵗ CP(s) ds + DV·CP(s), where DV and Ki are parameters in a voxel space and CP is the blood input function") and the initial time-activity curve (Hu Figure 6 and paragraph [0065] "At step 304, the blood input function, Cp(t) and an integral of the blood input function, ∫Cp(t), is calculated for each frame or axial slice in the acquired data. Each frame specific blood input function, Cp(t), is represented as a curve over time") into a machine learning model (Hu paragraph [0073] "At step 410, the forward-warped correction image (i.e., the voxel values of the forward-warped correction image) is divided by forward-warped normalization factors. The normalization factors may be scanner specific and may be determined using one or more iterative processes, such as, for example, by applying an iterative machine-learning process").”
Regarding claim 22, Hu teaches “The method of claim 21, wherein the determining the parametric image by inputting an input function and the initial time-activity curve into a machine learning model includes: determining kinetic parameters (Hu paragraph [0075] "at step 314, the image parameters, e.g., Ki, DV, slope, intercept, etc., are updated based on a linear/nonlinear fit of the updated emission frame images calculated at step 312. A linear/nonlinear fit may be performed between each of the updated emission frame images, between each of the updated emission frame images and a reference image, and/or between any other suitable set of emission frame images. The linear/nonlinear fit can be an iterative process") by inputting the input function (Hu paragraph [0067] "An iterative loop 308 is applied to each voxel in an image independently. For each voxel, at step 310, a frame emission image is calculated using a predetermined equation based on the initialized parameters (or parameters from a prior iteration as discussed below), such as, for example, a Patlak equation using Ki and DV. In some embodiments, the target parameters, e.g., Ki and DV, are calculated based on the activity, x(t) at each voxel, where x(t) = Ki·∫₀ᵗ CP(s) ds + DV·CP(s), where DV and Ki are parameters in a voxel space and CP is the blood input function") and the initial time-activity curve (Hu Figure 6 and paragraph [0065] "At step 304, the blood input function, Cp(t) and an integral of the blood input function, ∫Cp(t), is calculated for each frame or axial slice in the acquired data. Each frame specific blood input function, Cp(t), is represented as a curve over time") into the machine learning model (Hu paragraph [0073] "At step 410, the forward-warped correction image (i.e., the voxel values of the forward-warped correction image) is divided by forward-warped normalization factors. The normalization factors may be scanner specific and may be determined using one or more iterative processes, such as, for example, by applying an iterative machine-learning process"); and
determining the parametric image based on the kinetic parameters (Hu paragraph [0077] "At step 320, the calculated parameters for each parametric image, e.g., the Ki and DV image volumes, are output. The output parameters may be used to construct a set of parametric diagnostic images for use in diagnostic, treatment planning, and/or clinical activities" and paragraph [0079] "FIG. 11A illustrates a metabolic uptake rate (Ki) sagittal image 508a reconstructed without motion correction and FIG.11B illustrates a metabolic uptake rate (Ki) sagittal image 508b reconstructed using a whole-body motion field parametric reconstruction process, such as the process illustrated in FIG. 5").”
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.
This application currently names joint inventors. In considering patentability of the claims the examiner presumes that the subject matter of the various claims was commonly owned as of the effective filing date of the claimed invention(s) absent any evidence to the contrary. Applicant is advised of the obligation under 37 CFR 1.56 to point out the inventor and effective filing dates of each claim that was not commonly owned as of the effective filing date of the later invention in order for the examiner to consider the applicability of 35 U.S.C. 102(b)(2)(C) for any potential 35 U.S.C. 102(a)(2) prior art against the later invention.
Claims 3, 6-7, 9, and 23-24 are rejected under 35 U.S.C. 103 as being unpatentable over Hu, in view of Shanbhag et al. (US 2017/0042496 A1).
Regarding claim 3 (similarly claim 23), Hu teaches “The method of claim 2, wherein the determining a corrected time-activity curve based on the scanned images or the initial time-activity curve includes:
(Hu paragraph [0075] "at step 314, the image parameters, e.g., Ki, DV, slope, intercept, etc., are updated based on a linear/nonlinear fit of the updated emission frame images calculated at step 312") using a machine learning model (Hu paragraph [0073] "At step 410, the forward-warped correction image (i.e., the voxel values of the forward-warped correction image) is divided by forward-warped normalization factors. The normalization factors may be scanner specific and may be determined using one or more iterative processes, such as, for example, by applying an iterative machine-learning process").”
However, Hu does not explicitly teach “determining a target region based on a target voxel of the scanned images; and determining, based on the scanned images or the initial time-activity curve of the at least one voxel in the target region”.
Shanbhag teaches “determining a target region (Shanbhag paragraph [0072] "in the method of FIG. 3, regions of interest 302 corresponding to the processed images 208 may be obtained") based on a target voxel of the scanned images (Shanbhag paragraph [0073] "at step 304, signal characteristics or signal intensity data corresponding to each valid voxel in a given ROI 302 across time may be obtained"); and
determining, based on the scanned images or the initial time-activity curve of
the at least one voxel in the target region (Shanbhag paragraph [0073] "at step 304, signal characteristics or signal intensity data corresponding to each valid voxel in a given ROI 302 across time may be obtained")”.
It would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention of the instant application to combine the method for motion correction and parametric image reconstruction of a PET image as taught by Hu with the method for evaluation of motion correction as taught by Shanbhag.
The suggestion/motivation for doing so would have been that “Currently, the evaluation of the efficacy of motion correction is typically performed visually. Some techniques for the evaluation of motion correction entails comparing time series data corresponding to a region of interest (ROI), while certain other techniques call for an assessment of the degree of dispersion of time-series data corresponding to a given ROI. Furthermore, one or more structures in the ROI may be observed to evaluate the motion correction. In addition, the motion correction may be evaluated via use of difference images. However, use of difference images is unsuitable for quantifying any improvement due to motion correction since contrast related signal changes can confound motion related changes. Therefore, the evaluation of motion correction using difference images is at best qualitative in nature and hinders comparison of motion correction efficacy across different sites or vendors” as noted by the Shanbhag disclosure in paragraph 8.
Therefore, it would have been obvious to combine the disclosure of Hu with
the Shanbhag disclosure to obtain the invention as specified in claim 3 as there is a
reasonable expectation of success and/or because doing so merely combines prior art
elements according to known methods to yield predictable results.
Regarding claim 6 (similarly claim 24), the combination of Hu and Shanbhag teaches “The method of claim 4, wherein the determining the parametric image by inputting an input function and the initial time-activity curve into a machine learning model includes: determining a target region (Shanbhag paragraph [0072] "in the method of FIG. 3, regions of interest 302 corresponding to the processed images 208 may be obtained") based on a target voxel of the scanned images (Shanbhag paragraph [0073] "at step 304, signal characteristics or signal intensity data corresponding to each valid voxel in a given ROI 302 across time may be obtained"); and
determining, based on the input function (Hu paragraph [0067] "An iterative loop 308 is applied to each voxel in an image independently. For each voxel, at step 310, a frame emission image is calculated using a predetermined equation based on the initialized parameters (or parameters from a prior iteration as discussed below), such as, for example, a Patlak equation using Ki and DV. In some embodiments, the target parameters, e.g., Ki and DV, are calculated based on the activity, x(t) at each voxel, where x(t) = Ki·∫₀ᵗ CP(s) ds + DV·CP(s), where DV and Ki are parameters in a voxel space and CP is the blood input function") and the initial time-activity curve of the at least one voxel in the target region (Hu Figure 6 and paragraph [0065] "At step 304, the blood input function, Cp(t) and an integral of the blood input function, ∫Cp(t), is calculated for each frame or axial slice in the acquired data. Each frame specific blood input function, Cp(t), is represented as a curve over time"), kinetic parameters (Hu paragraph [0075] "at step 314, the image parameters, e.g., Ki, DV, slope, intercept, etc., are updated based on a linear/nonlinear fit of the updated emission frame images calculated at step 312") and/or the parametric image of the target voxel using the machine learning model (Hu paragraph [0073] "At step 410, the forward-warped correction image (i.e., the voxel values of the forward-warped correction image) is divided by forward-warped normalization factors. The normalization factors may be scanner specific and may be determined using one or more iterative processes, such as, for example, by applying an iterative machine-learning process").”
The proposed combination as well as the motivation for combining the Hu and Shanbhag references presented in the rejection of claim 3 applies to claim 6. Finally, the method recited in claim 6 is met by Hu and Shanbhag.
Regarding claim 7, the combination of Hu and Shanbhag teaches “The method of claim 3, wherein the target region includes the target voxel at a central position of the target region and adjacent voxels of the target voxel (Shanbhag paragraph [0073] "at step 304, signal characteristics or signal intensity data corresponding to each valid voxel in a given ROI 302 across time may be obtained. In particular, for each valid voxel, time-series signal curves or characteristics that correspond to valid voxels from a determined neighborhood may be accumulated. In one example, the determined neighborhood may include a 3x3x3 neighborhood that surrounds a given valid voxel").”
The proposed combination as well as the motivation for combining the Hu and Shanbhag references presented in the rejection of claim 3 applies to claim 7. Finally, the method recited in claim 7 is met by Hu and Shanbhag.
Regarding claim 9, the combination of Hu and Shanbhag teaches “The method of claim 1, further including: obtaining motion information of the scanned object at the plurality of time points (Hu paragraph [0056] "At step 202, a set of nuclear imaging data is received. The set of nuclear imaging data may include any suitable nuclear imaging data, such as, for example, list mode nuclear imaging data generated by one or more imaging modalities 12, 14. The list mode nuclear imaging data may include a set of frames and may include continuous-bed motion (CBM) sinograms, multi-bed sinograms, or single-bed sinograms"); and
determining a quality evaluation result by performing a quality evaluation on the motion information (Shanbhag paragraph [0082] "At step 410, if it is verified that the LDM value is lower than the threshold dispersion value, then it may be inferred that the motion correction is of "good" quality. However, at step 410, if it is determined that the LDM value is greater than the threshold dispersion value, then it may be deduced that the quality of motion correction is "poor."").”
The proposed combination as well as the motivation for combining the Hu and Shanbhag references presented in the rejection of claim 3 applies to claim 9. Finally, the method recited in claim 9 is met by Hu and Shanbhag.
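For illustration only (the helper names, toy data, and threshold value are assumptions, not from the Shanbhag disclosure): the quality evaluation cited from Shanbhag paragraph [0082] reduces to comparing a local dispersion metric (LDM), computed over the time-activity curves of a voxel neighborhood, against a threshold.

```python
# Hypothetical sketch of an LDM-style quality check over a neighborhood of
# time-activity curves, in the spirit of Shanbhag paragraph [0082].
import numpy as np

def local_dispersion(neighborhood_tacs):
    """Mean across-voxel standard deviation at each time point.

    Low dispersion suggests the neighborhood curves agree after correction.
    """
    return float(np.mean(np.std(neighborhood_tacs, axis=0)))

def evaluate_motion_correction(ldm_value, threshold):
    """Below the threshold dispersion value -> "good", otherwise -> "poor"."""
    return "good" if ldm_value < threshold else "poor"

# Toy neighborhood: three nearly identical TACs (well corrected) versus
# three scattered TACs (residual motion). Rows are voxels, columns frames.
aligned = np.array([[1.0, 2.0, 3.0], [1.1, 2.1, 3.1], [0.9, 1.9, 2.9]])
scattered = np.array([[1.0, 2.0, 3.0], [3.0, 1.0, 2.0], [2.0, 3.0, 1.0]])
good = evaluate_motion_correction(local_dispersion(aligned), threshold=0.2)
poor = evaluate_motion_correction(local_dispersion(scattered), threshold=0.2)
```

The specific dispersion formula is a stand-in; Shanbhag's LDM may be defined differently, but the thresholding logic of paragraph [0082] is as shown.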
Claims 10-11, 15-16, and 31 are rejected under 35 U.S.C. 103 as being unpatentable over Hu and Shanbhag, in view of Woo et al. ("Development of Event-based Motion Correction Technique for PET Study Using List-mode Acquisition and Optical Motion Tracking System," published 2003).
Regarding claim 10, the combination of Hu and Shanbhag teaches “The method of claim 9, wherein the determining a quality evaluation result by performing a quality evaluation on the motion information includes:
determining the quality evaluation result based on the position information to be evaluated at the each time point (Shanbhag paragraph [0082] "At step 410, if it is verified that the LDM value is lower than the threshold dispersion value, then it may be inferred that the motion correction is of "good" quality. However, at step 410, if it is determined that the LDM value is greater than the threshold dispersion value, then it may be deduced that the quality of motion correction is "poor."").”
However, the combination of Hu and Shanbhag does not explicitly teach “determining position information to be evaluated at each time point of the plurality of time points by processing, based on the motion information, a line of response of the scanned object”.
Woo teaches “determining position information to be evaluated at each time point of the plurality of time points by processing, based on the motion information (Woo page 2 paragraph 3 "In order to track the head motion, we used a commercially available optical tracking system (POLARIS) to monitor head motion. The POLARIS system has two digital cameras as shown in Fig. 1(a), and can detect locations and orientations (six degrees of freedom, 6DOF) of multiple targets"), a line of response of the scanned object (Woo page 2 paragraph 1 "We used a list-mode for PET data acquisition. In list-mode acquisition, every pair of annihilation photons detected by the PET detector is recorded. Every line of response (LOR) has information of the detector pair and the time of detection")”.
It would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention of the instant application to combine the method for motion correction, parametric image reconstruction, and quality evaluation of a PET image as taught by Hu and Shanbhag with the two-step method of motion correction as taught by Woo.
The suggestion/motivation for doing so would have been that “Compared to the conventional frame-mode, list-mode data acquisition has the advantages of higher data storage efficiency, higher temporal resolution and higher data manipulation flexibility. In this paper, we present a motion correction technique directly to correct the head motion in event-by-event base during PET scanning using the list-mode acquisition and optical motion tracking system” as noted by the Woo disclosure on page 2 paragraph 1.
Therefore, it would have been obvious to combine the disclosure of Hu and Shanbhag with the Woo disclosure to obtain the invention as specified in claim 10 as there is a reasonable expectation of success and/or because doing so merely combines prior art elements according to known methods to yield predictable results.
Regarding claim 11, the combination of Hu, Shanbhag, and Woo teaches “The method of claim 10, wherein the determining position information to be evaluated at each time point of the plurality of time points includes: relocating the line of response based on the motion information at the each time point to obtain a relocated line of response (Woo Figure 3 and page 3 paragraph 1 "Acquired list-mode event (indicates location of a pair of two detectors) data was realigned to a new location using the same time of 6DOF POLARIS motion data"); and
[media_image3.png, greyscale: Woo Figure 3]
generating the position information to be evaluated (Shanbhag paragraph [0082] "At step 410, if it is verified that the LDM value is lower than the threshold dispersion value, then it may be inferred that the motion correction is of "good" quality. However, at step 410, if it is determined that the LDM value is greater than the threshold dispersion value, then it may be deduced that the quality of motion correction is "poor."") according to the relocated line of response (Woo Figure 3 and page 3 paragraph 1 "Acquired list-mode event (indicates location of a pair of two detectors) data was realigned to a new location using the same time of 6DOF POLARIS motion data").”
The proposed combination, as well as the motivation for combining the Hu, Shanbhag, and Woo references, presented in the rejection of claim 10 applies to claim 11. Finally, the method recited in claim 11 is met by Hu, Shanbhag, and Woo.
Regarding claim 15, the combination of Hu, Shanbhag, and Woo teaches “The method of claim 11, wherein the position information to be evaluated includes a second index, the second index being an angle change of the target region at the each time point relative to an initial position, the initial position being determined based on a reference frame of the scanned images (Woo page 3 paragraph 2 "The generated Michelogram data were then directly modified according to the POLARIS motion data [9]. Three dimensional location of the two detectors can be expressed as follows for each LOR, D⃗_A and D⃗_B [...] where D_A and D_B are detector numbers in the acquired event data, s is the distance between the LOR and the center of the tomograph, R is the inner radius of the tomograph (27.7 cm), and ∅ is the azimuth of the LOR").”
The proposed combination, as well as the motivation for combining the Hu, Shanbhag, and Woo references, presented in the rejection of claim 10 applies to claim 15. Finally, the method recited in claim 15 is met by Hu, Shanbhag, and Woo.
Regarding claim 16, the combination of Hu, Shanbhag, and Woo teaches “The method of claim 10, wherein the determining the quality evaluation result based on the position information to be evaluated at the each time point includes: determining an evaluation index at the each time point based on the position information to be evaluated at the each time point (Shanbhag paragraph [0081] "lower value of the LDM is generally indicative of better alignment of the signal data at different time points in the given ROI, while a higher value of the LDM is generally indicative of poor alignment of the signal data at different time points in the given ROI"); and
determining one or more abnormal features based on a difference between the evaluation indexes at adjacent time points, wherein the one or more abnormal features (Shanbhag paragraph [0081] "lower value of the LDM is generally indicative of better alignment of the signal data at different time points in the given ROI, while a higher value of the LDM is generally indicative of poor alignment of the signal data at different time points in the given ROI") reflect the quality evaluation result (Shanbhag paragraph [0082] "At step 410, if it is verified that the LDM value is lower than the threshold dispersion value, then it may be inferred that the motion correction is of "good" quality. However, at step 410, if it is determined that the LDM value is greater than the threshold dispersion value, then it may be deduced that the quality of motion correction is "poor."").”
Regarding claim 31, the combination of Hu, Shanbhag, and Woo teaches “A method (Hu paragraph [0004] "a computer-implemented method is disclosed") implemented on at least one machine each of which has at least one processor and at least one storage device (Hu paragraph [0038] "The system 30 is a representative device and can include a processor subsystem 72, an input/output subsystem 74, a memory subsystem 76, a communications interface 78, and a system bus 80") for quality evaluation of motion information (Shanbhag paragraph [0022] “Systems and methods for the automated evaluation and/or quantification of motion correction presented hereinafter enhance clinical workflow by robustly evaluating the efficacy of the motion correction”) during a positron emission computed tomography (PET) scan (Hu paragraph [0036] "The first imaging modality 12 may include any suitable modality, such as, for example, a computed-tomography (CT) modality, a positron-emission tomography (PET) modality, a single-photon emission computerized tomography (SPECT) modality, etc"), comprising:
collecting, based on a preset sampling frequency (Woo page 2 paragraph 3 "This system has an RS-232C interface, and can measure motions up to 20 Hz"), list mode data (Woo page 2 paragraph 1 "We used a list-mode for PET data acquisition. In list-mode acquisition, every pair of annihilation photons detected by the PET detector is recorded. Every line of response (LOR) has information of the detector pair and the time of detection") and motion information of a region of interest (ROI) of a scanned object during the PET scan (Woo page 2 paragraph 3 "In order to track the head motion, we used a commercially available optical tracking system (POLARIS) to monitor head motion. The POLARIS system has two digital cameras as shown in Fig. 1(a), and can detect locations and orientations (six degrees of freedom, 6DOF) of multiple targets");
relocating a line of response in the list mode data based on the motion information at each sampling time point to obtain a relocated line of response (Woo Figure 3 and page 3 paragraph 1 "Acquired list-mode event (indicates location of a pair of two detectors) data was realigned to a new location using the same time of 6DOF POLARIS motion data");
generating position information to be evaluated (Shanbhag paragraph [0082] "At step 410, if it is verified that the LDM value is lower than the threshold dispersion value, then it may be inferred that the motion correction is of "good" quality. However, at step 410, if it is determined that the LDM value is greater than the threshold dispersion value, then it may be deduced that the quality of motion correction is "poor."") of the ROI at the each sampling time point according to the relocated line of response (Woo Figure 3 and page 3 paragraph 1 "Acquired list-mode event (indicates location of a pair of two detectors) data was realigned to a new location using the same time of 6DOF POLARIS motion data"); and
generating a quality evaluation result of the motion information according to a preset quality evaluation index (Shanbhag paragraph [0081] "lower value of the LDM is generally indicative of better alignment of the signal data at different time points in the given ROI, while a higher value of the LDM is generally indicative of poor alignment of the signal data at different time points in the given ROI") and the position information to be evaluated within a preset sampling time period (Woo Figure 3 and page 3 paragraph 1 "Acquired list-mode event (indicates location of a pair of two detectors) data was realigned to a new location using the same time of 6DOF POLARIS motion data").”
The proposed combination, as well as the motivation for combining the Hu, Shanbhag, and Woo references, presented in the rejection of claim 10 applies to claim 31. Finally, the method recited in claim 31 is met by Hu, Shanbhag, and Woo.
Claim 12 is rejected under 35 U.S.C. 103 as being unpatentable over Hu, Shanbhag, and Woo, in view of Herraiz et al. ("Sensitivity estimation in time-of-flight list-mode positron emission tomography" - Published 2015).
Regarding claim 12, the combination of Hu, Shanbhag, and Woo teaches “The method of claim 11, wherein the generating the position information to be evaluated according to the relocated line of response includes: determining back-projection data by performing, according to the relocated line of response, a back-projection on the target region in one of the scanned images at the each time point (Woo page 4 paragraph 2 "For image reconstruction, we used the filtered back-projection (FBP) algorithm with a 6.0 mm Gaussian filter. Reconstructed image size was 128×128 pixels (0.7 mm for each pixel) in plane and 47 slices"); and generating the position information to be evaluated (Shanbhag paragraph [0082] "At step 410, if it is verified that the LDM value is lower than the threshold dispersion value, then it may be inferred that the motion correction is of "good" quality. However, at step 410, if it is determined that the LDM value is greater than the threshold dispersion value, then it may be deduced that the quality of motion correction is "poor."") according to the corrected data (Woo Figure 3 and page 3 paragraph 1 "Acquired list-mode event (indicates location of a pair of two detectors) data was realigned to a new location using the same time of 6DOF POLARIS motion data").
However, the combination of Hu, Shanbhag, and Woo does not teach “determining corrected data by performing a sensitivity correction on the back-projection data”.
Herraiz teaches “determining corrected data by performing a sensitivity correction on the back-projection data (Herraiz page 3 right-hand column paragraph 5 and page 4 left-hand column paragraphs 1 and 2 "1. For each LOR for which g_k > 0, compute the projection Σ_{i=0}^{T} a_{ki} c_i / ε̄_i. This projection uses the same system matrix employed for the estimation of C_i, and it does not contain normalization or attenuation factors. Note that in this case, we do not make use of the TOF information, as we compute the sum over all the TOF bins of the LOR k. 2. Find an estimate of the factor a_k using Eq. (9). 3. Create a new estimation of ε̄ by backward projecting the values of a_k [Eq. (10)]. The backward projection is again performed without TOF. 4. Go back to step 1. Repeat R times. After R iterations of this algorithm, we end up with an estimator of the sensitivity ε̄ (to the accuracy of multiplicative constant). Then, with the new sensitivity estimate, we can continue with the image reconstruction, performing new iterations for obtaining a better estimation of the C_i's using the origin ensemble (OE) algorithm [30]. After that, another R iterations of the proposed algorithm for sensitivity estimation can be performed. Therefore, the proposed method consists on alternating reconstructions of the image C_i for a fixed sensitivity ε̄ and vice versa. Note that steps 2 and 3 can be performed together [using Eq. (10)], and no explicit computation of a_k is actually needed with this algorithm")”.
It would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention of the instant application to modify the method for motion correction, parametric image reconstruction of a PET image, and quality evaluation of the motion correction as taught by Hu, Shanbhag, and Woo to include the sensitivity correction as taught by Herraiz.
The suggestion/motivation for doing so would have been that “Positron emission tomography (PET) is a molecular imaging technique that provides quantitative information of the biodistribution of an administered radiotracer in the subject under study. This quantitative information is relevant in many cases [1-3], and it is one of the main advantages of PET over other imaging techniques. In order to obtain this quantitative information, some calibrations and corrections are required [4], being the sensitivity correction the most significant one in most cases. The sensitivity of a particular voxel in a PET scanner is defined as the probability that a positron emitted in that voxel is finally detected as a coincidence in a pair of detectors in the scanner” as noted by the Herraiz disclosure on page 1 paragraph 1.
Therefore, it would have been obvious to combine the disclosure of Hu, Shanbhag, and Woo with the Herraiz disclosure to obtain the invention as specified in claim 12 as there is a reasonable expectation of success and/or because doing so merely combines prior art elements according to known methods to yield predictable results.
Claims 32 and 35 are rejected under 35 U.S.C. 103 as being unpatentable over Hu, Shanbhag, Woo, and Herraiz, in view of Miyajima et al. (CN113194834A - Translation from Espacenet).
Regarding claim 32, the combination of Hu, Shanbhag, Woo, and Herraiz teaches “The method of claim 31, wherein the position information to be evaluated includes (Woo page 3 paragraph 2 "The generated Michelogram data were then directly modified according to the POLARIS motion data [9]. Three dimensional location of the two detectors can be expressed as follows for each LOR, D⃗_A and D⃗_B [...] where D_A and D_B are detector numbers in the acquired event data, s is the distance between the LOR and the center of the tomograph, R is the inner radius of the tomograph (27.7 cm), and ∅ is the azimuth of the LOR") of the ROI (Woo page 2 paragraph 3 "In order to track the head motion, we used a commercially available optical tracking system (POLARIS) to monitor head motion"); and
before obtaining the (Woo page 3 paragraph 2 "The generated Michelogram data were then directly modified according to the POLARIS motion data [9]. Three dimensional location of the two detectors can be expressed as follows for each LOR, D⃗_A and D⃗_B [...] where D_A and D_B are detector numbers in the acquired event data, s is the distance between the LOR and the center of the tomograph, R is the inner radius of the tomograph (27.7 cm), and ∅ is the azimuth of the LOR"), the method further includes:
performing, in the ROI, a back-projection on each relocated line of response at the each sampling time point to obtain back-projection points (Woo page 4 paragraph 2 "For image reconstruction, we used the filtered back-projection (FBP) algorithm with a 6.0 mm Gaussian filter. Reconstructed image size was 128×128 pixels (0.7 mm for each pixel) in plane and 47 slices");
performing a sensitivity correction on each of the back-projection points to obtain corrected back-projection points (Herraiz page 3 right-hand column paragraph 5 and page 4 left-hand column paragraphs 1 and 2 "1. For each LOR for which g_k > 0, compute the projection Σ_{i=0}^{T} a_{ki} c_i / ε̄_i. This projection uses the same system matrix employed for the estimation of C_i, and it does not contain normalization or attenuation factors. Note that in this case, we do not make use of the TOF information, as we compute the sum over all the TOF bins of the LOR k. 2. Find an estimate of the factor a_k using Eq. (9). 3. Create a new estimation of ε̄ by backward projecting the values of a_k [Eq. (10)]. The backward projection is again performed without TOF. 4. Go back to step 1. Repeat R times. After R iterations of this algorithm, we end up with an estimator of the sensitivity ε̄ (to the accuracy of multiplicative constant). Then, with the new sensitivity estimate, we can continue with the image reconstruction, performing new iterations for obtaining a better estimation of the C_i's using the origin ensemble (OE) algorithm [30]. After that, another R iterations of the proposed algorithm for sensitivity estimation can be performed. Therefore, the proposed method consists on alternating reconstructions of the image C_i for a fixed sensitivity ε̄ and vice versa. Note that steps 2 and 3 can be performed together [using Eq. (10)], and no explicit computation of a_k is actually needed with this algorithm"); and
obtaining the (Woo page 3 paragraph 2 "The generated Michelogram data were then directly modified according to the POLARIS motion data [9]. Three dimensional location of the two detectors can be expressed as follows for each LOR, D⃗_A and D⃗_B [...] where D_A and D_B are detector numbers in the acquired event data, s is the distance between the LOR and the center of the tomograph, R is the inner radius of the tomograph (27.7 cm), and ∅ is the azimuth of the LOR").
However, the combination of Hu, Shanbhag, Woo, and Herraiz does not teach position information includes “barycentric coordinate”.
Miyajima teaches “the position information to be evaluated includes a barycentric coordinate (Miyajima paragraph [0017] "The location determination unit is characterized in that, based on the characteristics of the plurality of markers, the centroid coordinates of each of the plurality of markers included in the region of interest are determined as the respective positions of the plurality of markers in the low-resolution image")”.
It would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention of the instant application to modify the method for motion correction, parametric image reconstruction of a PET image, and quality evaluation of the motion correction as taught by Hu, Shanbhag, Woo, and Herraiz to use barycentric coordinates for the position information as taught by Miyajima. Such a modification is the result of applying a known technique to a known device ready for improvement to yield predictable results. More specifically, using barycentric coordinates for the position information used in motion correction provides more detailed position information. This known benefit is applicable to motion correction because more accurate position information translates to a more accurately motion-corrected image. Therefore, it would have been recognized that modifying the motion correction and image reconstruction of the PET image to express the position information of a target region in barycentric coordinates would have yielded predictable results because (i) the level of ordinary skill in the art demonstrated by the applied references shows the ability to incorporate a coordinate system for position information, such as barycentric coordinates, in a medical imaging scanning environment, and (ii) the benefits of such a combination would have been recognized by those of ordinary skill in the art.
Therefore, it would have been obvious to combine the disclosure of Hu, Shanbhag, Woo, and Herraiz with the Miyajima disclosure to obtain the invention as specified in claim 32 as there is a reasonable expectation of success and/or because doing so merely combines prior art elements according to known methods to yield predictable results.
Regarding claim 35, the combination of Hu, Shanbhag, Woo, Herraiz, and Miyajima teaches “The method of claim 32, wherein the generating a quality evaluation result of the motion information according to a preset quality evaluation index and the position information to be evaluated within a preset sampling time period includes:
determining a distribution stability of the position information to be evaluated according to an occurrence order of a plurality of sampling time points (Shanbhag paragraph [0081] "lower value of the LDM is generally indicative of better alignment of the signal data at different time points in the given ROI, while a higher value of the LDM is generally indicative of poor alignment of the signal data at different time points in the given ROI"); and
determining the quality evaluation result of the motion information according to a deviation between the distribution stability of the position information to be evaluated and a preset evaluation threshold (Shanbhag paragraph [0081] "lower value of the LDM is generally indicative of better alignment of the signal data at different time points in the given ROI, while a higher value of the LDM is generally indicative of poor alignment of the signal data at different time points in the given ROI") reflect the quality evaluation result (Shanbhag paragraph [0082] "At step 410, if it is verified that the LDM value is lower than the threshold dispersion value, then it may be inferred that the motion correction is of "good" quality. However, at step 410, if it is determined that the LDM value is greater than the threshold dispersion value, then it may be deduced that the quality of motion correction is "poor."").”
Conclusion
Any inquiry concerning this communication or earlier communications from the examiner should be directed to JASPREET KAUR whose telephone number is (571)272-5534. The examiner can normally be reached Monday - Friday, 7:30 am - 4:00 pm PST.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Amandeep Saini can be reached at (571)272-3382. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/JASPREET KAUR/Examiner, Art Unit 2662
/AMANDEEP SAINI/Supervisory Patent Examiner, Art Unit 2662