DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Response to Arguments
Claims 1, 10, and 19 have been amended. Claims 2, 4, 7, and 11 were previously canceled. Claims 1, 3, 5, 6, 8-10, and 12-24 are pending in this action.
Applicant’s arguments, see pg. 7-8, section “35 U.S.C. 112 Specification”, filed 17 November 2025, with respect to the rejection of claims 13 and 19-20 under 35 U.S.C. 112(a) have been fully considered and are persuasive. Specifically, regarding claim 13, the applicant argues that the claim does not require that each time-specific reconstructed PET image comprise more than one parametric image. The examiner agrees. Regarding claim 19, the applicant amended the claims to avoid the subject matter which the examiner stated was not supported by the specification in the action mailed on 26 September 2025. Claim 20 is dependent on claim 19 and was likewise corrected. The rejection of claims 13 and 19-20 under 35 U.S.C. 112(a) has been withdrawn.
Applicant’s arguments, see pg. 8 section “35 U.S.C. 112(b)”, filed 17 November 2025, with respect to the rejection of claims 19-20 under 35 U.S.C. 112(b) have been fully considered and are persuasive. Specifically, the applicant has amended claim 19 upon which claim 20 depends to clarify the matter which the examiner had previously argued was unclear. Therefore, the rejection of claims 19-20 under 35 U.S.C. 112(b) has been withdrawn.
Applicant’s arguments, see pg. 8-10, section “35 U.S.C. 103 Obviousness”, filed 17 November 2025, with respect to the rejections of claims 1, 3, 5, 6, 8-10, 12-18 and 21-24 under 35 U.S.C. 103 have been fully considered and are persuasive. Specifically, the applicant argues that Xi et al. (US 20240242398 A1; hereafter Xi) in view of Whiteley et al. (US 20210104079 A1; hereafter, Whiteley) in further view of Shah et al. (US 20220015721 A1) fails to disclose generating a plurality of time-specific reconstructed PET images “at one or more time periods not defined by the position-time coordinate pairs.” The examiner agrees. Therefore, the rejection has been withdrawn. However, upon further consideration, a new ground(s) of rejection is made in view of Hou et al. ("Flexible Conditional Image Generation of Missing Data with Learned Mental Maps", full reference on PTO-892 included with this action; hereafter, Hou).
Hou discloses:
and at one or more time periods not defined by the position-time coordinate pairs (this limitation is interpreted as the generation of image slices between existing slices in the volume. pg. 2 para. 4, missing slices, i.e. slices between already acquired slices, are generated by a machine learning model).
Claim 10 has been similarly amended and is similarly rejected. Therefore, claims 1, 3, 5, 6, 8-10, 12-18 and 21-24 are rejected under 35 U.S.C. 103. The complete rejection including motivations to combine is included in the section “Claim Rejections - 35 USC § 103” below.
Applicant’s arguments, see pg. 10 section “35 U.S.C. 103 Obviousness”, filed 17 November 2025, with respect to the rejections of claims 19-20 under 35 U.S.C. 103 have been fully considered and are not persuasive. The applicant argues that claim 19 has been similarly amended to claim 1 and therefore the arguments regarding claim 1 apply to claim 19. The examiner disagrees. The amendment to claim 1 defines a further output of the trained neural network. The amendment to claim 19 defines a further input to the training of a neural network. Therefore, the claims and the amendments of the claims cover a different scope and the arguments for claim 1 do not apply to claim 19. Further, Xi discloses:
wherein the plurality of time-specific reconstructed PET image comprises a first subset and a second subset ([0079] the training dataset may be one of two embodiments), wherein each image of the first subset of the time-specific reconstructed PET images corresponds to one of the time-referenced histo-image frames generated from portions of the dynamic PET data having corresponding positions ([0079] the training dataset is PET images. [0059] reconstruction data may be histo-image frames. [0062] the reconstruction data is used to generate a PET reconstruction image which [0063] may be a three-dimensional image. It is well known in the art that a three-dimensional image is comprised of slices, i.e. axial positions. Therefore, the training samples must also occur at slices, i.e. axial positions) within the position-time coordinate pairs (As taught above by [0045] and [0063], it is understood that specific positions are associated with specific times, i.e. position-time coordinate pairs) and each image of the second subset of the time-specific reconstructed PET image corresponds to a time not included in the time-references of the time-reference histo-image frames (the examiner understands "a time not included in the time-references of the time-reference histo-image frames" to include a subset of reconstructed PET images acquired during a separate scan than the scan(s) which were used to acquire the data for the time-reference histo-image frames. [0079] training samples and labels, i.e. ground truth, may be obtained based on historical scanning data, i.e. a separate scan at a separate time. The examiner recognizes that per the language in Xi [0079] it is unlikely for both the subsets of data to exist at once as they are described as "some embodiments". 
However, in reviewing the remainder of the claim, the examiner finds that the second subset of images is not utilized again, because each instance of the time-specific reconstructed PET images is associated with a specific axial position corresponding to a position-time coordinate pair, which may only correspond to the first subset of data. Only the first subset of images can so correspond because, if the second subset of images does not occur at a time included in the time-references of the histo-image frames, the second subset of images cannot occur at an axial position defined by those same time references in the position-time coordinate pairs. Therefore, although the claim lists the first subset of images and the second subset of images with an inclusive "and", the examiner understands an alternative case, as taught by Xi [0079], to read on the claim because the remainder of the claim makes clear that only one of the subsets of images is used);
Therefore, claims 19-20 remain rejected under 35 U.S.C. 103.
Claim Rejections - 35 USC § 112
The following is a quotation of the first paragraph of 35 U.S.C. 112(a):
(a) IN GENERAL.—The specification shall contain a written description of the invention, and of the manner and process of making and using it, in such full, clear, concise, and exact terms as to enable any person skilled in the art to which it pertains, or with which it is most nearly connected, to make and use the same, and shall set forth the best mode contemplated by the inventor or joint inventor of carrying out the invention.
The following is a quotation of the first paragraph of pre-AIA 35 U.S.C. 112:
The specification shall contain a written description of the invention, and of the manner and process of making and using it, in such full, clear, concise, and exact terms as to enable any person skilled in the art to which it pertains, or with which it is most nearly connected, to make and use the same, and shall set forth the best mode contemplated by the inventor of carrying out his invention.
Claims 1, 3, 5, 6, 8-10, 12-18 and 21-24 are rejected under 35 U.S.C. 112(a) or 35 U.S.C. 112 (pre-AIA), first paragraph, because the specification, while being enabling for generating time-specific reconstructed PET images, does not reasonably provide enablement for all types of time-specific reconstructed PET images at all time periods known. The specification does not enable any person skilled in the art to which it pertains, or with which it is most nearly connected, to practice the invention commensurate in scope with these claims.
Specifically, claim 1 states to generate time-specific reconstructed PET images “at time periods defined by the position-time coordinate pairs and at one or more time periods not defined by the position-time coordinate pairs.” Therefore, the claim is claiming all time periods defined by the position-time coordinate pairs and all time periods not defined by the position-time coordinate pairs, which is all time periods known. Therefore, the claim is claiming to generate time-specific reconstructed PET images at all time periods known. Claim 10 has been similarly amended to include similar claim language. For the purpose of examination, the examiner interprets the claim as claiming the generation of time-specific reconstructed PET images at axial positions, i.e. slices, between existing specific axial positions, which are associated with a hypothetical time of capture, i.e. a time period not defined by the position-time coordinate pairs which are associated with the specific axial positions. As this interpretation is specifically for time periods or slices between the position-time coordinate pairs, the time periods may not exceed the bounds of the position-time coordinate pairs (i.e. the time periods may not be before the earliest position-time coordinate pair or after the latest position-time coordinate pair). This limits the scope so that it may be enabled by the specification.
In Amgen Inc. et al. v. Sanofi et al., 598 U.S. 594, 2023 USPQ2d 602 (2023), the Supreme Court held that claims drawn to a genus of monoclonal antibodies, which were functionally claimed by their ability to bind to a specific protein, PCSK9, were invalid due to lack of enablement. The claims at issue were functional, in that they defined the genus by its function (the ability to bind to specific residues of PCSK9) as opposed to reciting a specific structure (the amino acid sequence of the antibodies in the genus). The Supreme Court concluded that the patents at issue failed to adequately enable the full scope of the genus of antibodies that performed the function of binding to specific amino acid residues on PCSK9 and blocking the binding of PCSK9 to a particular cholesterol receptor, LDLR.
The Court clarified that the specification does not always need to "describe with particularity how to make and use every single embodiment within a claimed class." Id. at 610-11. However, "[i]f a patent claims an entire class of processes, machines, manufactures, or compositions of matter, the patent’s specification must enable a person skilled in the art to make and use the entire class….The more one claims, the more one must enable." Id.
Therefore claims 1 and 10 are rejected. Claims 3, 5-6, 8-9, 21, and 22 are dependent on claim 1 and are likewise rejected. Claims 12-18, 23, and 24 are dependent on claim 10 and are likewise rejected.
Claim Rejections - 35 USC § 112(a) – New Matter
The following is a quotation of the first paragraph of 35 U.S.C. 112(a):
(a) IN GENERAL.—The specification shall contain a written description of the invention, and of the manner and process of making and using it, in such full, clear, concise, and exact terms as to enable any person skilled in the art to which it pertains, or with which it is most nearly connected, to make and use the same, and shall set forth the best mode contemplated by the inventor or joint inventor of carrying out the invention.
The following is a quotation of the first paragraph of pre-AIA 35 U.S.C. 112:
The specification shall contain a written description of the invention, and of the manner and process of making and using it, in such full, clear, concise, and exact terms as to enable any person skilled in the art to which it pertains, or with which it is most nearly connected, to make and use the same, and shall set forth the best mode contemplated by the inventor of carrying out his invention.
Claims 19-20 are rejected under 35 U.S.C. 112(a) or 35 U.S.C. 112 (pre-AIA), first paragraph, as failing to comply with the written description requirement. The claims contain subject matter which was not described in the specification in such a way as to reasonably convey to one skilled in the relevant art that the inventor or a joint inventor, or for applications subject to pre-AIA 35 U.S.C. 112, the inventor(s), at the time the application was filed, had possession of the claimed invention. The amendments to the claims have caused these claims to contain new subject matter.
Regarding claim 19, the newly amended claim states that the training dataset is comprised of a first and a second subset of images. The examiner has reviewed the specification and has not found support for the amended claim language. The applicant states in their remarks that [0067] provides support for the amendments. However, [0067] describes an additional output of a trained neural network, wherein the output is histo-image frames at times other than those included in the first subset of histo-image frames. This is an output of the trained neural network and does not describe the input into training the model, i.e. “a training dataset”. Therefore, [0067] does not support a second subset of input images for training a neural network. For the purpose of examination, the examiner interprets the claim as claiming alternative training datasets: a first subset with time references corresponding to the position-time coordinate pairs and a second subset comprising data acquired at a time period outside of the time references which correspond to the position-time coordinate pairs. Claim 20 is dependent on claim 19 and is similarly rejected.
Claim Rejections - 35 USC § 112(b)
The following is a quotation of 35 U.S.C. 112(b):
(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.
The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:
The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.
Claims 1, 3, 5, 6, 8-10, 12-18 and 21-24 are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or for applications subject to pre-AIA 35 U.S.C. 112, the applicant), regards as the invention.
Regarding claim 1, by the grammar of the sentence, the amended language is understood as: "generate a plurality of estimated time-specific reconstructed PET images using the trained neural network by converting time-referenced histo-image frames into time-specific reconstructed PET images corresponding to the one of the plurality of specific axial positions of the PET imaging modality . . . at one or more time periods not defined by the position-time coordinate pairs." However, the claim previously states "the plurality of specific axial positions is determined by comparing corresponding time-references in the dynamic PET data with time information of the position-time coordinate pairs;" Since the plurality of specific axial positions are defined as being determined by comparing time-references with position-time coordinate pairs, it is unclear how the reconstructed images may correspond to specific axial positions at one or more time periods not defined by the position-time coordinate pairs. In other words, by the definition in the claim of the plurality of specific axial positions, there cannot be any of the plurality of specific axial positions which are defined without the position-time coordinate pairs. For the purpose of examination, the examiner interprets the plurality of time-specific reconstructed PET images at one or more time periods not defined by the position-time coordinate pairs to not correspond to the one of the plurality of specific axial positions of the PET imaging modality. Claims 3, 5-6, 8-9, 21, and 22 are dependent on claim 1 and are likewise rejected.
Claim 10 has been similarly amended to claim 1 and is likewise rejected and interpreted. Claims 12-18, 23, and 24 are dependent on claim 10 and are likewise rejected.
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 1, 3, 5, 6, 8-10, 12-18, and 21-24 are rejected under 35 U.S.C. 103 as being unpatentable over Xi et al. (US 20240242398 A1; hereafter Xi) in view of Whiteley et al. (US 20210104079 A1; hereafter, Whiteley) in further view of Shah et al. (US 20220015721 A1; hereafter, Shah) and of Hou et al. ("Flexible Conditional Image Generation of Missing Data with Learned Mental Maps", full reference on PTO-892 included with this action; hereafter, Hou).
Regarding claim 1, Xi discloses:
A system, comprising: a positron emission tomography (PET) imaging modality configured to execute a first scan to acquire a first PET dataset ([0036] The imaging device 110 of FIG. 1 includes a PET imaging device. [0044] PET data acquired by using a PET scanner), wherein the first PET dataset includes dynamic PET data ([0045] PET data may be dynamic PET data) having corresponding time-references ([0045] "a plurality of sets of data corresponding to several time points or time periods may be collected, and a set of scanning data collected in each time period may be called a set (or frame) of original data." Therefore, each frame is associated with a time-period, i.e. time-referenced);
and a processor ([0037] A processing device 120 for reconstructing PET images) configured to:
to generate a plurality of time-referenced ([0045] "a plurality of sets of data corresponding to several time points or time periods may be collected, and a set of scanning data collected in each time period may be called a set (or frame) of original data." Therefore, each frame is associated with a time-period, i.e. time-referenced) histo-image frames ([0059] the reconstruction data may include target PET data having histo-image format), wherein each of the time-referenced histo-image frames corresponds to one of a plurality of specific axial positions ([0059] reconstruction data may be histo-image frames. [0062] the reconstruction data is used to generate a PET reconstruction image which [0063] may be a three-dimensional image. It is well known in the art that a three-dimensional image is comprised of slices, i.e. axial positions. Therefore, the histo-image frames which are used to reconstruct the three-dimensional image must also occur at slices, i.e. axial positions);
input each of the plurality of the time-referenced histo-image frames to a trained neural network (Fig. 5, the target PET data, i.e. time-reference histo-image frames, are input into a neural network, i.e. "second deep learning model". [0097] a plurality of sets of target PET data may be obtained and input, understood as each of the plurality of time-referenced histo-image frames);
and generate a plurality of estimated time-specific reconstructed PET images using the trained neural network (Fig. 5 and [0100], reconstructed images are generated from the input by a neural network. [0097] as each set of target data, histo-image frame, corresponds to a time point or time period it is understood that the output image generated from that data would likewise correspond to a time point or time period, i.e. time-specific reconstructed PET images) by converting time-referenced histo-image frames into time-specific reconstructed PET images ([0094] The input data may be histo-images. [0097] The data corresponds to a plurality of time points, i.e. it is time-referenced. [0100] The PET reconstructed images are generated by a deep learning model) corresponding to the one of the plurality of specific axial position of the PET imaging modality ([0063] The PET reconstructed image may be a three-dimensional image. It is well known in the art that a three-dimensional image may be generated as a stack of two-dimensional images, each two-dimensional image at a specific axial position. [0097] the data is time-referenced, as shown above) at time periods defined by the position-time coordinate pairs ([0063] as shown above, each slice is associated with a time which is understood as the position-time pair)
Xi does not disclose expressly back-project the first PET dataset to generate a plurality of histo-image frames.
Whiteley discloses:
back-project the first PET dataset to generate histo-image frames ([0018] back-projection algorithm applied to raw data to form histo-images)
Xi and Whiteley are combinable because they are from the same field of endeavor of reconstructing PET images through neural networks (Xi, [0072]; Whiteley, Fig. 1 and [0015]).
It would have been obvious to a person of ordinary skill in the art, before the effective filing date of the claimed invention to implement the back-projecting of Whiteley with the invention of Xi.
The motivation for doing so would have been "the large compression ratio between the TOF data and the histo-image" (Whiteley, [0003]). Further, back-projecting to obtain histo-image frames is well known in the art (Whiteley, [0003] back-projecting in the background section, [0018] back-projection according to a known method published in 1982).
Therefore, it would have been obvious to combine Whiteley with Xi.
Xi in view of Whiteley does not disclose expressly position-time coordinate pairs and wherein the one of the plurality of specific axial positions is determined by comparing corresponding time-references with information of the position-time coordinate pairs.
Shah discloses:
position-time coordinate pairs that track a position of a subject volume and time information during the first scan (the examiner understands position-time coordinate pairs as an association between a position, which is understood as slice of a 3D image volume, and a timepoint, which is understood as the time of acquisition of the slice, see the applicant’s specification paragraph [0041]. [0034] "As mentioned above, the PET data may be reconstructed into three-dimensional slices. Since the acquired PET data is associated with a particular time at which the PET data was acquired, each image slice reconstructed therefrom may also be associated with an acquisition time." As each slice is associated with an acquisition time, it is understood as a position-time coordinate pair);
and wherein the one of the plurality of specific axial positions is determined by comparing corresponding time-references in the dynamic PET data with time information of the position-time coordinate pairs ([0037] a slice is determined based on the associated acquisition time. Therefore, one of the plurality of slices, i.e. specific axial positions, is determined by comparing the time associated with the activity with the slice associated with that same time);
Shah is combinable with Xi in view of Whiteley because it is in the related field of endeavor of determining metabolic rate based on a PET scan (Shah, [0014]).
It would have been obvious to a person of ordinary skill in the art, before the effective filing date of the claimed invention to use the position-time coordinate pairs and implement the frame selection of Shah with the invention of Xi in view of Whiteley.
The motivation for doing so would have been to locate a region of interest (Shah, [0040]) to assist in diagnosis (Shah, [0045]).
Therefore, it would have been obvious to combine Shah with Xi in view of Whiteley.
Xi in view of Whiteley in further view of Shah does not disclose expressly generating a PET reconstructed image at one or more time periods not defined by the position-time coordinate pairs.
Hou discloses:
and at one or more time periods not defined by the position-time coordinate pairs (as noted in the rejection under 35 U.S.C. 112(a), this limitation is interpreted as being a generation of image slices between existing slices in the volume. pg. 2 para. 4, missing slices, i.e. slices between already acquired slices, are generated by a machine learning model).
Hou is combinable with Xi in view of Whiteley in further view of Shah because it is from the related field of endeavor of sparse reconstruction of medical images (Hou, pg. 1 para. 1).
It would have been obvious to a person of ordinary skill in the art, before the effective filing date of the claimed invention to combine the image generation between slices of the volume (i.e. at time periods not defined by the position-time coordinate pairs) of Hou with the invention of Xi in view of Whiteley in further view of Shah.
The motivation for doing so would have been to "produce probabilistic mental representations of the anatomy and anatomical context in question to aid diagnosis and therapy" (Hou, pg. 2 para. 2).
Therefore, it would have been obvious to combine Hou with Xi in view of Whiteley in further view of Shah to obtain the invention as specified in claim 1.
Regarding claim 3, Xi in view of Whiteley in further view of Shah and of Hou discloses the subject matter of claim 1.
Xi further discloses:
wherein the estimated time-specific reconstructed PET images comprise a set of estimated PET parameters (The examiner interprets the claim as stating that time-specific reconstructed PET images comprise PET parameters. [0050] correction data may be determined for the PET data, [0052] correction data weighting is dependent on environmental parameters. Therefore, the PET data includes record of environmental parameters. [0053] environmental parameters include parameters relating to the PET scanning device, the tracer and the scanned object. These are understood as PET parameters) based on each input time-referenced histo-image frame ([0141] and [0144], the input may be time-referenced histo-image frames. See [0136]-[0137] for disclosing time-referenced histo-image frames as shown in claim 1).
Regarding claim 5, Xi in view of Whiteley in further view of Shah and Hou discloses the subject matter of claim 1.
Xi further discloses:
wherein the trained neural network is a trained convolutional neural network ([0100] The deep learning model may be a convolutional neural network (CNN)).
Regarding claim 6, Xi in view of Whiteley in further view of Shah and of Hou discloses the subject matter of claim 1.
Xi further discloses:
wherein the first PET dataset comprises a list-mode dataset ([0046] The PET data may be in list format), a plurality of time-of-flight sinograms ([0046] The PET data may be in time of flight sinogram format),
Xi does not disclose expressly that the PET data may be a combination of list mode and sinogram.
Whiteley discloses:
or a combination thereof (support for this amendment is found in the applicant's specification through the wording "TOF sinograms and/or TOF list-mode data", such as in [0039], which may be understood as "a combination thereof". [0058] "PET data 934, which may comprise list-mode data and/or sinograms")
It would have been obvious to a person of ordinary skill in the art, before the effective filing date of the claimed invention to combine the combined data of Whiteley with the invention of Xi.
The motivation for doing so would have been that it is well known in the art to reconstruct PET images from sinogram and/or list-mode data. Further, a person of ordinary skill in the art may generate sinogram data from list-mode data as a conversion (see Shah, [0045]). Therefore, it would be obvious to generate PET data from either or both of list-mode and sinogram data.
Therefore, it would have been obvious to combine Whiteley with Xi to obtain the invention as specified in claim 6.
Regarding claim 8, Xi in view of Whiteley in further view of Shah and Hou discloses the subject matter of claim 1.
Xi further discloses:
wherein the trained neural network is generated ([0101] training is done using training data with labels, i.e. ground truth. [0043] original PET data may be used to determine correction data. The PET data takes different forms, namely original PET data, correction PET data, and target PET data. Therefore, the first form of the data is the original PET data, which takes the forms described in the rest of this claim and may reasonably be used in training) by a training dataset comprising a training input selected from the group consisting of:
PET list-mode data ([0046] The PET data may be in list format), time-referenced ([0045] data may be a plurality of sets corresponding to time periods) time-of-flight sinograms ([0046] the PET data may be in sinogram format with time of flight coordinates), and time-referenced histo-image frames ([0059] the data may be in histo-image format. Further teaching of time-referenced histo-image frames is in claim 1).
Regarding claim 9, Xi in view of Whiteley in further view of Shah and Hou discloses the subject matter of claim 8.
Xi further discloses:
wherein the training dataset comprises a ground truth output image ([0101] the label may be a ground truth PET reconstruction image).
Regarding claim 10, Xi discloses:
A method of dynamic imaging for a positron emission tomography (PET) imaging device (Fig. 2 showing a method of reconstructing PET images), comprising:
executing a first scan to acquire a first PET dataset ([0036] The imaging device 110 of FIG. 1 includes a PET imaging device. [0044] PET data acquired by using a PET scanner), wherein the first PET dataset includes dynamic PET data ([0045] PET data may be dynamic PET data) having corresponding time-references ([0045] "a plurality of sets of data corresponding to several time points or time periods may be collected, and a set of scanning data collected in each time period may be called a set (or frame) of original data." Therefore, each frame is associated with a time-period, i.e. time-referenced);
generating a plurality of time-referenced ([0045] "a plurality of sets of data corresponding to several time points or time periods may be collected, and a set of scanning data collected in each time period may be called a set (or frame) of original data." Therefore, each frame is associated with a time-period, i.e. time-referenced) histo-image frames ([0059] the reconstruction data may include target PET data having histo-image format), wherein each of the time-referenced histo-image frames corresponds to one of the plurality of specific axial positions ([0059] reconstruction data may be histo-image frames. [0062] the reconstruction data is used to generate a PET reconstruction image which [0063] may be a three-dimensional image. It is well known in the art that a three-dimensional image is comprised of slices, i.e. axial positions. Therefore, the histo-image frames which are used to reconstruct the three-dimensional image must also occur at slices, i.e. axial positions) at a corresponding interval of the predetermined time period ([0045] "a plurality of sets of data corresponding to several time points or time periods may be collected, and a set of scanning data collected in each time period may be called a set (or frame) of original data." Therefore, each frame is associated with a time-period, i.e. time-referenced. As each histo-image frame is associated with an axial position, [0063] as applied above, and each frame is collected in a period of time, the axial position and period of time can be said to correspond);
inputting each of the plurality of the time-referenced histo-image frames to a trained neural network (Fig. 5, the target PET data, i.e. time-reference histo-image frames, are input into a neural network, i.e. "second deep learning model". [0097] a plurality of sets of target PET data may be obtained and input, understood as each of the plurality of time-referenced histo-image frames);
and generating a plurality of estimated time-specific reconstructed PET images using the trained neural network (Fig. 5 and [0100], reconstructed images are generated from the input by a neural network. [0097] as each set of target data, i.e. histo-image frame, corresponds to a time point or time period it is understood that the output image generated from that data would likewise correspond to a time point or time period, i.e. time-specific reconstructed PET images) by converting individual time-referenced histo-image frames into time-specific reconstructed PET images ([0094] The input data may be histo-images. [0097] The data corresponds to a plurality of time points, i.e. it is time-referenced. [0100] The PET reconstructed images are generated by a deep learning model) corresponding to the one of the plurality of specific axial positions of the PET imaging modality ([0063] The PET reconstructed image may be a three-dimensional image. It is well known in the art that a three-dimensional image may be generated as a stack of two-dimensional images, each two-dimensional image at a specific axial position. [0097] the data is time-referenced, as shown above) at corresponding time periods defined by the position-time coordinate pairs ([0063] as shown above, each slice is associated with a time which is understood as the position-time pair).
Xi does not disclose expressly back-projecting the first PET dataset to generate a plurality of histo-image frames.
Whiteley discloses:
back-projecting the first PET dataset to generate histo-image frames ([0018] back-projection algorithm applied to raw data to form histo-images)
It would have been obvious to a person of ordinary skill in the art, before the effective filing date of the claimed invention to implement the back-projecting of Whiteley with the invention of Xi.
The motivation for doing so would have been "the large compression ratio between the TOF data and the histo-image" (Whiteley, [0003]). Further, back-projecting to obtain histo-image frames is well known in the art (Whiteley, [0003] back-projecting in the background section, [0018] back-projection according to a known method published in 1982).
Therefore, it would have been obvious to combine Whiteley with Xi.
Xi in view of Whiteley does not disclose expressly position-time coordinate pairs and wherein the one of the plurality of specific axial positions is determined by comparing corresponding time-references with information of the position-time coordinate pairs.
Shah discloses:
position-time coordinate pairs that track a position of a subject volume and time information during the first scan (the examiner understands position-time coordinate pairs as an association between a position, which is understood as a slice of a 3D image volume, and a timepoint, which is understood as the time of acquisition of the slice, see the applicant’s specification paragraph [0041]. [0034] "As mentioned above, the PET data may be reconstructed into three-dimensional slices. Since the acquired PET data is associated with a particular time at which the PET data was acquired, each image slice reconstructed therefrom may also be associated with an acquisition time." As each slice is associated with an acquisition time, it is understood as a position-time coordinate pair);
and wherein the one of the plurality of specific axial positions is determined by comparing corresponding time-references in the dynamic PET data with time information of the position-time coordinate pairs ([0037] a slice is determined based on the associated acquisition time. Therefore, one of the plurality of slices, i.e. specific axial positions, is determined by comparing the time associated with the activity with the slice associated with that same time);
It would have been obvious to a person of ordinary skill in the art, before the effective filing date of the claimed invention to use the position-time coordinate pairs and to implement the frame selection of Shah with the invention of Xi in view of Whiteley.
The motivation for doing so would have been to locate a region of interest (Shah, [0040]) to assist in diagnosis (Shah, [0045]).
Therefore, it would have been obvious to combine Shah with Xi in view of Whiteley.
Xi in view of Whiteley in further view of Shah does not disclose expressly to generate a PET reconstructed image at one or more time periods not defined by the position-time coordinate pairs.
Hou discloses:
and at one or more time periods not defined by the position-time coordinate pairs (as noted in the rejection under 35 U.S.C. 112(a), this limitation is interpreted as being a generation of image slices between existing slices in the volume. pg. 2 para. 4, missing slices, i.e. slices between already acquired slices, are generated by a machine learning model).
It would have been obvious to a person of ordinary skill in the art, before the effective filing date of the claimed invention to combine the image generation between slices of the volume (i.e. at time periods not defined by the position-time coordinate pairs) of Hou with the invention of Xi in view of Whiteley in further view of Shah.
The motivation for doing so would have been to "produce probabilistic mental representations of the anatomy and anatomical context in question to aid diagnosis and therapy" (Hou, pg. 2 para. 2).
Therefore, it would have been obvious to combine Hou with Xi in view of Whiteley in further view of Shah to obtain the invention as specified in claim 10.
Regarding claim 12, Xi in view of Whiteley in further view of Shah and Hou discloses the subject matter of claim 10.
Xi further discloses:
wherein each of the estimated time-specific reconstructed PET images comprise a set of estimated PET parameters ([0050] correction data may be determined for the PET data, [0052] correction data weighting is dependent on environmental parameters. Therefore, the PET data includes record of environmental parameters. [0053] environmental parameters include parameters relating to the PET scanning device, the tracer, and the scanned object. These are understood as PET parameters) based on each input time-referenced histo-image frame ([0059] the data may be histo-image frames. [0043] the data is time-referenced).
Regarding claim 13, Xi in view of Whiteley in further view of Shah and Hou discloses the subject matter of claim 10.
Xi in the current embodiment (hereafter, embodiment A) does not disclose expressly that a time-specific reconstructed PET image comprises plural images.
Xi in another embodiment (hereafter, embodiment C) discloses:
wherein the estimated time-specific reconstructed PET images comprise a plurality of parametric reconstructed images (the examiner interprets the claim to read a plurality of reconstructed PET images per the rejection of claim 13 under 35 U.S.C. 112(a). [0128] a plurality of images may be fused together to form a single image).
It would have been obvious to a person having ordinary skill in the art, before the effective filing date of the claimed invention to combine the image fusing of embodiment C with the invention of Xi embodiment A.
The motivation for doing so would have been "to generate a final PET reconstruction image, which can improve the accuracy of reconstruction and improve the quality of the resulting PET reconstruction image" (Xi, [0134]).
Therefore, it would have been obvious to combine Xi embodiment C with Xi embodiment A to obtain the invention as specified in claim 13.
Regarding claim 14, Xi in view of Whiteley in further view of Shah and Hou discloses the subject matter of claim 10.
Xi further discloses:
wherein the trained neural network is a trained convolutional neural network ([0100] The deep learning model may be a convolutional neural network (CNN)).
Regarding claim 15, Xi in view of Whiteley in further view of Shah and Hou discloses the subject matter of claim 10.
Xi further discloses:
wherein the first PET dataset comprises a list-mode dataset ([0046] The PET data may be in list format).
Regarding claim 16, Xi in view of Whiteley in further view of Shah and Hou discloses the subject matter of claim 10.
Xi further discloses:
wherein the first PET dataset comprises a plurality of time-of-flight sinograms ([0045] data may be a plurality of sets corresponding to time periods. [0046] the PET data may be in sinogram format with time of flight coordinates. If the original data is a plurality of sets corresponding to time periods then the conversion of the data into sinograms would include a plurality of sets corresponding to time periods. Therefore, a plurality of time-of-flight sinograms).
Regarding claim 17, Xi in view of Whiteley in further view of Shah and Hou discloses the subject matter of claim 10.
Xi further discloses:
wherein the trained neural network is generated ([0101] training is done by training data with labels, i.e. ground truth. [0043] original PET data may be used to determine correction data. The original PET data passes through different forms, namely corrected PET data and target PET data. Therefore, the first form of the data is the original PET data, which is described as taking the forms recited in the rest of this claim and may reasonably be used in training) by a training dataset comprising a training input selected from the group consisting of:
PET list-mode data ([0046] The PET data may be in list format), time-referenced ([0045] data may be a plurality of sets corresponding to time periods) time-of-flight sinograms ([0046] the PET data may be in sinogram format with time of flight coordinates), and time-referenced histo-image frames ([0059] the data may be in histo-image format. Further teaching of time-referenced histo-image frames is in the rejection of claim 10 above).
Regarding claim 18, Xi in view of Whiteley in further view of Shah and Hou discloses the subject matter of claim 17.
Xi further discloses:
wherein the training dataset comprises a ground truth output image ([0101] the label may be a ground truth PET reconstruction image).
Regarding claim 21, Xi in view of Whiteley in further view of Shah and Hou discloses the subject matter of claim 1.
Xi further discloses:
wherein each of the plurality of estimated time-specific reconstructed PET images comprise: an estimated output image based on each input time-referenced histo-image frame ([0094] The input data may be histo-images. [0100] The PET reconstructed image is generated based on the histo-images) having corresponding position in the position-time coordinate pairs ([0063] "In some embodiments, the PET reconstruction image may be a two-dimensional image or a three-dimensional image. In some embodiments, the PET reconstruction image may include one or more static PET reconstruction images, and each of the one or more static PET reconstruction images may correspond to a single time point or time period." The examiner interprets that when a PET reconstruction image includes more than one static PET reconstruction image, it is a 3D image with each static image representing a slice or position. As each static image corresponds to a time point, the examiner understands that each static image represents a pair between the position in 3D space of that image and the time it was acquired, i.e. a position-time pair);
an estimated parametric image based on each input time-referenced histo-image frame ([0144] A PET parametric image is generated based on histo-image frames) having corresponding position in the position-time coordinate pairs ([0063] as taught above, the frame corresponds with a position-time pair);
a set of estimated dynamic PET parameters based on each input time-referenced histo-image frame ([0140]-[0141] Dynamic PET parameters are generated based on histo-image frames) having corresponding position in the position-time coordinate pairs ([0063] as taught above, the frame corresponds with a position-time pair).
Regarding claim 22, Xi in view of Whiteley in further view of Shah and Hou discloses the subject matter of claim 21. Xi further discloses:
the estimated output image ([0100] the model generates an estimated output image. [0101] training is done with ground truth images and performed by training module 1440), the estimated parametric image ([0144] the neural network is trained based on data input into the neural network and a corresponding ground truth PET parametric image. The model may be trained using training module 1440) and/or the estimated dynamic PET parameters ([0140] the parametric data includes kinetic, i.e. dynamic, parameters. [0141] an output of the pharmacokinetic model may include kinetic parameters. [0144] the output includes a parametric image understood to include kinetic parameters. The model is trained with a corresponding ground truth image. As it is “corresponding” and the output includes kinetic parameters, the ground truth is understood to include kinetic parameters. The training is completed by training module 1440), and a ground truth value (As applied above, the training includes the ground truth data, see [0101] and [0144]).
Xi in the embodiment disclosed above (hereafter, embodiment A) does not disclose expressly that the neural network is trained based on differences.
Xi in another embodiment (hereafter, embodiment D) discloses:
wherein the trained neural network is trained based on differences between one or more of the inputs and the ground truth ([0089] an example of training with module 1440 includes [0090] determining the difference between the ground truth and the generated output by a loss function and training the model based on those differences. A person of ordinary skill in the art would understand that a training module may train the model based on one or more inputs).
It would have been obvious to a person having ordinary skill in the art, before the effective filing date of the claimed invention to include the updating of the neural network based on differences of Xi embodiment D with the invention of Xi embodiment A.
The motivation for doing so would have been "Since the first deep learning model learns the optimal mechanism for PET image reconstruction based on a large amount of data during the training process, the reconstruction of the PET image generated by the first deep learning model may have high accuracy" (Xi, [0092]).
Therefore, it would have been obvious to combine Xi embodiment D with Xi embodiment A to obtain the invention as specified in claim 22.
Regarding claim 23, Xi in view of Whiteley in further view of Shah and Hou discloses the subject matter of claim 10.
Xi further discloses:
wherein the estimated time-specific reconstructed PET images comprise: an estimated output image based on each input time-referenced histo-image frame ([0094] The input data may be histo-images. [0100] The PET reconstructed image is generated based on the histo-images);
an estimated parametric image based on each input time-referenced histo-image frame ([0144] A PET parametric image is generated based on histo-image frames);
a set of estimated dynamic PET parameters based on each input time-referenced histo-image frame ([0140]-[0141] Dynamic PET parameters are generated based on histo-image frames).
Regarding claim 24, Xi in view of Whiteley in further view of Shah and Hou discloses the subject matter of claim 23.
Xi further discloses:
the estimated output image ([0100] the model generates an estimated output image. [0101] training is done with ground truth images and performed by training module 1440), the estimated parametric image ([0144] the neural network is trained based on data input into the neural network and a corresponding ground truth PET parametric image. The model may be trained using training module 1440) and/or the estimated dynamic PET parameters ([0140] the parametric data includes kinetic, i.e. dynamic, parameters. [0141] an output of the pharmacokinetic model may include kinetic parameters. [0144] the output includes a parametric image understood to include kinetic parameters. The model is trained with a corresponding ground truth image. As it is “corresponding” and the output includes kinetic parameters, the ground truth is understood to include kinetic parameters. The training is completed by training module 1440), and a ground truth value (As applied above, the training includes the ground truth data, see [0101] and [0144]).
Xi in the embodiment disclosed above (hereafter, embodiment A) does not disclose expressly that the neural network is trained based on differences.
Xi in another embodiment (hereafter, embodiment D) discloses:
wherein the trained neural network is trained based on differences between the one or more inputs and the ground truth ([0089] an example of training with module 1440 includes [0090] determining the difference between the ground truth and the generated output by a loss function and training the model based on those differences. A person of ordinary skill in the art would understand that a training module may train the model based on one or more inputs).
It would have been obvious to a person having ordinary skill in the art, before the effective filing date of the claimed invention to include the updating of the neural network based on differences of Xi embodiment D with the invention of Xi embodiment A.
The motivation for doing so would have been "Since the first deep learning model learns the optimal mechanism for PET image reconstruction based on a large amount of data during the training process, the reconstruction of the PET image generated by the first deep learning model may have high accuracy" (Xi, [0092]).
Therefore, it would have been obvious to combine Xi embodiment D with Xi embodiment A to obtain the invention as specified in claim 24.
Claims 19-20 are rejected under 35 U.S.C. 103 as being unpatentable over Xi et al. (US 20240242398 A1; hereafter, Xi).
Regarding claim 19, Xi discloses:
A method of training a neural network for use in dynamic positron emission tomography (PET) imaging ([0089] the deep learning model, i.e. neural network, is trained by the following method), comprising:
receiving a training dataset comprising a plurality of time-referenced histo-image frames ([0089] a training dataset is target PET data. [0059] target PET data is in histo-image format. [0045] "a plurality of sets of data corresponding to several time points or time periods may be collected, and a set of scanning data collected in each time period may be called a set (or frame) of original data." Therefore, each frame is associated with a time-period, i.e. time-referenced) and a plurality of reconstructed PET images ([0089] the labels of the training dataset include a ground truth PET reconstruction image) wherein each of the time-referenced histo-image frames is generated from dynamic PET data having corresponding time-references ([0045] "a plurality of sets of data corresponding to several time points or time periods may be collected, and a set of scanning data collected in each time period may be called a set (or frame) of original data." Therefore, each frame is associated with a time-period, i.e. time-referenced) and position-time coordinate pairs that track a position of a subject volume and time information for a PET imaging modality ([0063] "In some embodiments, the PET reconstruction image may be a two-dimensional image or a three-dimensional image. In some embodiments, the PET reconstruction image may include one or more static PET reconstruction images, and each of the one or more static PET reconstruction images may correspond to a single time point or time period." The examiner interprets that when a PET reconstruction image includes more than one static PET reconstruction image, it is a 3D image with each static image representing a slice or position. As each static image corresponds to a time point, the examiner understands that each static image represents a pair between the position in 3D space of that image and the time it was acquired, i.e. a position-time pair),
wherein the plurality of time-specific reconstructed PET image comprises a first subset and a second subset ([0079] the training dataset may be one of two embodiments), wherein each image of the first subset of the time-specific reconstructed PET images corresponds to one of the time-referenced histo-image frames generated from portions of the dynamic PET data having corresponding positions ([0079] the training dataset is PET images. [0059] reconstruction data may be histo-image frames. [0062] the reconstruction data is used to generate a PET reconstruction image which [0063] may be a three-dimensional image. It is well known in the art that a three-dimensional image is comprised of slices, i.e. axial positions. Therefore, the training samples must also occur at slices, i.e. axial positions) within the position-time coordinate pairs (As taught above by [0045] and [0063], it is understood that specific positions are associated with specific times, i.e. position-time coordinate pairs) and each image of the second subset of the time-specific reconstructed PET image corresponds to a time not included in the time-references of the time-reference histo-image frames (the examiner understands "a time not included in the time-references of the time-reference histo-image frames" to include a subset of reconstructed PET images acquired during a separate scan than the scan(s) which were used to acquire the data for the time-reference histo-image frames. [0079] training samples and labels, i.e. ground truth, may be obtained based on historical scanning data, i.e. a separate scan at a separate time. The examiner recognizes that per the language in Xi [0079] it is unlikely for both the subsets of data to exist at once as they are described as "some embodiments". 
However, in reviewing the remainder of the claim, the examiner finds that the second subset of images is not utilized again because each instance of the time-specific reconstructed PET images is associated with a specific axial position corresponding to a position-time coordinate pair which may only correspond to the first subset of data. The position-time coordinate pairs may only correspond with the first subset of images because, if the second subset of images does not occur at a time included in the time-references of the histo-image frames, the second subset of images cannot occur at an axial position defined by those same time references in the position-time coordinate pairs. Therefore, although the claim lists the first subset of images and the second subset of images with an inclusive "and", the examiner understands an alternative case, as taught by Xi [0079], to read on the claim as the remainder of the claim makes clear that only one of the subsets of images is used);
to determine any differences between each estimated time-specific reconstructed PET image and the time-specific reconstructed PET image that corresponds to the specific axial position ([0090] Calculating the loss function is understood as determining any differences between the output of the learning model and the ground truth image);
and modifying the neural network based on the determined differences between the estimated time-specific reconstructed PET image of the plurality of specific axial position and the corresponding time-specific reconstructed PET image ([0090] Parameters of deep learning model are updated based on calculated loss, i.e. based on the determined differences).
Xi in the embodiment disclosed above (hereafter, embodiment D) does not disclose expressly time-specific reconstructed PET images in training and each histo-image frame corresponding to a time-specific reconstructed image wherein each time-specific reconstructed image corresponds to the specific axial position.
Xi in another embodiment (hereafter, embodiment C) discloses:
and a plurality of time-specific reconstructed PET images ([0128] training is done between input data and corresponding ground truth PET reconstruction image. As the ground truth corresponds to the input data, it is understood to be time-specific as the input data is, as shown above)
and each of the time-referenced histo-image frames corresponding to time-specific reconstructed PET image ([0121] the first PET reconstruction image is generated by the target PET data. Target PET data is time-referenced histo-image data as taught by [0059] and [0045] above. [0128] The first PET reconstruction image corresponds to a ground truth reconstructed image. Therefore, each histo-image frame has a corresponding time-specific reconstructed image), wherein each of the time-specific reconstructed PET images corresponds to at least two time-referenced histo-image frames generated from portions of the dynamic PET data having corresponding positions within the position-time coordinate pairs (The examiner is interpreting this limitation as deleted per the examiner interpretation under 35 U.S.C. 112(b). Additionally, [0128] a plurality of images may be fused together to form a single image. As images are generated from histo-image frames, see [0100] where corrected target PET data is in histo-image format [0096], it is understood that fusing several images together means the image may be generated from two or more histo-image frames);
It would have been obvious to a person having ordinary skill in the art, before the effective filing date of the claimed invention to combine the corresponding reconstructed PET images used in training of Xi embodiment C with the invention of Xi embodiment D.
The motivation for doing so would have been that this training "can improve the accuracy of reconstruction and improve the quality of the resulting PET reconstruction image" (Xi, [0134]).
Therefore, it would have been obvious to combine the embodiments of Xi to obtain the invention as specified in claim 19.
Regarding claim 20, Xi discloses the subject matter of claim 19.
Xi further discloses:
wherein each time-specific reconstructed PET image in the plurality of time-specific PET images comprises one or more parameters ([0050] correction data may be determined for the PET data, [0052] correction data weighting is dependent on environmental parameters. Therefore, the PET data includes record of environmental parameters. [0053] environmental parameters include parameters relating to the PET scanning device, the tracer, and the scanned object. These are understood as PET parameters).
Conclusion
The prior art made of record and not relied upon is considered pertinent to applicant's disclosure.
US 20250069289 A1, Lyu et al., discloses a system for generating PET reconstructed images at sub-time periods, i.e. time periods not defined by position-time coordinate pairs.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to JOSHUA B CROCKETT whose telephone number is (571)270-7989. The examiner can normally be reached Monday-Thursday 8am-5pm.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, John M Villecco can be reached at (571) 272-7319. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/JOSHUA B. CROCKETT/Examiner, Art Unit 2661 /JOHN VILLECCO/Supervisory Patent Examiner, Art Unit 2661