DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
The factual inquiries set forth in Graham v. John Deere Co., 383 U.S. 1, 148 USPQ 459 (1966), that are applied for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.
Claims 1-20 are rejected under 35 U.S.C. 103 as being unpatentable over Sun (CN 110215226 A, Image Attenuation Correction Method, Device, Computer Device And Storage Medium) in view of Shen et al. (CN 115393460 A, PET Image Attenuation Correction Method, Device And Device), hereinafter “Shen”.
Regarding claim 1, Sun teaches an attenuation corrector configured to generate Computed Tomography- (CT-) based attenuation correction data from CT image data (Please note page 3, second paragraph. As indicated, an image attenuation correction method uses a CT image.); a Positron Emission Tomography (PET) reconstructor configured to reconstruct first PET image data based on PET projection data and the CT-based attenuation correction data (Please note page 3, second paragraph. As indicated, the CT-image-based attenuation correction method reduces respiratory artifacts in the PET image after PET image reconstruction.); an attenuation correction artifact mitigator configured to analyze the first PET image data for a presence of attenuation correction artifact (Please note page 7, fourth paragraph. As indicated, attenuation correction is performed on the matching frame of PET scan data according to the correction image, so as to obtain a PET attenuation-corrected reconstructed image without respiratory artifact.); wherein the PET reconstructor is further configured to reconstruct second PET image data based on the PET projection data and the modified attenuation correction data (Please note page 15, second paragraph. As indicated, the matching frame of PET data and the correction image are in the same respiratory phase or at the same breathing amplitude, and attenuation correction is performed on the matching frame of PET data with the corrected image so as to obtain a PET attenuation-corrected reconstructed image without respiratory artifact.).
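For background illustration only (this does not reproduce Sun's implementation): CT-based attenuation correction conventionally converts the CT image into a 511 keV linear attenuation map (mu-map) and forward-projects that map so that the exponentiated line integrals can correct the measured PET data. A minimal Python sketch follows; the bilinear conversion constants, pixel size, and function names are illustrative assumptions.

```python
import numpy as np
from skimage.transform import radon

def ct_to_mu_map(ct_hu):
    """Convert a CT image in Hounsfield units to an approximate 511 keV
    linear attenuation map (cm^-1) via a simple bilinear scaling.
    The slopes below are illustrative, not taken from Sun."""
    mu_water = 0.096  # approximate mu of water at 511 keV, cm^-1
    mu = np.where(
        ct_hu <= 0,
        mu_water * (1.0 + ct_hu / 1000.0),         # air/soft-tissue branch
        mu_water * (1.0 + 0.64 * ct_hu / 1000.0),  # bone-like branch
    )
    return np.clip(mu, 0.0, None)

def attenuation_correction_factors(mu_map, angles_deg, pixel_size_cm=0.1):
    """Forward-project the mu-map (the Radon transform gives line integrals
    of attenuation) and exponentiate; measured PET projections are
    multiplied by these factors to undo photon attenuation."""
    line_integrals = radon(mu_map, theta=angles_deg, circle=False) * pixel_size_cm
    return np.exp(line_integrals)

# Usage with a synthetic CT slice and 180 projection angles:
ct = np.zeros((128, 128))
ct[32:96, 32:96] = 40.0  # a soft-tissue-like block, ~40 HU
acf = attenuation_correction_factors(ct_to_mu_map(ct), np.arange(180.0))
```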
Sun does not expressly teach an inference engine configured to predict attenuation correction data based on non-attenuation corrected PET image data in response to the presence of attenuation correction artifact in the first PET image data; and an attenuation correction data updater configured to generate modified attenuation correction data based on the CT-based attenuation correction data and the predicted attenuation correction data.
Shen teaches an inference engine configured to predict attenuation correction data based on non-attenuation corrected PET image data in response to the presence of attenuation correction artifact in the first PET image data and an attenuation correction data updater configured to generate modified attenuation correction data based on the CT-based attenuation correction data and the predicted attenuation correction data (Please note page 5, second paragraph. As indicated, the human tissue distribution prediction map output by the first generative adversarial network and the non-attenuation correction PET image are input to the second generative adversarial network to output the corresponding virtual CT image, comprising: inputting the non-attenuation correction PET image and the human tissue distribution prediction image to the generator network of the second generative adversarial network.).
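For the convenience of the applicant, the two-stage generative pipeline cited above may be illustrated with a minimal PyTorch-style sketch. The toy architecture and the names (SmallGenerator, gan1_generator, gan2_generator) are hypothetical stand-ins and do not reproduce Shen's actual networks; only the data flow (non-attenuation-corrected PET to tissue distribution map to virtual CT) mirrors the cited passage.

```python
import torch
import torch.nn as nn

class SmallGenerator(nn.Module):
    """Hypothetical stand-in generator (a few conv layers); Shen's
    actual generator networks are not reproduced here."""
    def __init__(self, in_ch, out_ch):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(in_ch, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, out_ch, 3, padding=1),
        )

    def forward(self, x):
        return self.net(x)

# Stage 1: non-attenuation-corrected (NAC) PET -> human tissue distribution map.
gan1_generator = SmallGenerator(in_ch=1, out_ch=1)
# Stage 2: (NAC PET, tissue map) -> virtual CT image for attenuation correction.
gan2_generator = SmallGenerator(in_ch=2, out_ch=1)

nac_pet = torch.randn(1, 1, 128, 128)             # placeholder NAC PET slice
tissue_map = gan1_generator(nac_pet)              # first GAN's prediction
virtual_ct = gan2_generator(torch.cat([nac_pet, tissue_map], dim=1))
# The virtual CT would then drive attenuation correction of the NAC PET data.
```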
Sun & Shen are combinable because they are from the same field of endeavor.
Before the effective filing date of the claimed invention, it would have been obvious to a person of ordinary skill in the art to utilize the non-attenuation corrected PET image data of Shen in Sun’s invention.
The suggestion/motivation for doing so would have been, as indicated on page 5, second paragraph, “so as to obtain the CT prediction image with rich bone structure.”
Therefore, it would have been obvious to combine Shen with Sun to obtain the invention as specified in claim 1.
Regarding claim 2, Shen recites wherein the attenuation correction artifact mitigator employs a trained network to identify attenuation correction artifact in PET image data sets (Please note page 5, last paragraph. As indicated, the PET image attenuation correction method and device provided by the application have the following beneficial effects: the application constructs a human bone tissue enhanced generation framework (BEGF) based on the generative adversarial network, and the CT image generated by the BEGF network achieves a better effect in terms of intensity similarity and tissue consistency, so as to improve the accuracy of PET attenuation correction.).
Regarding claim 3, Shen recites wherein the trained network includes a trained classifier (In this regard, a generative adversarial network inherently involves a trained classifier: its discriminator is trained to classify inputs as real or generated.).
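To illustrate the point that a generative adversarial network involves a trained classifier: the discriminator is, by construction, a binary classifier trained to distinguish real images from generated ones. A minimal sketch follows; the layer sizes and names are illustrative assumptions, not Shen's architecture.

```python
import torch
import torch.nn as nn

class Discriminator(nn.Module):
    """A GAN discriminator is a binary classifier: it outputs the
    probability that its input image is real rather than generated."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 16, 4, stride=2, padding=1), nn.LeakyReLU(0.2),   # 128 -> 64
            nn.Conv2d(16, 32, 4, stride=2, padding=1), nn.LeakyReLU(0.2),  # 64 -> 32
            nn.Flatten(),
            nn.Linear(32 * 32 * 32, 1),
        )

    def forward(self, x):
        return torch.sigmoid(self.net(x))

disc = Discriminator()
real_ct = torch.randn(1, 1, 128, 128)
p_real = disc(real_ct)  # classification score in (0, 1)
# Standard classifier training signal: binary cross-entropy against "real".
loss = nn.functional.binary_cross_entropy(p_real, torch.ones_like(p_real))
```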
Regarding claim 4, Shen recites wherein the trained network includes a trained segmentation network (Please note page 13, first paragraph. As indicated, in the first generative adversarial network, the semantic information in the human tissue distribution prediction map is constrained by a segmentation consistency constraint; the second generative adversarial network can also adopt a Dice loss function for a projection consistency constraint to restrict the virtual CT image synthesized in the projection domain.).
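For reference, a standard soft Dice loss of the kind named in the cited passage may be sketched as follows; this is the textbook formulation and is not asserted to be Shen's exact variant.

```python
import torch

def soft_dice_loss(logits, target, eps=1e-6):
    """Soft Dice loss: 1 - 2*|P∩T| / (|P| + |T|), computed on probability
    maps so the segmentation consistency constraint is differentiable."""
    probs = torch.sigmoid(logits)                    # logits -> probabilities
    inter = (probs * target).sum(dim=(-2, -1))
    denom = probs.sum(dim=(-2, -1)) + target.sum(dim=(-2, -1))
    return (1.0 - (2.0 * inter + eps) / (denom + eps)).mean()

# Usage with placeholder prediction/label maps:
pred = torch.randn(1, 1, 128, 128)
label = (torch.rand(1, 1, 128, 128) > 0.5).float()
loss = soft_dice_loss(pred, label)
```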
Regarding claim 5, Shen recites wherein the inference engine invokes reconstruction of non-attenuation corrected PET image data and non-attenuation corrected PET projection data (Please note the Abstract. As indicated, the PET image attenuation correction method, device, and apparatus comprise: obtaining the non-attenuation correction PET image and inputting it to the first generative adversarial network; inputting the human tissue distribution prediction image and the non-attenuation correction PET image to a second generative adversarial network; and performing attenuation correction on the non-attenuation correction PET image by using the virtual CT image to obtain the attenuation correction PET image.).
Regarding claim 6, Shen recites wherein the CT-based attenuation correction data includes a CT-based attenuation map, and the inference engine includes a trained attenuation and scatter network configured to predict PET image data that does not include attenuation correction artifact based on the non-attenuation corrected PET image data (Please note page 3, last paragraph. As indicated, obtaining the non-attenuation correction PET image and inputting it to the first generative adversarial network to output the human tissue distribution prediction map comprises: obtaining a non-attenuation correction PET image and inputting it to the generator network of the first generative adversarial network to obtain the human tissue distribution prediction map; and obtaining the real CT image, filtering it, and thresholding it according to pixel value to obtain the real human tissue distribution map.).
Regarding claim 7, Shen recites wherein the attenuation correction data updater is further configured to register CT-based attenuation correction data to the predicted PET image data to generate the modified attenuation correction data (Please note page 5, second paragraph. As indicated, the non-attenuation correction PET image and the human tissue distribution prediction image are input to the generator network of the second generative adversarial network, so as to obtain the CT prediction image with rich bone structure; the real CT image and the CT prediction image with abundant bone structure are input to the second generative adversarial network, and the generator network of the second generative adversarial network is trained based on the adversarial principle to output the virtual CT image.).
Regarding claim 8, Shen recites wherein the CT-based attenuation correction data includes a CT-based attenuation map, and the inference engine includes a trained deep learning network configured to predict an attenuation map based on the non-attenuation corrected PET image data (Please note page 5, fifth paragraph. As indicated, a tissue prediction module obtains the non-attenuation correction PET image and inputs it to the first generative adversarial network to output the human tissue distribution prediction map; a virtual training module inputs the human tissue distribution prediction image and the non-attenuation correction PET image to the second generative adversarial network; and an attenuation correction module uses the virtual CT image to attenuation-correct the non-attenuation correction PET image to obtain the attenuation correction PET image. In this regard, a generative adversarial network is a deep learning network.).
Regarding claim 9, Shen recites wherein the attenuation correction data updater is further configured to modify the CT-based attenuation map based on the predicted attenuation map to generate the modified attenuation correction data (Please note page 9, fourth paragraph. As indicated, the obtained non-attenuation correction PET image is input, the real human tissue distribution map is the target output, and the generator network of the first generative adversarial network is trained to make it output the human tissue distribution prediction map; the real CT image is obtained, filtered, and thresholded according to pixel value to obtain the real human tissue distribution map.).
Regarding claim 10, Shen recites wherein the attenuation correction data updater is further configured to modify the CT-based attenuation map based on one or more user selectable constraints (Please note page 5, fourth paragraph. As indicated, the second generative adversarial network uses the mean absolute error loss function for the projection consistency constraint, comprising: respectively transforming the virtual CT image and the obtained real CT image through the Radon transform to obtain the corresponding virtual CT projection and real CT projection; and using the mean absolute error loss function to constrain the virtual CT projection and the real CT projection to be consistent with each other.).
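To illustrate the projection consistency constraint described in the cited passage: both the virtual and real CT images are Radon-transformed into the projection (sinogram) domain, and the mean absolute error between the two sinograms is penalized. A minimal NumPy/scikit-image sketch follows; the angle set and names are illustrative assumptions.

```python
import numpy as np
from skimage.transform import radon

def projection_consistency_mae(virtual_ct, real_ct, angles_deg=None):
    """Radon-transform both images and return the mean absolute error
    between the resulting projections (sinograms)."""
    if angles_deg is None:
        angles_deg = np.arange(180.0)
    virt_proj = radon(virtual_ct, theta=angles_deg, circle=False)
    real_proj = radon(real_ct, theta=angles_deg, circle=False)
    return np.abs(virt_proj - real_proj).mean()

# Usage with placeholder images:
virtual_ct = np.random.rand(128, 128)
real_ct = np.random.rand(128, 128)
mae = projection_consistency_mae(virtual_ct, real_ct)
```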
Regarding claims 11 and 12-15, analyses similar to those presented for claims 1 and 6-9, respectively, are applicable.
Regarding claims 16 and 17-20, analyses similar to those presented for claims 1 and 6-9, respectively, are applicable.
Examiner’s Note
The examiner cites particular figures, paragraphs, columns and line numbers in the references as applied to the claims for the convenience of the applicant. Although the specified citations are representative of the teachings in the art and are applied to the specific limitations within the individual claims, other passages and figures may apply as well.
It is respectfully requested that, in preparing responses, the applicant fully consider the references in their entirety as potentially teaching all or part of the claimed invention, as well as the context of the passage as taught by the prior art or disclosed by the examiner.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to AMIR ALAVI whose telephone number is (571)272-7386. The examiner can normally be reached on M-F from 8:00-4:30.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Vu Le can be reached at (571)272-7332. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format.
For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/AMIR ALAVI/Primary Examiner, Art Unit 2668 Friday, February 13, 2026