Prosecution Insights
Last updated: April 19, 2026
Application No. 18/631,817

METHOD, SYSTEM AND/OR COMPUTER READABLE MEDIUM FOR MITIGATING ATTENUATION CORRECTION ARTIFACT IN PET DATA

Non-Final OA §103

Filed: Apr 10, 2024
Examiner: ALAVI, AMIR
Art Unit: 2668
Tech Center: 2600 — Communications
Assignee: GE Precision Healthcare LLC
OA Round: 1 (Non-Final)
Grant Probability: 94% (Favorable)
Expected OA Rounds: 1-2
Time to Grant: 2y 5m
With Interview: 97%

Examiner Intelligence

Career Allow Rate: 94% — above average (1083 granted / 1156 resolved; +31.7% vs TC avg)
Interview Lift: +3.6% (minimal, roughly +4%, across resolved cases with vs. without interview)
Typical Timeline: 2y 5m avg prosecution; 23 currently pending
Career History: 1179 total applications, across all art units

Statute-Specific Performance

Statute   Rate     vs TC Avg
§101      23.0%    -17.0%
§103      20.2%    -19.8%
§102      19.5%    -20.5%
§112      12.9%    -27.1%

Black line = Tech Center average estimate • Based on career data from 1156 resolved cases
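The per-statute deltas shown above are all consistent with a single Tech Center average. A quick sanity check in Python; note that the 40.0% figure is an inference from the displayed deltas (rate plus delta is constant across statutes), not a number stated anywhere on this page:

```python
# Examiner's statute-specific rates, as shown in the table above (percent)
examiner_rates = {"101": 23.0, "103": 20.2, "102": 19.5, "112": 12.9}

# Inferred Tech Center average estimate (assumption: the "black line" value)
tc_avg = 40.0

# Reproduce the "vs TC Avg" column
deltas = {s: round(r - tc_avg, 1) for s, r in examiner_rates.items()}
print(deltas)
```

Every computed delta matches the displayed column, which suggests the TC average estimate used for all four statutes is the same ~40%.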

Office Action

§103
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103, which forms the basis for all obviousness rejections set forth in this Office action:

"A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made."

The factual inquiries set forth in Graham v. John Deere Co., 383 U.S. 1, 148 USPQ 459 (1966), that are applied for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:

1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.

Claims 1-20 are rejected under 35 U.S.C. 103 as being unpatentable over Sun (CN 110215226 A, Image Attenuation Correction Method, Device, Computer Device And Storage Medium) in view of Shen et al. (CN 115393460 A, PET Image Attenuation Correction Method, Device And Device), hereinafter "Shen".

Regarding claim 1, Sun teaches: an attenuation corrector configured to generate Computed Tomography- (CT-) based attenuation correction data from CT image data (see page 3, second paragraph, indicating an image attenuation correction method using a CT image); a Positron Emission Tomography (PET) reconstructor configured to reconstruct first PET image data based on PET projection data and the CT-based attenuation correction data (see page 3, second paragraph, indicating an image attenuation correction method using a CT image that reduces respiratory artifact in the PET image after PET image reconstruction); an attenuation correction artifact mitigator configured to analyze the first PET image data for a presence of attenuation correction artifact (see page 7, fourth paragraph, indicating performing attenuation correction on matching-frame PET scan data according to the correction image, so as to obtain a PET attenuation-corrected reconstructed image without respiratory artifact); and wherein the PET reconstructor is further configured to reconstruct second PET image data based on the PET projection data and the modified attenuation correction data (see page 15, second paragraph, indicating matching a frame of PET data with the correction image in the same breath phase or at the same amplitude of breathing, and using the matching frame of PET data and the corrected image for attenuation correction, so as to obtain a PET attenuation-corrected reconstructed image without respiratory artifact).

Sun does not expressly teach an inference engine configured to predict attenuation correction data based on non-attenuation corrected PET image data in response to the presence of attenuation correction artifact in the first PET image data, and an attenuation correction data updater configured to generate modified attenuation correction data based on the CT-based attenuation correction data and the predicted attenuation correction data.

Shen teaches an inference engine configured to predict attenuation correction data based on non-attenuation corrected PET image data in response to the presence of attenuation correction artifact in the first PET image data, and an attenuation correction data updater configured to generate modified attenuation correction data based on the CT-based attenuation correction data and the predicted attenuation correction data (see page 5, second paragraph, indicating inputting the human tissue distribution prediction map output by the first generative adversarial network, together with the non-attenuation correction PET image, to the second generative adversarial network to output the corresponding virtual CT image, comprising inputting the non-attenuation correction PET image and the human tissue distribution prediction image to the generator network of the second generative adversarial network).

Sun and Shen are combinable because they are from the same field of endeavor. Before the effective filing date, it would have been obvious to a person of ordinary skill in the art to utilize the non-attenuation corrected PET image data of Shen in Sun's invention. The suggestion/motivation for doing so would have been, as indicated on page 5, second paragraph, "so as to obtain the CT prediction image with rich bone structure." Therefore, it would have been obvious to combine Shen with Sun to obtain the invention as specified in claim 1.

Regarding claim 2, Shen recites wherein the attenuation correction artifact mitigator employs a trained network to identify attenuation correction artifact in PET image data sets (see page 5, last paragraph, indicating that the application's human bone tissue enhanced generation framework (BEGF), constructed based on the generative adversarial network and on the intensity similarity and tissue consistency of the CT image generated by the BEGF network, reaches a better effect, so as to improve the accuracy of PET attenuation correction).

Regarding claim 3, Shen recites wherein the trained network includes a trained classifier (in this regard, generative adversarial networks involve a trained classifier).

Regarding claim 4, Shen recites wherein the trained network includes a trained segmentation network (see page 13, first paragraph, indicating that in the first generative adversarial network the semantic information in the human tissue distribution prediction map is constrained by a segmentation consistency constraint, and that the second generative adversarial network can also adopt a Dice loss function as a projection consistency constraint to restrict the virtual CT image synthesized in the projection domain).

Regarding claim 5, Shen recites wherein the inference engine invokes reconstruction of non-attenuation corrected PET image data and non-attenuation corrected PET projection data (see the Abstract, indicating a PET image attenuation correction method, device and device comprising: obtaining the non-attenuation correction PET image and inputting it to the first generative adversarial network; inputting the human tissue distribution prediction image and the non-attenuation correction PET image to a second generative adversarial network; and performing attenuation correction on the non-attenuation correction PET image using the virtual CT image to obtain the attenuation correction PET image).

Regarding claim 6, Shen recites wherein the CT-based attenuation correction data includes a CT-based attenuation map, and the inference engine includes a trained attenuation and scatter network configured to predict PET image data that does not include attenuation correction artifact based on the non-attenuation corrected PET image data (see page 3, last paragraph, indicating obtaining the non-attenuation correction PET image and inputting it to the first generative adversarial network to output the human tissue distribution prediction map, comprising: obtaining a non-attenuation correction PET image and inputting it to the generator network of the first generative adversarial network to obtain the human tissue distribution prediction map; and obtaining the real CT image for filtering processing and thresholding according to the pixel value to obtain the real human tissue distribution map).

Regarding claim 7, Shen recites wherein the attenuation correction data updater is further configured to register CT-based attenuation correction data to the predicted PET image data to generate the modified attenuation correction data (see page 5, second paragraph, indicating inputting the non-attenuation correction PET image and the human tissue distribution prediction image to the generator network of the second generative adversarial network so as to obtain the CT prediction image with rich bone structure, inputting the real CT image and the CT prediction image with abundant bone structure to the second generative adversarial network, and training the generator network of the second generative adversarial network based on the adversarial principle to output the virtual CT image).

Regarding claim 8, Shen recites wherein the CT-based attenuation correction data includes a CT-based attenuation map, and the inference engine includes a trained deep learning network configured to predict an attenuation map based on the non-attenuation corrected PET image data (see page 5, fifth paragraph, indicating a tissue prediction module for obtaining the non-attenuation correction PET image and inputting it to the first generative adversarial network to output the human tissue distribution prediction map; a virtual training module for inputting the human tissue distribution prediction image and the non-attenuation correction PET image to the second generative adversarial network; and an attenuation correction module for using the virtual CT image to attenuation-correct the non-attenuation correction PET image to obtain the attenuation correction PET image; in this regard, a generative adversarial network is a deep learning network).

Regarding claim 9, Shen recites wherein the attenuation correction data updater is further configured to modify the CT-based attenuation map based on the predicted attenuation map to generate the modified attenuation correction data (see page 9, fourth paragraph, indicating that the obtained non-attenuation correction PET image is input and the real human tissue distribution map is output, the generator network of the first generative adversarial network being trained to output the human tissue distribution prediction map, and the real CT image being obtained for filtering processing and thresholded according to the pixel value to obtain the real human tissue distribution map).

Regarding claim 10, Shen recites wherein the attenuation correction data updater is further configured to modify the CT-based attenuation map based on one or more user selectable constraints (see page 5, fourth paragraph, indicating the second generative adversarial network using the average absolute error loss function as a projection consistency constraint, comprising: respectively transforming the virtual CT image and the obtained real CT image through the Radon transform to obtain the corresponding virtual CT projection and real CT projection, and using the average absolute error loss function to constrain the virtual CT projection and the real CT projection to be in accordance with each other).

Regarding claims 11 and 12-15, analysis similar to that presented for claims 1 and 6-9, respectively, is applicable. Regarding claims 16 and 17-20, analysis similar to that presented for claims 1 and 6-9, respectively, is applicable.

Examiner's Note

The examiner cites particular figures, paragraphs, columns and line numbers in the references as applied to the claims for the convenience of the applicant. Although the specified citations are representative of the teachings in the art and are applied to the specific limitations within the individual claims, other passages and figures may apply as well. It is respectfully requested that, in preparing responses, the applicant fully consider the references in their entirety as potentially teaching all or part of the claimed invention, as well as the context of the passage as taught by the prior art or disclosed by the examiner.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to AMIR ALAVI, whose telephone number is (571) 272-7386. The examiner can normally be reached M-F from 8:00-4:30. Examiner interviews are available via telephone, in person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Vu Le, can be reached at (571) 272-7332. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/AMIR ALAVI/
Primary Examiner, Art Unit 2668
Friday, February 13, 2026
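The projection consistency constraint attributed to Shen in the claim 10 rejection (Radon-transforming the virtual and real CT images, then constraining them with an average absolute error loss) can be illustrated with a toy sketch. This is not Shen's implementation: the function names are hypothetical, and simple row/column sums stand in for 0°/90° parallel-beam Radon projections so the example stays self-contained.

```python
def project(img):
    # Toy two-view "sinogram": column sums and row sums stand in for
    # 0-degree and 90-degree parallel-beam Radon projections of a 2-D image.
    cols = [sum(row[j] for row in img) for j in range(len(img[0]))]
    rows = [sum(row) for row in img]
    return cols + rows

def projection_consistency_loss(virtual_ct, real_ct):
    # Average absolute error between the two images' projections.
    pv, pr = project(virtual_ct), project(real_ct)
    return sum(abs(a - b) for a, b in zip(pv, pr)) / len(pv)

# A 4x4 "real CT" and a uniformly shifted "virtual CT"
real = [[i + j for j in range(4)] for i in range(4)]
virtual = [[v + 1 for v in row] for row in real]

print(projection_consistency_loss(real, real))     # identical images: loss is 0.0
print(projection_consistency_loss(virtual, real))  # each 4-element sum shifts by 4
```

A training loop would add this term to the generator's loss so that the synthesized virtual CT agrees with the real CT in the projection domain, which is the stated purpose of the constraint.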

Prosecution Timeline

Apr 10, 2024: Application Filed
Feb 13, 2026: Non-Final Rejection — §103
Apr 14, 2026: Interview Requested

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12597232: SYSTEM FOR LEARNING NEW VISUAL INSPECTION TASKS USING A FEW-SHOT META-LEARNING METHOD
Granted Apr 07, 2026 (2y 5m to grant)

Patent 12573189: PROCESSING METHOD AND PROCESSING DEVICE USING SAME
Granted Mar 10, 2026 (2y 5m to grant)

Patent 12567238: GENERATING A DATA STRUCTURE FOR SPECIFYING VISUAL DATA SETS
Granted Mar 03, 2026 (2y 5m to grant)

Patent 12561950: AI System and Method for Automatic Analog Gauge Reading
Granted Feb 24, 2026 (2y 5m to grant)

Patent 12561774: SYSTEM AND METHOD FOR REAL-TIME TONE-MAPPING
Granted Feb 24, 2026 (2y 5m to grant)
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 1-2
Grant Probability: 94%
With Interview (+3.6%): 97%
Median Time to Grant: 2y 5m
PTA Risk: Low

Based on 1156 resolved cases by this examiner. Grant probability derived from career allow rate.
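The headline projections can be reproduced from the career numbers reported for this examiner (1083 granted of 1156 resolved, +3.6% interview lift). This sketch assumes the displayed 94% and 97% are simple roundings of the career allow rate and the interview-adjusted rate:

```python
# Career totals from the Examiner Intelligence section
granted, resolved = 1083, 1156
interview_lift = 3.6  # percentage-point lift for resolved cases with an interview

base_rate = granted / resolved * 100   # career allow rate, ~93.7%
with_interview = base_rate + interview_lift

print(f"Grant probability: {base_rate:.0f}%")      # 94%
print(f"With interview:    {with_interview:.0f}%")  # 97%
```

If the dashboard instead applied the lift multiplicatively or conditioned on art unit, the rounded figures would still come out the same here, so the arithmetic above is only a consistency check, not a statement of the tool's actual model.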
