Prosecution Insights
Last updated: April 19, 2026
Application No. 17/187,727

METHODS OF ESTIMATION-BASED SEGMENTATION AND TRANSMISSION-LESS ATTENUATION AND SCATTER COMPENSATION IN NUCLEAR MEDICINE IMAGING

Status: Non-Final OA (§103)
Filed: Feb 26, 2021
Examiner: HOANG, HAN DINH
Art Unit: 2661
Tech Center: 2600 — Communications
Assignee: Washington University
OA Round: 6 (Non-Final)
Grant Probability: 74% (Favorable)
Expected OA Rounds: 6-7
Time to Grant: 3y 2m
Grant Probability With Interview: 93%

Examiner Intelligence

Career Allow Rate: 74% (120 granted / 162 resolved), +12.1% vs TC avg (above average)
Interview Lift: +19.3% higher allow rate for resolved cases with an interview than without (strong)
Typical Timeline: 3y 2m average prosecution; 25 applications currently pending
Career History: 187 total applications across all art units
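
As a quick sanity check on how the headline figures relate to one another, the short script below recomputes the career allow rate from the granted/resolved counts and derives the "with interview" figure by adding the interview lift to the base rate. The additive treatment of the lift is an assumption about how the dashboard arrives at 93%; this is an illustrative back-of-the-envelope sketch, not the report's actual methodology.

```python
# Illustrative recomputation of the examiner-card figures (assumptions noted inline).
granted, resolved = 120, 162               # career totals shown on the card
career_allow_rate = granted / resolved
print(f"Career allow rate: {career_allow_rate:.1%}")          # ~74.1%

interview_lift = 0.193                     # reported +19.3% lift for interviewed cases
# Assumption: the 93% "with interview" figure is the base rate plus the lift, capped at 100%.
with_interview = min(career_allow_rate + interview_lift, 1.0)
print(f"Grant probability with interview: {with_interview:.0%}")  # ~93%
```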

Statute-Specific Performance

§101: 6.9% (-33.1% vs TC avg)
§103: 65.7% (+25.7% vs TC avg)
§102: 15.5% (-24.5% vs TC avg)
§112: 7.1% (-32.9% vs TC avg)
Tech Center averages are estimates • Based on career data from 162 resolved cases
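
If each "vs TC avg" figure is read as the examiner's rate minus the Tech Center average (an assumption about how the dashboard computes the delta, not something the report states), the implied Tech Center baselines can be recovered with a few lines of arithmetic, as sketched below.

```python
# Recover the implied Tech Center baselines from the reported deltas (illustrative only).
# Assumption: delta = examiner_rate - tc_average, both in percentage points.
examiner = {"§101": 6.9, "§103": 65.7, "§102": 15.5, "§112": 7.1}
delta    = {"§101": -33.1, "§103": 25.7, "§102": -24.5, "§112": -32.9}

for statute, rate in examiner.items():
    tc_avg = rate - delta[statute]
    print(f"{statute}: examiner {rate:.1f}% vs implied TC average {tc_avg:.1f}%")
# Under this reading, every statute implies a TC baseline of 40.0%.
```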

Office Action

§103
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Continued Examination Under 37 CFR 1.114

A request for continued examination under 37 CFR 1.114, including the fee set forth in 37 CFR 1.17(e), was filed in this application after final rejection. Since this application is eligible for continued examination under 37 CFR 1.114, and the fee set forth in 37 CFR 1.17(e) has been timely paid, the finality of the previous Office action has been withdrawn pursuant to 37 CFR 1.114. Applicant's submission filed on 02/09/2026 has been entered.

Claim Objections

Claim 6 is objected to because of the following informalities: "6. (Currently amended)" should be "6. (withdrawn, Currently amended)". Appropriate correction is required.

Response to Arguments

Applicant's amendment filed 02/09/2026 has been entered and made of record. Claims 1 and 6 are amended. New claims 11-14 were added. Claims 6-8 and 13-14 are withdrawn. Claims 1-3, 5 and 11-12 are pending.

The Examiner contacted Attorney of Record Michael McCay, Reg. No. 64862, on 02/26/2026 regarding the status of claim 6, as the claim was previously withdrawn and amended. The Attorney stated in the call that there was a mistake and that the claim is still withdrawn and amended. The Examiner has objected to the claim as a minor informality, and an appropriate correction should be made to reflect the status of the claim.

Applicant's arguments with respect to claims 1-3, 5 and 11-12 have been considered but are moot because of the new ground of rejection set forth below. In the remarks filed 02/09/2026, the applicant argues that the cited prior art does not explicitly disclose the newly amended limitation that the deep learning network is trained using a training dataset comprising a plurality of high-resolution segmented MRI images and a corresponding plurality of low-resolution segmented nuclear medicine images. The Examiner agrees that the previously cited prior art does not disclose this limitation. However, after further search and consideration, the newly discovered art of Song et al. ("Super-Resolution PET Imaging Using Convolutional Neural Networks") discloses this limitation. Please see the updated claim rejection under 35 USC § 103 below.

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.

Claims 1 and 5 are rejected under 35 U.S.C. 103 as being unpatentable over Sjöstrand et al. US PG-Pub (US 20190209116 A1) in view of Patriarche US PG-Pub (US 20190259197 A1) in view of Hamarneh et al. US PG-Pub (US 20140198979 A1) in further view of Song et al. ("Super-Resolution PET Imaging Using Convolutional Neural Networks").

Regarding Claim 1, Sjöstrand teaches a computer-implemented method for segmenting a nuclear medicine image ([0076], "FIGS. 7A-7E present a block diagram of a CNN module architecture (localization network) for identifying a volume of interest (e.g., VOI) corresponding to a pelvic region within a CT image of a subject (wherein the VOI is subsequently processed by a second CNN module for more detailed segmentation/identification of the prostate and/or other tissues within the pelvic region), according to an illustrative embodiment." Figs. 7A-7E show a CNN performing segmentation of a nuclear image.), the method comprising transforming, using a computing device, a nuclear medicine image dataset comprising a plurality of voxels into a segmented nuclear medicine image dataset comprising a plurality of segmented voxels using a deep learning network (¶[0235], "A bounding box that identifies an initial VOI may be determined from the first segmentation mask set as a smallest box (e.g., rectangular volume) that comprises all voxels labeled as belonging to categories (i)-(iii). Coordinates identifying the bounding box (e.g., coordinates of opposite corners of a rectangular volume corresponding to the bounding box) are determined and output as crop endpoints. In certain embodiments, wherein the preprocessed CT image input to the Localization CNN is a resized version of the original CT image, the coordinates identifying the bounding box are transformed to a coordinate system of the original CT image and output as crop endpoints." ¶[0235] discloses that the input image to the CNN for training is a segmented image comprising a plurality of voxels with bounding boxes for identifying a volume of interest (e.g., VOI).), wherein: each segmented voxel is associated with at least one voxel volume fraction (¶[0051], "wherein at least a portion of the plurality of voxels of the 3D functional image represent physical volumes within the pelvic region of the subject; determine, using a first module (e.g., a first machine learning module), an initial volume of interest (VOI) within the 3D anatomical image (e.g., a rectangular prism), the initial VOI corresponding to an anatomical sub-region (e.g., a group of related tissue, such as a pelvic region, a chest region, a head and/or neck region, and the like) comprising the target region (e.g., wherein the VOI excludes more voxels of the 3D anatomical image than it includes; e.g., wherein the VOI includes less than 25% of the voxels of the 3D anatomical image; e.g., wherein a majority of voxels within the VOI represent physical volumes within the anatomical sub-region)". The examiner is interpreting each voxel to be the voxels associated with the physical volumes in the image.
Furthermore, the examiner is interpreting the voxel volume fraction to be the percentage of the voxels associated with the physical volumes in the image in order to generate a volume of interest. Thus, under the broadest reasonable interpretation, Sjöstrand teaches in ¶[0051] the idea that each voxel is associated with a physical volume and a volume of interest is then segmented based on the percentage of each voxel being indicative of a tissue region.)

Sjöstrand does not explicitly teach each voxel volume fraction comprises a value ranging from zero to one indicating a fraction of the segmented voxel's volume occupied by each tissue type.

Patriarche teaches each voxel volume fraction comprises a value ranging from zero to one indicating a fraction of the segmented voxel's volume occupied by each tissue type (¶[0021], "FIG. 4 illustrates a three-dimensional medical image. The medical image 402 is represented as a volume containing multiple voxels. Voxels are the three-dimensional equivalents of pixels in two-dimensional images. The multiple voxels completely fill the entire medical-image volume. Each voxel is associated with an intensity value, a floating-point value representing the image intensity within the grayscale range [0,1.0], where 1.0 represents white, 0 represents black, and intermediate fractional values from 0 to 1 represent a sequence of increasingly lighter gray. In general, different types of tissue produce different grayscale values under any given set of instrumental and imaging conditions. Boundaries between tissues are recognized by the grayscale contrast along a generally curved surface separating the two different tissues. A voxel corresponds to a voxel volume within a patient's body." As disclosed in ¶[0021], the prior art assigns a voxel fraction with a value ranging from 0 to 1, which is used to identify different types of tissues based on the intensity value of the voxel, as shown in Figure 6, which illustrates the intensity profile of a particular type of tissue.)

It would have been obvious to one of ordinary skill in the art before the effective filing date to modify the claimed invention as taught by Sjöstrand with Patriarche in order to have the voxel fraction comprise a range of 0 to 1 to identify tissue type. One skilled in the art would have been motivated to modify Sjöstrand in this manner in order to seek new and improved automated systems and methods for interpreting medical images to facilitate medical diagnosis and treatment planning. (Patriarche, ¶[0003])

However, the combination of Sjöstrand and Patriarche does not explicitly teach wherein a voxel volume fraction value of zero indicates no fraction of the segmented voxel's volume is occupied by the tissue type and a voxel volume fraction value of one indicates an entire segmented voxel's volume is occupied by the tissue type.

Hamarneh teaches wherein a voxel volume fraction value of zero indicates no fraction of the segmented voxel's volume is occupied by the tissue type and a voxel volume fraction value of one indicates an entire segmented voxel's volume is occupied by the tissue type (¶[0007], "Image segmentation can be represented in various ways, such as by description of segment boundaries (e.g., by curves, surfaces, and the like), and values assigned to image regions (e.g., values assigned to pixels, voxels or other coordinates corresponding to image locations).
Values assigned to image regions may comprise labels (e.g., indicating that the labelled image region has, or has been determined to have, a particular characteristic), or probabilities (e.g., indicating the probability that the corresponding region has a particular characteristic). For example, an image segmentation may assign every voxel in a 3D image a label having a value of 0 or 1 to indicate that the voxel belongs to the background or a foreground object, or may assign every voxel in a 3D image a probability value in the range of 0 to 1 to indicate that probability that the voxel belongs to a particular object. In some cases, there may be multiple objects (e.g. multiple types of tissues) within a given 3D image and an image segmentation may assign multiple probabilities to each voxel in the image, with each probability representing a likelihood that the voxel corresponds to a particular one of the multiple objects." As disclosed in this section of the prior art, the segmented voxels are assigned values; if the value is 1 then the segmented voxel corresponds to a tissue type, and if the value is 0 then it corresponds to a region that is not the tissue area.)

It would have been obvious to one of ordinary skill in the art before the effective filing date to modify the claimed invention as taught by Sjöstrand and Patriarche with Hamarneh in order to determine if the segmented voxel volume is a specific tissue type based on the value assigned. One skilled in the art would have been motivated to modify Sjöstrand and Patriarche in this manner in order to identify the low-confidence region from among a plurality of candidate regions. (Hamarneh, Abstract)

However, Sjöstrand, Patriarche and Hamarneh do not explicitly teach element (c): the deep learning network is trained using a training dataset comprising a plurality of high-resolution segmented MRI images and a corresponding plurality of low-resolution segmented nuclear medicine images.

Song teaches element (c): the deep learning network is trained using a training dataset comprising a plurality of high-resolution segmented MRI images and a corresponding plurality of low-resolution segmented nuclear medicine images. (Fig. 1, CNN architecture for SR PET: the network uses up to 4 inputs, (i) LR PET (the main input), (ii) HR MR, (iii) radial locations, and (iv) axial locations. Figure 1 shows the CNN uses a low-resolution PET image and a high-resolution MR image as training images to train the CNN. Right Col., Page 519, "As illustrated in the schematic in Fig. 1, we employ CNNs with multi channel inputs, that include LR PET and HR MR input channels.")

It would have been obvious to one of ordinary skill in the art before the effective filing date to modify the claimed invention as taught by Sjöstrand, Patriarche, Hamarneh and Creswell with Song in order to use both MRI and PET images in the training data set. One skilled in the art would have been motivated to modify Sjöstrand, Patriarche, Hamarneh and Creswell in this manner in order to present a super-resolution (SR) imaging technique for PET based on convolutional neural networks (CNNs) to facilitate the resolution recovery process. (Song, Abstract)

Regarding Claim 5, the combination of Sjöstrand, Patriarche, Hamarneh and Song teaches the method of claim 1, where Sjöstrand further teaches wherein the nuclear medicine image is selected from the group consisting of a PET image and a SPECT image.
([0006], "An oncologist may use images from a targeted PET or SPECT study of a patient as input in her assessment of whether the patient has a particular disease, e.g., prostate cancer, what stage of the disease is evident, what the recommended course of treatment (if any) would be, whether surgical intervention is indicated, and likely prognosis." [0015], "Accordingly, the image analysis approaches described herein utilize convolutional neural networks (CNNs) to accurately identify a prostate volume within the CT image that corresponds to the prostate of the subject. The identified prostate volume can be used to identify those voxels of the SPECT image that also correspond to the subject's prostate." As disclosed in ¶[0006], the nuclear medicine image is selected as either a PET or SPECT image, and ¶[0015] discloses the prior art using a SPECT image as an input to a CNN for training the model to identify certain voxels.)

Claims 2-3 are rejected under 35 U.S.C. 103 as being unpatentable over Sjöstrand et al. US PG-Pub (US 20190209116 A1) in view of Patriarche US PG-Pub (US 20190259197 A1) in view of Hamarneh et al. US PG-Pub (US 20140198979 A1) in view of Song et al. (US 20180061058 A1) in further view of Creswell et al. ("On denoising autoencoders trained to minimise binary cross-entropy").

Regarding Claim 2, while the combination of Sjöstrand, Patriarche, Hamarneh and Song teaches the method of claim 1, they do not explicitly teach wherein the deep learning network is configured to minimize a binary cross-entropy (BCE) of a Bayesian cost function to estimate the posterior mean of the one tissue type within each voxel.

Creswell teaches wherein the deep learning network is configured to minimize a binary cross-entropy (BCE) of a Bayesian cost function to estimate the posterior mean of the one tissue type within each voxel. (The Theory section on Page 2 shows a method in which the binary cross-entropy is minimized, and Page 6, Section 6, Discussion, Paragraph 1 discloses that reducing the binary cross-entropy leads to better detection between the data samples.)

It would have been obvious to one of ordinary skill in the art before the effective filing date to modify the claimed invention as taught by Sjöstrand, Patriarche, Hamarneh and Song with Creswell in order to incorporate an existing autoencoder network that minimizes the loss in the image while training the network. One skilled in the art would have been motivated to modify Sjöstrand, Patriarche, Hamarneh and Song in this manner in order to recover clean versions of corrupted input samples of the network. (Creswell, Page 1, Introduction)

Regarding Claim 3, the combination of Sjöstrand, Patriarche, Hamarneh, Song and Creswell teaches the method of claim 2, where Creswell further teaches wherein the deep learning network comprises an autoencoder-decoder architecture (Page 2, 4.1 Setup, Paragraph 1, "For our experiments we train denoising variants of two state-of-the-art generative autoencoder models—the variational autoencoder (VAE) [4, 5] and the adversarial autoencoder (AAE) [6]—for the two reasons specified above". As disclosed in this section of the prior art, the architecture of the network is an autoencoder-decoder architecture.)
It would have been obvious to one of ordinary skill in the art before the effective filing date to modify the claimed invention as taught by Sjöstrand, Patriarche, Hamarneh and Song with Creswell in order to incorporate an existing autoencoder network into the current neural network architecture. One skilled in the art would have been motivated to modify Sjöstrand, Patriarche, Hamarneh and Song in this manner in order to provide a powerful means of learning useful representations of observed data through supervised learning. (Creswell, Page 1, Introduction)

Allowable Subject Matter

Claims 11-12 are objected to as being dependent upon a rejected base claim, but would be allowable if rewritten in independent form including all of the limitations of the base claim and any intervening claims.

Conclusion

Any inquiry concerning this communication or earlier communications from the examiner should be directed to HAN D HOANG whose telephone number is (571) 272-4344. The examiner can normally be reached Monday-Friday, 8-5.

Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, JOHN M VILLECCO, can be reached at 571-272-7319. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/HAN HOANG/
Examiner, Art Unit 2661
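
For orientation on the technology at issue in claims 1-3, the sketch below shows the general kind of network the rejection discusses: an encoder-decoder CNN that takes a low-resolution nuclear medicine volume plus a high-resolution MRI-derived channel and outputs per-voxel tissue volume fractions in [0, 1], trained with a binary cross-entropy loss. This is an illustrative reconstruction for readability only, not the applicant's implementation or code from any cited reference; the class name, layer sizes, and channel counts are all assumptions.

```python
# Illustrative sketch only: encoder-decoder CNN producing per-voxel tissue volume
# fractions in [0, 1] from a low-resolution nuclear medicine volume and an
# MRI-derived channel, trained with binary cross-entropy. All names and sizes
# are hypothetical, not the claimed method or any cited reference's network.
import torch
import torch.nn as nn

class FractionSegNet(nn.Module):
    def __init__(self, in_channels=2, n_tissues=3):
        super().__init__()
        # Encoder: downsample the multi-channel input (e.g., SPECT/PET + MRI channel).
        self.encoder = nn.Sequential(
            nn.Conv3d(in_channels, 16, 3, padding=1), nn.ReLU(),
            nn.Conv3d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
        )
        # Decoder: upsample back to the input grid, one output channel per tissue type.
        self.decoder = nn.Sequential(
            nn.ConvTranspose3d(32, 16, 2, stride=2), nn.ReLU(),
            nn.Conv3d(16, n_tissues, 1),
        )
        # Sigmoid keeps each tissue fraction in [0, 1]:
        # 0 = none of the voxel is that tissue, 1 = the entire voxel is.
        self.out = nn.Sigmoid()

    def forward(self, x):
        return self.out(self.decoder(self.encoder(x)))

if __name__ == "__main__":
    net = FractionSegNet()
    # Hypothetical paired training sample: low-res nuclear volume + high-res
    # MRI-derived channel as input, MRI-derived fraction maps as the target.
    x = torch.rand(1, 2, 32, 32, 32)        # (batch, channels, depth, height, width)
    target = torch.rand(1, 3, 32, 32, 32)   # ground-truth fractions in [0, 1]
    pred = net(x)
    loss = nn.BCELoss()(pred, target)       # binary cross-entropy, as in claim 2's wording
    loss.backward()
    print(f"fractions in [{pred.min().item():.2f}, {pred.max().item():.2f}], BCE = {loss.item():.3f}")
```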

Prosecution Timeline

Feb 26, 2021
Application Filed
Sep 01, 2023
Non-Final Rejection — §103
Mar 08, 2024
Response Filed
Apr 13, 2024
Non-Final Rejection — §103
Oct 18, 2024
Response Filed
Nov 02, 2024
Final Rejection — §103
Jan 07, 2025
Response after Non-Final Action
Feb 06, 2025
Request for Continued Examination
Feb 07, 2025
Response after Non-Final Action
Feb 18, 2025
Non-Final Rejection — §103
Aug 11, 2025
Interview Requested
Aug 21, 2025
Response Filed
Sep 20, 2025
Final Rejection — §103
Dec 09, 2025
Interview Requested
Dec 16, 2025
Examiner Interview Summary
Dec 16, 2025
Applicant Interview (Telephonic)
Feb 09, 2026
Request for Continued Examination
Feb 18, 2026
Response after Non-Final Action
Mar 03, 2026
Non-Final Rejection — §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12602835
POINT CLOUD DATA TRANSMISSION DEVICE, POINT CLOUD DATA TRANSMISSION METHOD, POINT CLOUD DATA RECEPTION DEVICE, AND POINT CLOUD DATA RECEPTION METHOD
2y 5m to grant • Granted Apr 14, 2026
Patent 12602778
INFORMATION PROCESSING APPARATUS, INFORMATION PROCESSING METHOD, AND INFORMATION PROCESSING PROGRAM
2y 5m to grant • Granted Apr 14, 2026
Patent 12602918
LEARNING DATA GENERATING APPARATUS, LEARNING DATA GENERATING METHOD, AND NON-TRANSITORY RECORDING MEDIUM HAVING LEARNING DATA GENERATING PROGRAM RECORDED THEREON
2y 5m to grant • Granted Apr 14, 2026
Patent 12592070
IMAGE PROCESSING APPARATUS
2y 5m to grant • Granted Mar 31, 2026
Patent 12586364
SINGLE IMAGE CONCEPT ENCODER FOR PERSONALIZATION USING A PRETRAINED DIFFUSION MODEL
2y 5m to grant • Granted Mar 24, 2026
Study what changed to get past this examiner, based on the 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 6-7
Grant Probability: 74%
With Interview: 93% (+19.3%)
Median Time to Grant: 3y 2m
PTA Risk: High
Based on 162 resolved cases by this examiner. Grant probability derived from career allow rate.
