Prosecution Insights
Last updated: April 19, 2026
Application No. 18/552,913

METHOD OF NON-INVASIVE MEDICAL TOMOGRAPHIC IMAGING WITH UNCERTAINTY ESTIMATION

Non-Final OA: §101, §103
Filed: Sep 28, 2023
Examiner: BURLESON, MICHAEL L
Art Unit: 2681
Tech Center: 2600 — Communications
Assignee: Frontwave Imaging S.L.
OA Round: 1 (Non-Final)
Grant Probability: 75% (Favorable)
Expected OA Rounds: 1-2
Time to Grant: 2y 10m
With Interview: 68%

Examiner Intelligence

Career Allow Rate: 75% (365 granted / 489 resolved; +12.6% vs TC avg), above average
Interview Lift: -6.1% among resolved cases with interview (minimal)
Avg Prosecution: 2y 10m (typical timeline)
Currently Pending: 36
Total Applications: 525 across all art units (career history)
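The headline figures reconcile with the raw counts. A quick arithmetic check; note that applying the interview lift additively is my assumption, the dashboard may derive the 68% differently:

```python
# Reproduce the dashboard's headline examiner statistics from raw counts.
granted, resolved = 365, 489

career_allow = granted / resolved * 100           # percent
print(f"career allow rate: {career_allow:.1f}%")  # 74.6%, displayed as 75%

# Applying the reported -6.1% interview lift additively (an assumption
# about how the dashboard derives it) lands near the displayed 68%.
with_interview = career_allow - 6.1
print(f"with interview: {with_interview:.1f}%")   # 68.5%

# The +12.6% "vs TC avg" figure implies a Tech Center average of about:
print(f"implied TC average: {career_allow - 12.6:.1f}%")  # 62.0%
```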

Statute-Specific Performance

§101: 12.1% (-27.9% vs TC avg)
§103: 55.2% (+15.2% vs TC avg)
§102: 21.8% (-18.2% vs TC avg)
§112: 8.3% (-31.7% vs TC avg)
Tech Center average shown for comparison is an estimate • Based on career data from 489 resolved cases
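The four per-statute deltas are mutually consistent: each allow rate minus its quoted "vs TC avg" delta recovers the same Tech Center baseline. A small consistency check (the 40% baseline is derived here from the quoted figures, not stated in the source):

```python
# Each statute's allow rate and its "vs TC avg" delta should imply the
# same Tech Center baseline for the comparison to be coherent.
per_statute = {
    "§101": (12.1, -27.9),
    "§103": (55.2, +15.2),
    "§102": (21.8, -18.2),
    "§112": (8.3, -31.7),
}
implied = {s: round(rate - delta, 1) for s, (rate, delta) in per_statute.items()}
print(implied)  # every statute implies the same 40.0% baseline
assert len(set(implied.values())) == 1
```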

Office Action

Rejections: §101, §103
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Information Disclosure Statement

The information disclosure statements (IDS) submitted on 09/28/23 and 12/28/23 were filed. The submissions are in compliance with the provisions of 37 CFR 1.97. Accordingly, the information disclosure statements are being considered by the examiner.

Claim Rejections - 35 USC § 101

35 U.S.C. 101 reads as follows: Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.

The text of those sections of Title 35, U.S. Code not included in this action can be found in a prior Office action.

Claim 35 is rejected under 35 U.S.C. 101 because claim 35 recites a program. The claim does not fall within at least one of the four categories of patent eligible subject matter, as it covers both non-statutory subject matter and statutory subject matter. In an effort to assist the Applicant in overcoming a rejection or potential rejection under 35 U.S.C. 101 in this situation, the Examiner suggests the following approach: a claim drawn to such a computer readable storage medium storing a program that covers both transitory and non-transitory embodiments may be amended to narrow the claim to cover only statutory embodiments, and thereby avoid a rejection under 35 U.S.C. 101, by adding the limitation "non-transitory" to the claim. The Examiner respectfully suggests changing the claim to recite "A non-transitory computer readable medium storing instructions that causes a computer...".

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C.
102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action: A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1-3, 6-8, 10, 12, 15, 16, 19-21, 23, 25, 30, 31, 34, 35 are rejected under 35 U.S.C. 103 as being unpatentable over Liu et al (US 2021/0169432) in view of Anirudh et al (US 2020/0372308).

Regarding claim 1, Liu et al teaches a method of generating tomographic medical image data representative of at least a part of a body of a subject, the method intended for non-invasive imaging of regions of the body (projection data may include raw data generated by the medical device 110 (paragraph 0045); the medical device 110 may be a non-invasive biomedical medical imaging device for disease diagnostic or research purposes; medical device 110 may include a single modality scanner and/or a multi-modality scanner.
The single modality scanner may include, for example, a computed tomography (CT) scanner (paragraph 0046)), and the method comprising the steps of: providing a tomographic observed data set derived from a tomographic measurement of the at least a part of the body of the subject, the tomographic observed data set comprising a plurality of observed data values (the CM system need only perform one set of learning iterations for each observed image or collection of observed images (paragraph 0018); the medical image data relating to the subject may include projection data, one or more images of the subject (tomographic observed data set); the projection data may include raw data generated by the medical device 110 by scanning the subject and/or data generated by a forward projection on an image of the subject (paragraph 0045)); d) generating, utilising the one or more model coefficients, a predicted tomographic data set comprising a plurality of predicted data values representative of at least one physical parameter (the preliminary model may extract one or more image features (e.g., a low-level feature (e.g., an edge feature, a texture feature), a high-level feature (e.g., a semantic feature), or a complicated feature (e.g., a deep hierarchical feature)) of the specific sample image data.
Based on the extracted image features, the preliminary model may determine a predicted output (i.e., a predicted body region) of the specific group of the training samples (paragraph 0097)); g) utilising the updated generative model to generate tomographic medical image data representative of at least a part of the body of the subject for medical analysis (parameter values of the preliminary model may be adjusted and/or updated in order to decrease the value of the cost function (i.e., the difference between the predicted body region and the sample body region) to smaller than the threshold, and an intermediate model may be generated (paragraph 0100)).

Liu et al fails to teach: b) providing a generative model comprising one or more latent parameters representative of statistical behaviour of spatial structures of one or more reconstructed medical images; c) generating, utilising the generative model and from the one or more latent parameters, a spatial model having a plurality of model coefficients; e) modifying, utilising a gradient-based method, one or more objective functions operable to compare the observed and predicted data values by modifying one or more of the latent parameters to generate updated latent parameters; f) updating the generative model utilising the updated latent parameters to produce an updated generative model; wherein the coefficients of the spatial model define a spatial distribution of the at least one physical parameter which are then used to define values of one or more image elements of a reconstructed image.

Anirudh et al teaches b) providing a generative model comprising one or more latent parameters representative of statistical behaviour of spatial structures of one or more reconstructed medical images (the CM system selects an initial latent vector to begin the alternating optimization.
Depending on the initial latent vector that is selected, there may be large variabilities in convergence behavior to the choice of the seed (paragraph 0035); the component invokes a learn latent vectors component to update the latent vectors; in decision block 305, if the overall termination criterion is satisfied, then the component completes, indicating the modeled images as the uncorrupted versions of the observed images (reconstructed images) (paragraph 0038); the component generates modeled images by applying the generative model to the latent vectors (paragraph 0042). Note: the latent vector represents convergence behavior of the seed, which reads on a latent parameter representative of statistical behavior in recreating uncorrupted versions of observed images (reconstructed images)); c) generating, utilising the generative model and from the one or more latent parameters, a spatial model having a plurality of model coefficients (applies the generator to the latent vectors z to generate modeled images custom-character(z). The CM system applies the CMN to the modeled images custom-character(z) to generate new corrupted images (paragraph 0016). The CM system may employ a generator trained using a GAN based on training images (real images) to generate uncorrupted images from latent vectors (paragraph 0017); the component applies the generator to the latent vector for the indexed observed image to generate a modeled image for the indexed observed image and loops to block 401 (paragraph 0039). (Note: the use of generated modeled images, which read on the generative model, and latent vectors z, which read on latent parameters)); e) modifying, utilising a gradient-based method, one or more objective functions operable to compare the observed and predicted data values by modifying one or more of the latent parameters to generate updated latent parameters (the learn latent vector component 500 is invoked to learn the latent vectors given a trained CMN.
The learned latent vectors are used to generate the modeled images that are used when training the CMN during the next overall iteration; the component applies the generator to a latent vector for the selected observed image to generate a modeled image; in block 504, the component applies the CMN to the modeled image to generate a corrupted image and then loops (paragraph 0040)); f) updating the generative model utilising the updated latent parameters to produce an updated generative model (the learned latent vectors are used to generate the modeled images that are used when training the CMN during the next overall iteration (updating); the component applies the generator to a latent vector for the selected observed image to generate a modeled image; in block 504, the component applies the CMN to the modeled image to generate a corrupted image and then loops (paragraph 0040)); wherein the coefficients of the spatial model define a spatial distribution of the at least one physical parameter which are then used to define values of one or more image elements of a reconstructed image (the generator can be used to generate a "real" image (reconstructed image) given a latent vector (paragraph 0017); the corruption function ƒ can correspond to a broad class of functions including a systematic distributional shift (e.g., changes to pixel intensities) (spatial distribution of a physical parameter; the physical parameter would be the intensity of the pixel) (paragraph 0020)).
Therefore, it would have been obvious to a person of ordinary skill in the art to modify Liu et al to include: b) providing a generative model comprising one or more latent parameters representative of statistical behaviour of spatial structures of one or more reconstructed medical images; c) generating, utilising the generative model and from the one or more latent parameters, a spatial model having a plurality of model coefficients; e) modifying, utilising a gradient-based method, one or more objective functions operable to compare the observed and predicted data values by modifying one or more of the latent parameters to generate updated latent parameters; f) updating the generative model utilising the updated latent parameters to produce an updated generative model; wherein the coefficients of the spatial model define a spatial distribution of the at least one physical parameter which are then used to define values of one or more image elements of a reconstructed image. The reason for doing so would be to train models to accurately analyze medical images.

Regarding claim 2, Liu et al in view of Anirudh et al teaches wherein step g) further comprises h) generating one or more reconstructed medical images representative of the at least a part of the body of the subject (Liu et al: the medical device 110 may generate or provide medical image data related to a subject via scanning the subject. For example, the subject may include a specific portion of a body, such as the head, the thorax, the abdomen, or the like, or a combination thereof (paragraph 0045); image data (e.g., a PET image) of the subject (part of body) may be reconstructed based on the PET projection data using a PET image reconstruction technique (paragraph 0088)).
Regarding claim 3, Liu et al in view of Anirudh et al teaches wherein the or each reconstructed medical image comprises a plurality of image elements representative of values of at least one physical parameter (Liu et al: the medical device 110 may generate or provide medical image data related to a subject via scanning the subject. For example, the subject may include a specific portion of a body, such as the head, the thorax, the abdomen, or the like, or a combination thereof (paragraph 0045); the at least one reconstruction parameter corresponding to the at least one scan area of the subject may be automatically determined, and the image of the subject may further be reconstructed based on the at least one reconstruction parameter (paragraph 0121)).

Regarding claim 6, Liu et al in view of Anirudh et al teaches wherein a plurality of likely reconstructed medical images is generated, the range of reconstructed medical images being indicative of uncertainty (Liu et al: the medical system 100 may include modules and/or components for performing imaging, treatment, and/or related analysis. In some embodiments, the medical image data relating to the subject may include projection data, one or more images of the subject, etc. The projection data may include raw data generated by the medical device 110 by scanning the subject and/or data generated by a forward projection on an image of the subject (paragraph 0045). The image data (e.g., a PET image) of the subject may be reconstructed based on the PET projection data using a PET image reconstruction technique (paragraph 0088). Note: the projection data, from the medical device, is reconstructed.
The projection data is read as likely medical images since it is reconstructed images of the desired subject.)

Regarding claim 7, Liu et al in view of Anirudh et al teaches wherein step h) further comprises generating an image representative of the difference between the latent parameters provided in step b) and the updated latent parameters (Anirudh et al: when the overall termination criterion is satisfied, the images generated by applying the generator to the latent vectors as last updated represent uncorrupted versions of the observed images (paragraph 0013)). Therefore, it would have been obvious to a person of ordinary skill in the art to modify Liu et al to include: wherein step h) further comprises generating an image representative of the difference between the latent parameters provided in step b) and the updated latent parameters. The reason for doing so would be to train models to accurately analyze medical images.

Regarding claim 8, Liu et al in view of Anirudh et al teaches wherein step g) comprises generating the tomographic medical image data from a plurality of model coefficients of a spatial model generated from the updated generative model (Anirudh et al: the component applies the generator to a latent vector for the selected observed image to generate a modeled image. In block 504, the component applies the CMN to the modeled image to generate a corrupted image and then loops at block 501. In block 505, the component calculates the loss function. In block 506, the component increments an index to index through the observed images. In decision block 507, if all the observed images have already been selected, then the component continues at block 509, else the component continues at block 508. In block 508, the component adjusts the latent vectors for the indexed observed images based on a gradient of the loss function and loops to block 506 (paragraph 0040)).
Therefore, it would have been obvious to a person of ordinary skill in the art to modify Liu et al to include: wherein step g) comprises generating the tomographic medical image data from a plurality of model coefficients of a spatial model generated from the updated generative model. The reason for doing so would be to train models to accurately analyze medical images.

Regarding claim 10, Liu et al in view of Anirudh et al teaches wherein step d) comprises generating the predicted tomographic data set utilising a physics-based model defining a numerical simulation of known physics (Liu et al: based on the extracted image features, the preliminary model may determine a predicted output (i.e., a predicted body region) of the specific group of the training samples. The predicted output (i.e., the predicted body region) of the specific group of training samples may then be compared with the sample body region of the specific group of training samples based on a cost function (paragraph 0097)).

Regarding claim 12, Liu et al in view of Anirudh et al teaches wherein the physics-based model comprises a machine learning component (Liu et al: the preliminary model may determine a predicted output (i.e., a predicted body region) of the specific group of the training samples (paragraph 0097)).

Regarding claim 15, Liu et al in view of Anirudh et al teaches wherein the generative model is operable to perform unsupervised machine learning (Liu et al: the preliminary model may include a generative adversarial network (GAN) model, etc. The training of the preliminary model may be implemented according to a machine learning algorithm; the machine learning algorithm used to generate the identification model may be an unsupervised learning algorithm, or the like (paragraph 0096)).

Regarding claim 16, Liu et al in view of Anirudh et al teaches wherein the generative model comprises a neural network (Liu et al: the preliminary model may include an artificial neural network (ANN) (paragraph 0096)).
Regarding claim 19, Liu et al in view of Anirudh et al teaches wherein step b) further comprises training the generative model utilising prior information comprising one or more sample data sets (Liu et al: to train an identification model, a plurality of groups of training samples may be used. A group of the plurality of groups of training samples may include sample image data of a sample subject and sample region(s) of the sample subject corresponding to the sample image data (paragraph 0096)).

Regarding claim 20, Liu et al in view of Anirudh et al teaches wherein the or each sample data set comprises spatial structures representative of one or more reconstructed medical images (Liu et al: for images corresponding to different scan areas of the subject, different reconstruction parameters may be applied to achieve a good image quality (paragraph 0121)).

Regarding claim 21, Liu et al in view of Anirudh et al teaches wherein the or each sample data set comprises one or more ground truth annotations and/or one or more natural images (Liu et al: projection data may include raw data (natural images) generated by the medical device 110 by scanning the subject and/or data generated by a forward projection on an image of the subject (paragraph 0045)).

Regarding claim 23, Liu et al in view of Anirudh et al teaches wherein the tomographic observed data set comprises ultrasound image data of the subject acquired from an ultrasound tomographic measurement (Liu et al: the medical device 110 may include a single modality scanner and/or a multi-modality scanner. The single modality scanner may include, for example, an ultrasound scanner (paragraphs 0045-0046). Note: the medical device 110 generates medical image data; since the medical device 110 can be an ultrasound scanner, the medical images would read on ultrasound image data of the subject.)
Regarding claim 25, Liu et al in view of Anirudh et al teaches wherein the tomographic observed data set comprises X-ray computed tomography image data of the subject acquired from an X-ray computed tomographic measurement (Liu et al: the medical device 110 may include a single modality scanner and/or a multi-modality scanner. The single modality scanner may include, for example, an X-ray scanner (paragraphs 0045-0046). Note: the medical device 110 generates medical image data; since the medical device 110 can be an X-ray scanner, the medical images would read on X-ray computed tomography image data of the subject.)

Regarding claim 30, Liu et al in view of Anirudh et al teaches wherein step e) utilises automatic differentiation or adjoint-state methods (Liu et al: posture parameter(s) may include a position (e.g., a coordinate in a coordinate system) of a portion (e.g., the head, the neck, a hand, a leg, and/or a foot) of the subject, a joint angle of a joint (e.g., a shoulder joint, a knee joint, an elbow joint, and/or an ankle joint) of the subject (paragraph 0128)).

Regarding claim 31, Liu et al in view of Anirudh et al teaches wherein the model coefficients of the spatial model are representative of the spatial distribution of at least one physical model parameter (Liu et al: a feature point (model coefficient) may correspond to a point of interest (POI) of the subject, such as an anatomical joint (e.g., a shoulder joint, a knee joint, an elbow joint, an ankle joint, a wrist joint) or another specific physical point in a body region (paragraph 0117)).
Regarding claim 34, Liu et al in view of Anirudh et al teaches a computer system comprising a processing device configured to perform the method of claim 1 (Liu et al: fig. 1).

Regarding claim 35, Liu et al in view of Anirudh et al teaches a computer readable medium comprising instructions configured when executed to perform the method of any one of claims 1 to 33 (Liu et al: paragraph 0035).

Claim 13 is rejected under 35 U.S.C. 103 as being unpatentable over Liu et al (US 2021/0169432) in view of Anirudh et al (US 2020/0372308), further in view of Mandlekar et al (US 2022/0055689).

Regarding claim 13, Liu et al in view of Anirudh et al teaches all of the limitations of claim 1. Liu et al in view of Anirudh et al fails to teach wherein the latent parameters of the generative model follow a Gaussian distribution. Mandlekar et al teaches wherein the latent parameters of the generative model follow a Gaussian distribution (a conditional generative model that is trained on pairs of current and future observations (s.sub.t, s.sub.t+T) sampled from trajectories in the dataset (lines 5-7 in Algorithm 1); an encoder maps a current and future observation to the parameters of a latent Gaussian distribution (paragraph 0085)). Therefore, it would have been obvious to a person of ordinary skill in the art to modify Liu et al in view of Anirudh et al to include: wherein the latent parameters of the generative model follow a Gaussian distribution. The reason for doing so would be to train models to accurately analyze medical images.

Allowable Subject Matter

Claims 5, 14, 27 are objected to as being dependent upon a rejected base claim, but would be allowable if rewritten in independent form including all of the limitations of the base claim and any intervening claims.

Conclusion

Any inquiry concerning this communication or earlier communications from the examiner should be directed to MICHAEL L BURLESON whose telephone number is (571) 272-7460.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Akwasi Sarpong, can be reached at (571) 270-3438. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

Michael Burleson
Patent Examiner
Art Unit 2683
February 7, 2026

/MICHAEL BURLESON/
/AKWASI M SARPONG/ SPE, Art Unit 2681
2/11/26
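Stripped of claim language, the method that the §103 rejection maps onto Liu and Anirudh (steps b through g of claim 1) is latent-space optimization: generate image coefficients from latent parameters via a generative model, forward-model them into a predicted tomographic data set, compare against the observed data set, and update the latents with a gradient-based method. The sketch below is a minimal illustration of that loop only; the linear generator `G` and projector `A` are toy stand-ins of my own, not the application's actual generative network or scanner physics:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-ins (illustrative assumptions, not from the application or the
# cited art): a fixed linear "generative model" G mapping 8 latent
# parameters to a 64-pixel image, and a linear forward projector A mapping
# that image to 32 tomographic measurements.
G = rng.normal(size=(64, 8))
A = rng.normal(size=(32, 64))

def generate(z):          # steps b/c: latents -> spatial model coefficients
    return G @ z

def predict(image):       # step d: coefficients -> predicted data set
    return A @ image

# Synthetic "observed" data produced from a hidden ground-truth latent.
z_true = rng.normal(size=8)
observed = predict(generate(z_true))

# Steps e/f: gradient descent on the latents against a least-squares
# objective comparing predicted and observed data values.
z = np.zeros(8)
lr = 1.0 / np.linalg.norm(A @ G, 2) ** 2   # 1/L step size keeps descent stable
for _ in range(5000):
    residual = predict(generate(z)) - observed
    z -= lr * (G.T @ (A.T @ residual))     # analytic gradient w.r.t. z

# Step g: the updated latents now reproduce the observed data closely.
rel_err = np.linalg.norm(predict(generate(z)) - observed) / np.linalg.norm(observed)
print(f"relative data misfit: {rel_err:.2e}")
```

In the claimed method the gradient would instead come from automatic differentiation or adjoint-state methods through a physics-based forward model (claims 10 and 30), and repeating the loop from multiple latent initializations would yield the uncertainty-indicating range of reconstructions recited in claim 6.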

Prosecution Timeline

Sep 28, 2023
Application Filed
Feb 10, 2026
Non-Final Rejection — §101, §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12603965
PRINTING DEVICE SETTING EXPANDED REGION AND GENERATING PATCH CHART PRINT DATA BASED ON PIXELS IN EXPANDED REGION
2y 5m to grant • Granted Apr 14, 2026
Patent 12585826
DOCUMENT AUTHENTICATION USING ELECTROMAGNETIC SOURCES AND SENSORS
2y 5m to grant • Granted Mar 24, 2026
Patent 12566125
SEQUENCER FOCUS QUALITY METRICS AND FOCUS TRACKING FOR PERIODICALLY PATTERNED SURFACES
2y 5m to grant • Granted Mar 03, 2026
Patent 12561548
SYSTEM SIMULATING A DECISIONAL PROCESS IN A MAMMAL BRAIN ABOUT MOTIONS OF A VISUALLY OBSERVED BODY
2y 5m to grant • Granted Feb 24, 2026
Patent 12562549
LIGHT EMITTING ELEMENT, LIGHT SOURCE DEVICE, DISPLAY DEVICE, HEAD-MOUNTED DISPLAY, AND BIOLOGICAL INFORMATION ACQUISITION APPARATUS
2y 5m to grant • Granted Feb 24, 2026
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 1-2
Grant Probability: 75%
With Interview (-6.1%): 68%
Median Time to Grant: 2y 10m
PTA Risk: Low
Based on 489 resolved cases by this examiner. Grant probability derived from career allow rate.
