Prosecution Insights
Last updated: April 19, 2026
Application No. 18/555,498

DETERMINING A CONFIDENCE INDICATION FOR DEEP LEARNING IMAGE RECONSTRUCTION IN COMPUTED TOMOGRAPHY

Status: Non-Final OA (§103)
Filed: Oct 13, 2023
Examiner: WAIT, CHRISTOPHER
Art Unit: 2683
Tech Center: 2600 — Communications
Assignee: Prismatic Sensors AB
OA Round: 1 (Non-Final)

Grant probability: 76% (Favorable)
Expected OA rounds: 1-2
Expected time to grant: 2y 4m
Grant probability with interview: 90%

Examiner Intelligence

Career allow rate: 76% — above average (303 granted / 399 resolved; +13.9% vs TC avg)
Interview lift: +13.6% for resolved cases with an interview (moderate lift)
Typical timeline: 2y 4m average prosecution; 12 applications currently pending
Career history: 411 total applications across all art units

Statute-Specific Performance

§101: 11.2% (-28.8% vs TC avg)
§103: 43.4% (+3.4% vs TC avg)
§102: 23.3% (-16.7% vs TC avg)
§112: 17.7% (-22.3% vs TC avg)

Comparisons are against the Tech Center average estimate; based on career data from 399 resolved cases.
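The headline figures above can be reproduced from the raw counts shown in the panels. A quick sketch; the Tech Center averages below are back-computed from the stated deltas (rate minus delta), so they are implied values for illustration, not figures taken from USPTO data:

```python
# Reproduce the dashboard's headline statistics from the counts shown above.
granted, resolved = 303, 399
career_allow_rate = granted / resolved
print(f"Career allow rate: {career_allow_rate:.1%}")  # rounds to the 76% shown

# Statute-specific performance figures and their deltas vs the Tech Center
# average; each TC average here is implied (rate - delta), not sourced.
statutes = {"§101": (11.2, -28.8), "§103": (43.4, +3.4),
            "§102": (23.3, -16.7), "§112": (17.7, -22.3)}
for name, (rate, delta) in statutes.items():
    tc_avg = rate - delta
    print(f"{name}: examiner {rate:.1f}% vs TC avg ~{tc_avg:.1f}%")
```

The implied TC averages (for example, ~40.0% for §103) are only as reliable as the rounded deltas printed on the chart.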

Office Action (§103)

DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA. By Preliminary Amendment of 10/13/23: Claims 1-23 are currently amended. Claims 24-27 are canceled.

Information Disclosure Statement

The information disclosure statements (IDS) submitted on 7/25/24 & 4/15/25 are in compliance with the provisions of 37 CFR 1.97. Accordingly, the information disclosure statements are being considered by the examiner.

Claim Interpretation

The following is a quotation of 35 U.S.C. 112(f):

(f) Element in Claim for a Combination. – An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.

The following is a quotation of pre-AIA 35 U.S.C. 112, sixth paragraph:

An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.

The claims in this application are given their broadest reasonable interpretation using the plain meaning of the claim language in light of the specification as it would be understood by one of ordinary skill in the art. The broadest reasonable interpretation of a claim element (also commonly referred to as a claim limitation) is limited by the description in the specification when 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is invoked. As explained in MPEP § 2181, subsection I, claim limitations that meet the following three-prong test will be interpreted under 35 U.S.C.
112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph: (A) the claim limitation uses the term “means” or “step” or a term used as a substitute for “means” that is a generic placeholder (also called a nonce term or a non-structural term having no specific structural meaning) for performing the claimed function; (B) the term “means” or “step” or the generic placeholder is modified by functional language, typically, but not always linked by the transition word “for” (e.g., “means for”) or another linking word or phrase, such as “configured to” or “so that”; and (C) the term “means” or “step” or the generic placeholder is not modified by sufficient structure, material, or acts for performing the claimed function. Use of the word “means” (or “step”) in a claim with functional language creates a rebuttable presumption that the claim limitation is to be treated in accordance with 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. The presumption that the claim limitation is interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is rebutted when the claim limitation recites sufficient structure, material, or acts to entirely perform the recited function. Absence of the word “means” (or “step”) in a claim creates a rebuttable presumption that the claim limitation is not to be treated in accordance with 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. The presumption that the claim limitation is not interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is rebutted when the claim limitation recites function without reciting sufficient structure, material or acts to entirely perform the recited function. Claim limitations in this application that use the word “means” (or “step”) are being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, except as otherwise indicated in an Office action. 
Conversely, claim limitations in this application that do not use the word “means” (or “step”) are not being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, except as otherwise indicated in an Office action. This application includes one or more claim limitations that do not use the word “means,” but are nonetheless being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, because the claim limitation(s) uses a generic placeholder that is coupled with functional language without reciting sufficient structure to perform the recited function and the generic placeholder is not preceded by a structural modifier. Such claim limitation(s) is/are: “System is configured to”, “System is further configured to”, “System is also configured to” in claims 18-23. Because this/these claim limitation(s) is/are being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, it/they is/are being interpreted to cover the corresponding structure described in the specification as performing the claimed function, and equivalents thereof. If applicant does not intend to have this/these limitation(s) interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, applicant may: (1) amend the claim limitation(s) to avoid it/them being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph (e.g., by reciting sufficient structure to perform the claimed function); or (2) present a sufficient showing that the claim limitation(s) recite(s) sufficient structure to perform the claimed function so as to avoid it/them being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C.
103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.

Claim(s) 1-23 is/are rejected under 35 U.S.C. 103 as being unpatentable over US PG Pub 2019/0325620 to Adler et al., in view of US PG Pub 2017/0270692 to Gronberg et al.

Regarding claim 1. Adler discloses a method for determining one or more confidence indications for machine learning image reconstruction in computed tomography (CT) (Abstract), the method comprising: acquiring x-ray data (“Image processor 114 may communicate with database 124 to read images into memory 116 or store images from memory 116 to database 124. For example, the database 124 may be configured to store a plurality of images (e.g., 3D MRI, 4D MRI, 2D MRI slice images, CT images, 2D Fluoroscopy images, X-ray images, raw data from MR scans or CT scans, Digital Imaging and Communications in Medicine (DIMCOM) data, etc.)
that the database 124 received from image acquisition device 132”, paragraph 36); performing material decomposition-based image reconstruction by machine learning image reconstruction to generate at least one reconstructed basis image or image feature thereof based on the acquired x-ray data (“At 302, an initial image reconstruction can be performed upon the imaging data obtained at 300. In an example, the initial image reconstruction can use a technique that can yield an approximation to the conditional mean (e.g., such as using an approach based on statistical or machine learning, or by using a MAP estimator or the like). This initial reconstructed image 304 can be referred to as an “initial image” or as “the mean image”.”, paragraph 57; note “initial image”= “basis image”); processing the x-ray data based on at least one machine learning system to generate a representation of a posterior probability distribution of the at least one reconstructed basis image or image feature thereof (“The present inventors have recognized, among other things, a need to help improve image reconstruction or image segmentation, such as by using a statistical or machine learning technique for sampling a posterior distribution such as for providing uncertainty information associated with a reconstructed image”, paragraph 54); and generating one or more confidence indications for the at least one reconstructed basis image, or at least one derivative image originating from the at least one reconstructed basis image, or image feature of the at least one reconstructed basis image or the at least one derivative image, based on the representation of a posterior probability distribution (“generate posterior distribution simulated images for providing at least one indication of an image error associated with the initial image”, claim 1; “Discriminator 308 uses its convolutional network and the initial image at 304 to distinguish between the posterior distribution simulated images at 312 and 
the deemed true image 314 for training a statistical learning model for use by the convolutional network of the Generator 306 in then later generating posterior distribution simulated images at 312 at run-time (e.g., after training) such as for determining an image error or uncertainty associated with a subsequently obtained at least one reconstructed initial image at 304”, paragraph 60); wherein the step of generating one or more confidence indications comprises determining an uncertainty or confidence map of individual basis material images and also covariance between different basis material images, allowing the uncertainty or confidence map to be propagated to yield an uncertainty map for a derived image (“At 302, an initial image reconstruction can be performed upon the imaging data obtained at 300. In an example, the initial image reconstruction can use a technique that can yield an approximation to the conditional mean (e.g., such as using an approach based on statistical or machine learning, or by using a MAP estimator or the like). This initial reconstructed image 304 can be referred to as an “initial image” or as “the mean image””, paragraph 57; “FIG. 5A shows an example of a deemed true image 314 of an example of an object. FIG. 5B shows an example of a corresponding mean reconstructed image 304, such as can be obtained by applying a statistical learning reconstruction technique, a MAP estimator reconstruction technique”, paragraph 62). Adler while providing for x-rays does not disclose energy-resolved x-rays. However, Gronberg, in the same area of CT scanning, teaches energy-resolved x-rays. Therefore, it would have been obvious to a person with ordinary skill in the art before the effective filing date of the claimed invention to have modified Adler’s CT scanning to include: energy-resolved x-rays. 
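The sample-based uncertainty estimation the rejection cites from Adler (posterior mean and pointwise variance over generated samples), together with the covariance-and-propagation idea recited in claim 1's wherein clause, can be sketched numerically. A minimal NumPy sketch under the assumption that posterior samples of two basis-material images are already available from a trained generator; all array names, dimensions, and values here are illustrative stand-ins, not drawn from either reference:

```python
import numpy as np

# Simulated posterior samples standing in for the generator output in
# Adler's scheme: n samples of two correlated "basis material" images
# (e.g. water- and iodine-equivalent), each H x W.
rng = np.random.default_rng(0)
n, H, W = 200, 8, 8
water = rng.normal(1.0, 0.05, size=(n, H, W))
iodine = 0.5 * water + rng.normal(0.2, 0.02, size=(n, H, W))

# Per-pixel posterior mean and (pointwise) variance of each basis image —
# the sample-based estimators quoted from Adler, paragraph 69.
water_mean = water.mean(axis=0)
water_var = water.var(axis=0)
iodine_mean = iodine.mean(axis=0)
iodine_var = iodine.var(axis=0)

# Per-pixel covariance between the two basis images.
cov_wi = ((water - water_mean) * (iodine - iodine_mean)).mean(axis=0)

# Propagate to a derived image d = a*water + b*iodine (e.g. a virtual
# monoenergetic image): Var(d) = a^2 Var(w) + b^2 Var(i) + 2ab Cov(w, i).
a, b = 0.7, 0.3
derived_var = a**2 * water_var + b**2 * iodine_var + 2 * a * b * cov_wi

# Cross-check: for a linear combination this propagation is an exact
# identity, so it matches the empirical variance of the derived samples.
empirical_var = (a * water + b * iodine).var(axis=0)
assert np.allclose(derived_var, empirical_var)

# A simple confidence map in the spirit of claim 9: flag pixels whose
# posterior standard deviation falls below a chosen (arbitrary) threshold.
confidence_map = np.sqrt(derived_var) < 0.05
```

The cross-check works because the propagation formula for a linear combination is an algebraic identity over the same samples; for a nonlinear derived image one would instead re-derive each posterior sample and take its empirical variance.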
It would have been obvious to a person with ordinary skill in the art before the effective filing date of the claimed invention to have modified Adler’s CT scanning by the teaching of Gronberg because of the following reasons: (a) efficient high-quality image reconstruction and/or quantification can be achieved. By way of example, the apparent trade-off between improved tissue quantification and increased image noise can be overcome (paragraph 38, Gronberg); and (b) to help provide uncertainty information about a mean reconstruction image. Such uncertainty information can be useful to help understand or even visually modify the mean reconstruction image, as taught by Adler in the Abstract.

Regarding claim 2. Adler discloses wherein the machine learning image reconstruction is deep learning image reconstruction, and the at least one machine learning system includes at least one neural network (“The present techniques can include using one or more generative models in statistical or machine learning, such as can be combined with other one or more other techniques in statistical or machine learning for solving inverse problems (such as image reconstruction), such as to create a neural network that can allow quick or efficient sampling from the posterior distribution, such as can be useful to assess uncertainty associated with the reconstructed image”, paragraph 10).

Regarding claim 3. Adler discloses wherein the representation of a posterior probability distribution includes at least one of a mean, a variance, a covariance, a standard deviation, a skewness, and a kurtosis (“The above-described Deep Posterior Sampling approach for quantifying uncertainty in image reconstruction can use generative models from machine learning to create random samples si from the probability distribution given by P(x=x|y=y). Using such generated random samples, a wide range of one or more estimators can be evaluated.
For example, according to the law of large numbers, the posterior mean can be approximated according to

$\mathbb{E}[x \mid y = y] \approx \frac{1}{n}\sum_{i=1}^{n} s_i$

Likewise, the posterior (pointwise) variance is given by

$\mathbb{E}\!\left[(x - \mathbb{E}[x \mid y = y])^2 \,\middle|\, y = y\right] \approx \frac{1}{n}\sum_{j=1}^{n}\left(s_j - \frac{1}{n}\sum_{i=1}^{n} s_i\right)^2$

Such samples can also be used to answer one or more questions, such as, for example, “what is the probability of there being a tumor at a particular location?” by checking how commonly the tumor is present in the various generated samples”, paragraph 69).

Regarding claim 4. Adler discloses wherein the one or more confidence indications includes an error estimate or measure of statistical uncertainty for at least one point in the at least one reconstructed basis image, and/or an error estimate or measure of statistical uncertainty for at least one image measurement derivable from the at least one reconstructed basis image (“generate posterior distribution simulated images for providing at least one indication of an image error associated with the initial image”, claim 1; “Discriminator 308 uses its convolutional network and the initial image at 304 to distinguish between the posterior distribution simulated images at 312 and the deemed true image 314 for training a statistical learning model for use by the convolutional network of the Generator 306 in then later generating posterior distribution simulated images at 312 at run-time (e.g., after training) such as for determining an image error or uncertainty associated with a subsequently obtained at least one reconstructed initial image at 304”, paragraph 60).

Regarding claim 5.
Adler discloses wherein the error estimate or measure of statistical uncertainty includes at least one of an upper bound for an error, a lower bound for an error, a standard deviation, a variance, or a mean absolute error (“The above-described Deep Posterior Sampling approach for quantifying uncertainty in image reconstruction can use generative models from machine learning to create random samples si from the probability distribution given by P(x=x|y=y). Using such generated random samples, a wide range of one or more estimators can be evaluated. For example, according to the law of large numbers, the posterior mean can be approximated according to

$\mathbb{E}[x \mid y = y] \approx \frac{1}{n}\sum_{i=1}^{n} s_i$

Likewise, the posterior (pointwise) variance is given by

$\mathbb{E}\!\left[(x - \mathbb{E}[x \mid y = y])^2 \,\middle|\, y = y\right] \approx \frac{1}{n}\sum_{j=1}^{n}\left(s_j - \frac{1}{n}\sum_{i=1}^{n} s_i\right)^2$

Such samples can also be used to answer one or more questions, such as, for example, “what is the probability of there being a tumor at a particular location?” by checking how commonly the tumor is present in the various generated samples”, paragraph 69).

Regarding claim 6. Adler discloses wherein the at least one image measurement comprises at least one of the following: a dimensional measure of a feature, an area, a volume, a degree of inhomogeneity, a measure of shape or irregularity, a measure of composition, and a measure of concentration of a substance (“FIGS. 5A, 5B, 5C, and 5D illustrate examples of how the posterior distribution simulated images at 312 can be used to assess uncertainty information associated with the mean image 304. FIG. 5A shows an example of a deemed true image 314 of an example of an object. FIG. 5B shows an example of a corresponding mean reconstructed image 304, such as can be obtained by applying a statistical learning reconstruction technique, a MAP estimator reconstruction technique, or another reconstruction technique to acquired imaging data 300 of the object. FIG.
5C shows an example of one of several posterior distribution simulated images at 312, such as generated as described herein using the Generator 306 with the mean reconstructed image 304 and the random noise values at 310 as inputs to the Generator 306. Each of the posterior distribution simulated images at 312 provides an alternative representation of the mean reconstruction image 304 shown in FIG. 5B, taking into account a degree of uncertainty associated with the introduction of the random noise values at 310 that were provided as inputs to the Generator 306. For example, by stepping through ones of the posterior distribution simulated images at 312 (an example of one of which is shown in FIG. 5C), a visual representation of the uncertainty associated with the mean reconstructed image 304 can help guide interpretation of the mean reconstructed image 304. For example, if the dark spot within the interior of the deemed true image 314 in FIG. 5A represents a tumor within an object such as a human subject, the various posterior distribution simulated images 312 in FIG. 5C can provide alternative representations accounting for noise-induced uncertainties. This can allow the user to better visually interpret the mean reconstruction image 304 shown in FIG. 5B, such as to assess whether there is a tumor fully enclosed within the interior of the image (such as shown in the true image 314 of FIG. 5A), when the mean reconstruction image 304 such as shown in FIG. 5B is unclear or ambiguous with respect to whether such a feature exists. The collection of posterior distribution simulated images 312 may provide at some samples 312 within the uncertainty distribution that may more clearly hint at the existence of such a feature, even with the ambiguity in the mean reconstruction image 304 shown in FIG. 5B”, paragraph 62). Regarding claim 7. 
Adler discloses wherein the one or more confidence indications includes one or more uncertainty maps for the at least one reconstructed basis image, or at least one derivative image originating from the at least one reconstructed basis image, or the image feature thereof (FIGS. 5A, 5B, 5C, and 5D illustrate examples of how the posterior distribution simulated images at 312 can be used to assess uncertainty information associated with the mean image 304. FIG. 5A shows an example of a deemed true image 314 of an example of an object. FIG. 5B shows an example of a corresponding mean reconstructed image 304, such as can be obtained by applying a statistical learning reconstruction technique, a MAP estimator reconstruction technique, or another reconstruction technique to acquired imaging data 300 of the object. FIG. 5C shows an example of one of several posterior distribution simulated images at 312, such as generated as described herein using the Generator 306 with the mean reconstructed image 304 and the random noise values at 310 as inputs to the Generator 306. Each of the posterior distribution simulated images at 312 provides an alternative representation of the mean reconstruction image 304 shown in FIG. 5B, taking into account a degree of uncertainty associated with the introduction of the random noise values at 310 that were provided as inputs to the Generator 306. For example, by stepping through ones of the posterior distribution simulated images at 312 (an example of one of which is shown in FIG. 5C), a visual representation of the uncertainty associated with the mean reconstructed image 304 can help guide interpretation of the mean reconstructed image 304. For example, if the dark spot within the interior of the deemed true image 314 in FIG. 5A represents a tumor within an object such as a human subject, the various posterior distribution simulated images 312 in FIG. 5C can provide alternative representations accounting for noise-induced uncertainties. 
This can allow the user to better visually interpret the mean reconstruction image 304 shown in FIG. 5B, such as to assess whether there is a tumor fully enclosed within the interior of the image (such as shown in the true image 314 of FIG. 5A), when the mean reconstruction image 304 such as shown in FIG. 5B is unclear or ambiguous with respect to whether such a feature exists. The collection of posterior distribution simulated images 312 may provide at some samples 312 within the uncertainty distribution that may more clearly hint at the existence of such a feature, even with the ambiguity in the mean reconstruction image 304 shown in FIG. 5B. Regarding claim 8. Adler discloses wherein the step of generating one or more confidence indications comprises generating a confidence map for a reconstructed material selective x-ray image for CT (FIGS. 5A, 5B, 5C, and 5D illustrate examples of how the posterior distribution simulated images at 312 can be used to assess uncertainty information associated with the mean image 304. FIG. 5A shows an example of a deemed true image 314 of an example of an object. FIG. 5B shows an example of a corresponding mean reconstructed image 304, such as can be obtained by applying a statistical learning reconstruction technique, a MAP estimator reconstruction technique, or another reconstruction technique to acquired imaging data 300 of the object. FIG. 5C shows an example of one of several posterior distribution simulated images at 312, such as generated as described herein using the Generator 306 with the mean reconstructed image 304 and the random noise values at 310 as inputs to the Generator 306. Each of the posterior distribution simulated images at 312 provides an alternative representation of the mean reconstruction image 304 shown in FIG. 5B, taking into account a degree of uncertainty associated with the introduction of the random noise values at 310 that were provided as inputs to the Generator 306. 
For example, by stepping through ones of the posterior distribution simulated images at 312 (an example of one of which is shown in FIG. 5C), a visual representation of the uncertainty associated with the mean reconstructed image 304 can help guide interpretation of the mean reconstructed image 304. For example, if the dark spot within the interior of the deemed true image 314 in FIG. 5A represents a tumor within an object such as a human subject, the various posterior distribution simulated images 312 in FIG. 5C can provide alternative representations accounting for noise-induced uncertainties. This can allow the user to better visually interpret the mean reconstruction image 304 shown in FIG. 5B, such as to assess whether there is a tumor fully enclosed within the interior of the image (such as shown in the true image 314 of FIG. 5A), when the mean reconstruction image 304 such as shown in FIG. 5B is unclear or ambiguous with respect to whether such a feature exists. The collection of posterior distribution simulated images 312 may provide at some samples 312 within the uncertainty distribution that may more clearly hint at the existence of such a feature, even with the ambiguity in the mean reconstruction image 304 shown in FIG. 5B. Regarding claim 9. Adler discloses wherein the confidence map is generated to highlight parts of the reconstructed material selective x-ray image that the machine learning image reconstruction has been able to determine with a confidence level above a given threshold (FIGS. 5A, 5B, 5C, and 5D illustrate examples of how the posterior distribution simulated images at 312 can be used to assess uncertainty information associated with the mean image 304. FIG. 5A shows an example of a deemed true image 314 of an example of an object. FIG. 
5B shows an example of a corresponding mean reconstructed image 304, such as can be obtained by applying a statistical learning reconstruction technique, a MAP estimator reconstruction technique, or another reconstruction technique to acquired imaging data 300 of the object. FIG. 5C shows an example of one of several posterior distribution simulated images at 312, such as generated as described herein using the Generator 306 with the mean reconstructed image 304 and the random noise values at 310 as inputs to the Generator 306. Each of the posterior distribution simulated images at 312 provides an alternative representation of the mean reconstruction image 304 shown in FIG. 5B, taking into account a degree of uncertainty associated with the introduction of the random noise values at 310 that were provided as inputs to the Generator 306. For example, by stepping through ones of the posterior distribution simulated images at 312 (an example of one of which is shown in FIG. 5C), a visual representation of the uncertainty associated with the mean reconstructed image 304 can help guide interpretation of the mean reconstructed image 304. For example, if the dark spot within the interior of the deemed true image 314 in FIG. 5A represents a tumor within an object such as a human subject, the various posterior distribution simulated images 312 in FIG. 5C can provide alternative representations accounting for noise-induced uncertainties. This can allow the user to better visually interpret the mean reconstruction image 304 shown in FIG. 5B, such as to assess whether there is a tumor fully enclosed within the interior of the image (such as shown in the true image 314 of FIG. 5A), when the mean reconstruction image 304 such as shown in FIG. 5B is unclear or ambiguous with respect to whether such a feature exists. 
The collection of posterior distribution simulated images 312 may provide at some samples 312 within the uncertainty distribution that may more clearly hint at the existence of such a feature, even with the ambiguity in the mean reconstruction image 304 shown in FIG. 5B. Regarding claim 10. Adler discloses wherein the step of generating one or more confidence indications comprises generating, by a neural network taking material concentration maps obtained from deep learning based material decomposition as input, one or more confidence maps (FIGS. 5A, 5B, 5C, and 5D illustrate examples of how the posterior distribution simulated images at 312 can be used to assess uncertainty information associated with the mean image 304. FIG. 5A shows an example of a deemed true image 314 of an example of an object. FIG. 5B shows an example of a corresponding mean reconstructed image 304, such as can be obtained by applying a statistical learning reconstruction technique, a MAP estimator reconstruction technique, or another reconstruction technique to acquired imaging data 300 of the object. FIG. 5C shows an example of one of several posterior distribution simulated images at 312, such as generated as described herein using the Generator 306 with the mean reconstructed image 304 and the random noise values at 310 as inputs to the Generator 306. Each of the posterior distribution simulated images at 312 provides an alternative representation of the mean reconstruction image 304 shown in FIG. 5B, taking into account a degree of uncertainty associated with the introduction of the random noise values at 310 that were provided as inputs to the Generator 306. For example, by stepping through ones of the posterior distribution simulated images at 312 (an example of one of which is shown in FIG. 5C), a visual representation of the uncertainty associated with the mean reconstructed image 304 can help guide interpretation of the mean reconstructed image 304. 
For example, if the dark spot within the interior of the deemed true image 314 in FIG. 5A represents a tumor within an object such as a human subject, the various posterior distribution simulated images 312 in FIG. 5C can provide alternative representations accounting for noise-induced uncertainties. This can allow the user to better visually interpret the mean reconstruction image 304 shown in FIG. 5B, such as to assess whether there is a tumor fully enclosed within the interior of the image (such as shown in the true image 314 of FIG. 5A), when the mean reconstruction image 304 such as shown in FIG. 5B is unclear or ambiguous with respect to whether such a feature exists. The collection of posterior distribution simulated images 312 may provide at some samples 312 within the uncertainty distribution that may more clearly hint at the existence of such a feature, even with the ambiguity in the mean reconstruction image 304 shown in FIG. 5B. Regarding claim 11. Adler discloses wherein the step (S2a) of performing material-decomposition-based image reconstruction and/or machine learning image reconstruction comprises generating, by a neural network taking energy bin sinograms as input, the at least one reconstructed basis image or image feature (“At 302, an initial image reconstruction can be performed upon the imaging data obtained at 300. In an example, the initial image reconstruction can use a technique that can yield an approximation to the conditional mean (e.g., such as using an approach based on statistical or machine learning, or by using a MAP estimator or the like). This initial reconstructed image 304 can be referred to as an “initial image” or as “the mean image”.”, paragraph 57; note “initial image”= “basis image”). Regarding claim 12. 
Adler discloses wherein at least one basis material image is generated together with at least one uncertainty map, wherein the uncertainty map is a representation of an uncertainty or error estimate of the at least one basis material image, and wherein the at least one basis material image and the at least one uncertainty map are presentable to a user as separate images or in combination (FIGS. 5A, 5B, 5C, and 5D illustrate examples of how the posterior distribution simulated images at 312 can be used to assess uncertainty information associated with the mean image 304. FIG. 5A shows an example of a deemed true image 314 of an example of an object. FIG. 5B shows an example of a corresponding mean reconstructed image 304, such as can be obtained by applying a statistical learning reconstruction technique, a MAP estimator reconstruction technique, or another reconstruction technique to acquired imaging data 300 of the object. FIG. 5C shows an example of one of several posterior distribution simulated images at 312, such as generated as described herein using the Generator 306 with the mean reconstructed image 304 and the random noise values at 310 as inputs to the Generator 306. Each of the posterior distribution simulated images at 312 provides an alternative representation of the mean reconstruction image 304 shown in FIG. 5B, taking into account a degree of uncertainty associated with the introduction of the random noise values at 310 that were provided as inputs to the Generator 306. For example, by stepping through ones of the posterior distribution simulated images at 312 (an example of one of which is shown in FIG. 5C), a visual representation of the uncertainty associated with the mean reconstructed image 304 can help guide interpretation of the mean reconstructed image 304. For example, if the dark spot within the interior of the deemed true image 314 in FIG. 
5A represents a tumor within an object such as a human subject, the various posterior distribution simulated images 312 in FIG. 5C can provide alternative representations accounting for noise-induced uncertainties. This can allow the user to better visually interpret the mean reconstruction image 304 shown in FIG. 5B, such as to assess whether there is a tumor fully enclosed within the interior of the image (such as shown in the true image 314 of FIG. 5A), when the mean reconstruction image 304 such as shown in FIG. 5B is unclear or ambiguous with respect to whether such a feature exists. The collection of posterior distribution simulated images 312 may provide at least some samples 312 within the uncertainty distribution that may more clearly hint at the existence of such a feature, even with the ambiguity in the mean reconstruction image 304 shown in FIG. 5B.

Regarding claim 13. Adler discloses wherein the at least one uncertainty map is presentable as an overlay relative to the at least one basis material image or wherein the at least one uncertainty map is presentable by a distorting filter for the at least one basis material image (FIGS. 5A, 5B, 5C, and 5D illustrate examples of how the posterior distribution simulated images at 312 can be used to assess uncertainty information associated with the mean image 304. FIG. 5A shows an example of a deemed true image 314 of an example of an object. FIG. 5B shows an example of a corresponding mean reconstructed image 304, such as can be obtained by applying a statistical learning reconstruction technique, a MAP estimator reconstruction technique, or another reconstruction technique to acquired imaging data 300 of the object. FIG. 5C shows an example of one of several posterior distribution simulated images at 312, such as generated as described herein using the Generator 306 with the mean reconstructed image 304 and the random noise values at 310 as inputs to the Generator 306.
Each of the posterior distribution simulated images at 312 provides an alternative representation of the mean reconstruction image 304 shown in FIG. 5B, taking into account a degree of uncertainty associated with the introduction of the random noise values at 310 that were provided as inputs to the Generator 306. For example, by stepping through ones of the posterior distribution simulated images at 312 (an example of one of which is shown in FIG. 5C), a visual representation of the uncertainty associated with the mean reconstructed image 304 can help guide interpretation of the mean reconstructed image 304. For example, if the dark spot within the interior of the deemed true image 314 in FIG. 5A represents a tumor within an object such as a human subject, the various posterior distribution simulated images 312 in FIG. 5C can provide alternative representations accounting for noise-induced uncertainties. This can allow the user to better visually interpret the mean reconstruction image 304 shown in FIG. 5B, such as to assess whether there is a tumor fully enclosed within the interior of the image (such as shown in the true image 314 of FIG. 5A), when the mean reconstruction image 304 such as shown in FIG. 5B is unclear or ambiguous with respect to whether such a feature exists. The collection of posterior distribution simulated images 312 may provide at least some samples 312 within the uncertainty distribution that may more clearly hint at the existence of such a feature, even with the ambiguity in the mean reconstruction image 304 shown in FIG. 5B.

Regarding claim 14.
Adler discloses wherein the step of processing the energy resolved x-ray data based on at least one machine learning system to generate a representation of a posterior probability distribution comprises generating, by a neural network, samples of the posterior probability distribution given acquired energy-resolved x-ray data, and wherein the step of generating one or more confidence indications comprises generating an uncertainty map as the standard deviation over a plurality of samples (FIGS. 5A, 5B, 5C, and 5D illustrate examples of how the posterior distribution simulated images at 312 can be used to assess uncertainty information associated with the mean image 304. FIG. 5A shows an example of a deemed true image 314 of an example of an object. FIG. 5B shows an example of a corresponding mean reconstructed image 304, such as can be obtained by applying a statistical learning reconstruction technique, a MAP estimator reconstruction technique, or another reconstruction technique to acquired imaging data 300 of the object. FIG. 5C shows an example of one of several posterior distribution simulated images at 312, such as generated as described herein using the Generator 306 with the mean reconstructed image 304 and the random noise values at 310 as inputs to the Generator 306. Each of the posterior distribution simulated images at 312 provides an alternative representation of the mean reconstruction image 304 shown in FIG. 5B, taking into account a degree of uncertainty associated with the introduction of the random noise values at 310 that were provided as inputs to the Generator 306. For example, by stepping through ones of the posterior distribution simulated images at 312 (an example of one of which is shown in FIG. 5C), a visual representation of the uncertainty associated with the mean reconstructed image 304 can help guide interpretation of the mean reconstructed image 304. 
For example, if the dark spot within the interior of the deemed true image 314 in FIG. 5A represents a tumor within an object such as a human subject, the various posterior distribution simulated images 312 in FIG. 5C can provide alternative representations accounting for noise-induced uncertainties. This can allow the user to better visually interpret the mean reconstruction image 304 shown in FIG. 5B, such as to assess whether there is a tumor fully enclosed within the interior of the image (such as shown in the true image 314 of FIG. 5A), when the mean reconstruction image 304 such as shown in FIG. 5B is unclear or ambiguous with respect to whether such a feature exists. The collection of posterior distribution simulated images 312 may provide at least some samples 312 within the uncertainty distribution that may more clearly hint at the existence of such a feature, even with the ambiguity in the mean reconstruction image 304 shown in FIG. 5B.

Regarding claim 15. Adler discloses wherein the step of processing the energy resolved x-ray data based on at least one machine learning system to generate a representation of a posterior probability distribution comprises applying a neural network, implemented as a variational autoencoder, to encode an input data vector into parameters of a probability distribution of a latent random variable (“As explained further herein, the present techniques can be implemented using a generative model that can be conditioned on some input, e.g., using a Conditional Variational Auto-Encoder (CVAE)”, paragraph 11; “the present techniques can be implemented using a generative model that can be conditioned on some input, e.g., using a Conditional Variational Auto-Encoder (CVAE)”, paragraph 68), and extract a collection of posterior samples of the latent random variable from this probability distribution for processing by a corresponding decoder to obtain posterior observations (“The above-described Deep Posterior Sampling approach for
quantifying uncertainty in image reconstruction can use generative models from machine learning to create random samples s_i from the probability distribution given by P(x=x|y=y). Using such generated random samples, a wide range of one or more estimators can be evaluated. For example, according to the law of large numbers, the posterior mean can be approximated according to E[x | y = y] ≈ (1/n) Σ_{i=1..n} s_i. Likewise, the posterior (pointwise) variance is given by E[(x − E[x | y = y])² | y = y] ≈ (1/n) Σ_{j=1..n} (s_j − (1/n) Σ_{i=1..n} s_i)². Such samples can also be used to answer one or more questions, such as, for example, “what is the probability of there being a tumor at a particular location?” by checking how commonly the tumor is present in the various generated samples”, paragraph 69).

Regarding claim 16. Adler discloses wherein the step of generating one or more confidence indications comprises generating at least one map of the variance or standard deviation of at least one basis coefficient and/or at least one map of the covariance or correlation coefficient of at least one pair of basis functions associated with the at least one reconstructed basis image (“generate posterior distribution simulated images for providing at least one indication of an image error associated with the initial image”, claim 1; “Discriminator 308 uses its convolutional network and the initial image at 304 to distinguish between the posterior distribution simulated images at 312 and the deemed true image 314 for training a statistical learning model for use by the convolutional network of the Generator 306 in then later generating posterior distribution simulated images at 312 at run-time (e.g., after training) such as for determining an image error or uncertainty associated with a subsequently obtained at least one reconstructed initial image at 304”, paragraph 60).

Regarding claim 17.
Adler discloses wherein the representation of a posterior probability distribution is specified by the mean and variance of a plurality of image features (“The above-described Deep Posterior Sampling approach for quantifying uncertainty in image reconstruction can use generative models from machine learning to create random samples s_i from the probability distribution given by P(x=x|y=y). Using such generated random samples, a wide range of one or more estimators can be evaluated. For example, according to the law of large numbers, the posterior mean can be approximated according to E[x | y = y] ≈ (1/n) Σ_{i=1..n} s_i. Likewise, the posterior (pointwise) variance is given by E[(x − E[x | y = y])² | y = y] ≈ (1/n) Σ_{j=1..n} (s_j − (1/n) Σ_{i=1..n} s_i)². Such samples can also be used to answer one or more questions, such as, for example, “what is the probability of there being a tumor at a particular location?” by checking how commonly the tumor is present in the various generated samples”, paragraph 69).

Regarding claim 18. Claim 18 is rejected for the same reasons and rationale as provided above for claim 1.

Regarding claim 19. Claim 19 is rejected for the same reasons and rationale as provided above for claim 2.

Regarding claim 20. Claim 20 is rejected for the same reasons and rationale as provided above for claim 4.

Regarding claim 21. Claim 21 is rejected for the same reasons and rationale as provided above for claim 7.

Regarding claim 22. Claim 22 is rejected for the same reasons and rationale as provided above for claim 8.

Regarding claim 23. Claim 23 is rejected for the same reasons and rationale as provided above for claim 9.

Conclusion

The prior art made of record and not relied upon is considered pertinent to applicant's disclosure. US PG Pub 2008/0095305 to Ziegler et al.
discloses a computer tomography apparatus (100) for examination of an object of interest (107), the computer tomography apparatus (100) comprising detecting elements (123) adapted to detect electromagnetic radiation coherently scattered from an object of interest (107) in an energy-resolving manner, and a determination unit (118) adapted to determine structural information concerning the object of interest (107) based on a statistical analysis of detecting signals received from the detecting elements (123).

Any inquiry concerning this communication or earlier communications from the examiner should be directed to CHRISTOPHER D. WAIT, Esq. whose telephone number is (571) 270-5976. The examiner can normally be reached Monday-Friday, 9:30-6:00.

Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Abderrahim Merouan, can be reached at (571) 270-5254. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

CHRISTOPHER D.
WAIT, Esq. Primary Examiner Art Unit 2683 /CHRISTOPHER WAIT/Primary Examiner, Art Unit 2683
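The sample-based estimators quoted from Adler's paragraph 69 (posterior mean as the sample average, pointwise variance over the samples, and feature probability as the fraction of samples in which the feature appears) can be sketched as follows. This is a minimal illustration only: the array shapes, the threshold "detector", and the alpha-blend overlay (cf. the claim 13 discussion) are all assumptions, not taken from Adler or from the application.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative stand-in for n posterior samples s_i drawn from P(x | y = y),
# e.g. outputs of a trained generator network: shape (n, H, W).
n, H, W = 200, 32, 32
samples = rng.normal(loc=0.5, scale=0.1, size=(n, H, W))

# Posterior mean: E[x | y = y] ~ (1/n) * sum_i s_i
posterior_mean = samples.mean(axis=0)

# Pointwise posterior variance: E[(x - E[x|y=y])^2 | y = y]
posterior_var = samples.var(axis=0)

# Uncertainty map as the pointwise standard deviation over the samples
# (the quantity recited in the claim 14 discussion).
uncertainty_map = samples.std(axis=0)

# "What is the probability of there being a tumor at a particular location?"
# answered by checking how commonly the feature is present across samples;
# a plain intensity threshold stands in for a real feature detector here.
feature_prob = (samples > 0.7).mean(axis=0)

# Presenting the uncertainty map as an overlay on the mean image
# (simple alpha blend; alpha is an illustrative display choice).
alpha = 0.4
overlay = (1 - alpha) * posterior_mean + alpha * uncertainty_map
```

Each derived map has the same spatial shape as one sample, so it can be displayed alongside or blended over the mean reconstruction directly.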
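The variational-autoencoder mechanics at issue in the claim 15 discussion (encode an input data vector into parameters of a latent probability distribution, draw a collection of posterior samples of the latent variable, and decode each sample into a posterior observation) can be illustrated with a minimal numpy sketch. The random linear "encoder" and "decoder" weights below are placeholders for a trained network, and all dimensions are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)

d_in, d_latent = 64, 8            # illustrative input / latent dimensions
x = rng.normal(size=d_in)         # stand-in for an input data vector

# Placeholder "encoder": maps x to the mean and log-variance that
# parameterize the probability distribution of the latent random variable.
W_mu = rng.normal(size=(d_latent, d_in))
W_logvar = rng.normal(size=(d_latent, d_in))
mu = 0.01 * (W_mu @ x)
logvar = 0.01 * (W_logvar @ x)

# Reparameterization: z = mu + sigma * eps with eps ~ N(0, I),
# yielding a collection of posterior samples of the latent variable.
n_samples = 50
eps = rng.normal(size=(n_samples, d_latent))
z = mu + np.exp(0.5 * logvar) * eps

# Placeholder "decoder": each latent sample is processed into one
# posterior observation in the original data space.
W_dec = rng.normal(size=(d_in, d_latent))
posterior_observations = z @ W_dec.T      # shape (n_samples, d_in)
```

The decoded collection plays the same role as the generator outputs in the rejection's quoted passages: a set of posterior observations over which means, variances, and feature frequencies can be computed.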

Prosecution Timeline

Oct 13, 2023
Application Filed
Jan 10, 2026
Non-Final Rejection — §103
Apr 16, 2026
Interview Requested

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12597085
Use of Imperfect Patterns to Encode Data on Surfaces
2y 5m to grant · Granted Apr 07, 2026
Patent 12591964
COMPUTATIONAL METHOD AND SYSTEM FOR IMPROVED IDENTIFICATION OF BREAST LESIONS
2y 5m to grant · Granted Mar 31, 2026
Patent 12590797
METHOD TO REQUALIFY DIE AFTER STORAGE
2y 5m to grant · Granted Mar 31, 2026
Patent 12586148
EFFICIENT IMAGE WARPING BASED ON USER INPUT
2y 5m to grant · Granted Mar 24, 2026
Patent 12585906
IMAGE FORMING APPARATUS
2y 5m to grant · Granted Mar 24, 2026
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

1-2
Expected OA Rounds
76%
Grant Probability
90%
With Interview (+13.6%)
2y 4m
Median Time to Grant
Low
PTA Risk
Based on 399 resolved cases by this examiner. Grant probability derived from career allow rate.
