Prosecution Insights
Last updated: April 19, 2026
Application No. 18/648,959

QUANTITATIVE ULTRASOUND MEDICAL IMAGING ENHANCED BY INTERVENING TISSUE DETERMINATION

Non-Final OA: §101, §103, §112

Filed: Apr 29, 2024
Examiner: GROSS, JASON PATRICK
Art Unit: 3797
Tech Center: 3700 — Mechanical Engineering & Manufacturing
Assignee: Siemens Healthcare
OA Round: 2 (Non-Final)

Grant Probability: 64% (Moderate)
OA Rounds: 2-3
Time to Grant: 2y 8m
With Interview: 99%

Examiner Intelligence

Career Allow Rate: 64% (9 granted / 14 resolved; -5.7% vs TC avg)
Interview Lift: +62.5% (strong; allowance lift among resolved cases with interview)
Typical Timeline: 2y 8m avg prosecution; 34 applications currently pending
Career History: 48 total applications across all art units

Statute-Specific Performance

§101: 22.2% (-17.8% vs TC avg)
§103: 35.9% (-4.1% vs TC avg)
§102: 12.0% (-28.0% vs TC avg)
§112: 26.1% (-13.9% vs TC avg)

Tech Center averages are estimates; based on career data from 14 resolved cases.
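The percentages above are internally consistent, as a minimal sketch of the implied arithmetic shows. The formulas are assumptions (the report does not define how "vs TC avg" is computed; the sketch assumes it is simply the examiner's rate minus the Tech Center average):

```python
# Reconstructing the dashboard arithmetic above (hypothetical formulas:
# the report does not define how "vs TC avg" is computed).

# Statute-specific allowance rates and their deltas, in percent:
examiner = {"101": 22.2, "103": 35.9, "102": 12.0, "112": 26.1}
delta    = {"101": -17.8, "103": -4.1, "102": -28.0, "112": -13.9}

# Implied Tech Center average for each statute (rate minus delta):
tc_avg = {s: round(examiner[s] - delta[s], 1) for s in examiner}
print(tc_avg)  # every statute implies a TC average of 40.0

# Career allowance rate from "9 granted / 14 resolved":
career = round(9 / 14 * 100, 1)
print(career)  # 64.3, shown as "64%" in the header
```

Notably, all four statutes imply the same 40.0% baseline, suggesting the dashboard compares against a single TC-wide average rather than per-statute averages.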

Office Action

§101, §103, §112
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

STATUS OF CLAIMS

Claims 1 and 3 have been amended. Claims 14-21 have been cancelled. Claims 1-13 are pending.

Non-Final Office Action

Applicant’s arguments with respect to the previous rejections of claims 1-13 under 35 U.S.C. 103 have been fully considered and are persuasive. Therefore, the rejection has been withdrawn. New grounds of rejection are made as set forth below.

Claim Objections

Claim 3 is objected to because of the following informalities: Claim 3 recites “measurement, and wherein determining comprises determining wherein the machine-learned model is configured….” Examiner suggests amending as follows: “measurement, wherein the machine-learned model is configured….” Appropriate correction is required.

Claim Rejections - 35 USC § 112

The following is a quotation of 35 U.S.C. 112(b):

(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.

The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:

The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.

Claims 1-13 are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or for applications subject to pre-AIA 35 U.S.C. 112, the applicant), regards as the invention.
Claim 1 recites “determining an ultrasound derived fat fraction (UDFF) of the liver by a first machine-learned model configured to receive the measurement of the tissue and information from the scanning and output the UDFF….” Claim 1 is indefinite because UDFF is exclusively associated with Siemens and determined by a proprietary algorithm. See, e.g., Kubale, Reinhard, et al., “Ultrasound-derived fat fraction for hepatic steatosis assessment: prospective study of agreement with MRI PDFF and sources of variability in a heterogeneous population,” American Journal of Roentgenology 222.6 (2024): e2330775: “One commercially available quantitative ultrasound tool is the ultrasound-derived fat fraction (UDFF).” (p.2, left column, 3rd paragraph). “Three coauthors (G.T., A.G., and Y.L.) were employees of Siemens Healthineers Ultrasound Division. These three individuals contributed to study design; additionally, Y.L. invented the UDFF sequence.” (p.2, left column, Methods). See also Baillie, Michele, et al., “Ultrasound Derived Fat Fraction (UDFF)” (2021): “Following BSC estimation, the value for the UDFF index is finally calculated from the BSC of the tissue sample at 3 MHz using a unique proprietary mathematical algorithm.” (p.13, Calculating UDFF).

Without knowing the algorithm for determining UDFF, one having ordinary skill in the art could not determine the scope of the claim with reasonable certainty. Moreover, it would be impossible for one having ordinary skill in the art to know whether a method is infringing the claim, as Siemens controls the UDFF algorithm. (MPEP 2173: “The primary purpose of this requirement of definiteness of claim language is to ensure that the scope of the claims is clear so the public is informed of the boundaries of what constitutes infringement of the patent.”).

In addition to the above, claim 1 recites that a machine-learned model receives the measurement and the information.
However, claim 1 does not specify how or even whether the measurement and the information determine the outputted UDFF. More specifically, claim 1 does not require that the inputs received by the machine-learned model (i.e., the measurement and the information) are the inputs that ultimately determine the outputted UDFF. Instead, the machine-learned model could use the measurement and information to determine other values or parameters and use those values or parameters as the inputs that ultimately determine the UDFF. Without knowing how the machine-learned model is trained and what inputs determine the output, one having ordinary skill in the art could not understand the scope of the claim with reasonable certainty.

Furthermore, the terms “measurement” and “information” are used so broadly in the specification that it is not clear what limits are placed upon the ultimate inputs of the machine-learned model. While breadth is not to be equated with indefiniteness (MPEP 2173.04), these terms overlap with one another and can be construed very broadly. With respect to the intervening tissue, “[t]he measurement is of the types of tissue, number of layers, thickness of each layer, acoustic characteristics of each layer, and/or other information layer-by-layer.” ([0040]). With respect to the ROI, “[t]he information from the ROI is the information from the liver used to calculate UDFF or other quantification (e.g., elasticity). The information from the intervening tissue is information that accounts for losses and/or wave distortions caused by the tissue, which may influence the quantification in the ROI.” ([0048]). Applicant’s disclosure even equates the meanings of measurement and information: “[T]he image processor 340 is configured by the machine-learned model 355 to output the quantification in response to input of the information from the ROI and information from intervening tissue.” ([0070]).
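The examiner's hypothesis (that a model may receive the measurement and information without those inputs being what determines the output) can be illustrated with a toy sketch. The functions and numbers below are hypothetical and are not the claimed method:

```python
# Two hypothetical models. Both literally "receive the measurement of the
# tissue and information from the scanning and output the UDFF," yet only
# the first lets the received inputs directly determine the output.

def udff_direct(measurement: float, information: float) -> float:
    # The received inputs directly determine the output (invented formula).
    return 0.5 * measurement + 0.5 * information

def udff_indirect(measurement: float, information: float) -> float:
    # The received inputs are used only to derive an intermediate
    # parameter; that parameter, not the inputs themselves, is what
    # ultimately sets the UDFF.
    intermediate = 1.0 if (measurement + information) > 4.0 else 0.0
    return 5.0 + intermediate

print(udff_direct(2.0, 3.0))    # 2.5: output tracks the inputs
print(udff_indirect(2.0, 3.0))  # 6.0: output reflects only a derived value
```

Both functions satisfy the claim language as drafted, which is the ambiguity the rejection describes.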
In addition to the above, the term “tissue” could refer to all tissue between the liver and the transducer or to a single layer of muscle or fat between the liver and the transducer. To illustrate the indefiniteness: the measurement could be a measured distance from the transducer to the liver or a particular parameter of a single layer of muscle and/or fat calculated by a formula using ultrasound data relating to the intervening tissue; the information of the ROI could be any quantification of the liver; and neither the measurement nor the information need be the input that determines the outputted UDFF. Combining these two very broad and overlapping meanings with the fact that the claim does not require that the “measurement” and “information” be the inputs that determine the outputted UDFF, a person having ordinary skill in the art could not determine the scope of the claim with reasonable certainty.

Claims 2-13 depend directly or indirectly from claim 1 and, as such, are also indefinite. Further clarification is needed to overcome the rejection.

The following is a quotation of the first paragraph of 35 U.S.C. 112(a):

(a) IN GENERAL.—The specification shall contain a written description of the invention, and of the manner and process of making and using it, in such full, clear, concise, and exact terms as to enable any person skilled in the art to which it pertains, or with which it is most nearly connected, to make and use the same, and shall set forth the best mode contemplated by the inventor or joint inventor of carrying out the invention.

The following is a quotation of the first paragraph of pre-AIA 35 U.S.C. 112:

The specification shall contain a written description of the invention, and of the manner and process of making and using it, in such full, clear, concise, and exact terms as to enable any person skilled in the art to which it pertains, or with which it is most nearly connected, to make and use the same, and shall set forth the best mode contemplated by the inventor of carrying out his invention.

Claims 1-13 are rejected under 35 U.S.C. 112(a) or 35 U.S.C. 112 (pre-AIA), first paragraph, as failing to comply with the enablement requirement. The claims contain subject matter which was not described in the specification in such a way as to enable one having ordinary skill in the art to which it pertains, or with which it is most nearly connected, to make and/or use the invention.

For similar reasons as discussed above, claims 1-13 are non-enabling. Without knowing the “unique proprietary mathematical algorithm” used to calculate UDFF, one having ordinary skill in the art would not know how to make and/or use the invention. Likewise, given the indefiniteness caused by the meanings of “measurement” and “information,” combined with their possibly indirect use by the machine-learned model, one having ordinary skill in the art would not know how to make and/or use the invention.

RESPONSE TO APPLICANT’S ARGUMENTS

The prior 112(b) rejection of claim 3 has been withdrawn, and new 112(a) and (b) rejections have been made as discussed above.

Claim Rejections - 35 USC § 101

35 U.S.C. 101 reads as follows:

Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.

Claims 1-13 are rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more.
The claims recite: determining an ultrasound derived fat fraction (UDFF) of the liver using a first machine-learned model configured to receive the measurement of the tissue and information from the scanning and output the UDFF. The determining operation, as drafted and under its broadest reasonable interpretation, recites a mental process and/or a mathematical concept. It recites a mental process because the determining operation can be performed in the human mind. (See MPEP § 2106.04(a)(2)(III)). Examples of mental processes include “observations, evaluations, judgments, and opinions.” (Id.). In this case, evaluating ultrasound image data (e.g., in which the evaluation is based on measurements of the intervening tissue and information from scanning, such as image data) can be a human cognitive action that has been performed for decades. The fact that a “machine-learned model” is used does not negate the judicial exception. (See MPEP § 2106.04(a)(2)(III), explaining that claims requiring a computer may still recite a mental process; see also the recently decided Recentive Analytics, Inc. v. Fox Corp., No. 2023-2437 (Fed. Cir. Apr. 18, 2025): “[C]laims that do no more than apply established methods of machine learning to a new data environment” are not patent eligible).

The determining operation also recites a mathematical concept because determining a UDFF of the liver may involve mathematical calculations that are based on attenuation and backscatter from ultrasound data. (See, e.g., the functions used to calculate UDFF in [0056] and [0057] of LABYED ‘323 discussed below). Again, the fact that a “machine-learned model” is used does not negate the judicial exception. (See MPEP § 2106.04(a)(2)(III)). See CyberSource Corp. v. Retail Decisions, Inc., 654 F.3d at 1376, 99 USPQ2d at 1373 (“[C]omputational methods which can be performed entirely in the human mind are the types of methods that embody the ‘basic tools of scientific and technological work’”).

This judicial exception is not integrated into a practical application because the additional elements add insignificant extra-solution activity. In particular, the additional elements include measuring, by the ultrasound scanner, tissue between a liver and a transducer of the ultrasound scanner, and scanning, by the ultrasound scanner, a region of interest in the liver. However, the measuring and scanning operations are pre-solution activity that must occur in order to determine the UDFF (i.e., to perform the judicial exception). Moreover, each of these generically recites a step of quantitative ultrasound (i.e., measuring a property of the tissue). The ultrasound scanner is also generically recited without any meaningful limitations to its scope. The additional elements also include displaying the ultrasound derived fat fraction. However, this is merely post-solution activity (i.e., displaying a result of the judicial exception). Accordingly, each of the additional elements adds insignificant extra-solution activity to the judicial exception. (See MPEP § 2106.05(g)).

Moreover, the claims do not include additional elements that are sufficient to amount to significantly more than the judicial exception. As discussed above, the additional elements generically recite collecting data and then, after performing the judicial exception, displaying a result. The additional elements do not meaningfully limit the abstract idea. Accordingly, the claims essentially recite “collecting information, analyzing it, and displaying certain results of the collection and analysis,” where the data analysis steps are recited at a high level of generality such that they could practically be performed in the human mind. Electric Power Group v. Alstom, S.A., 830 F.3d 1350, 1353-54, 119 USPQ2d 1739, 1741-42 (Fed. Cir. 2016).

Dependent claims 2-13 do not render the subject matter patent eligible. While claim 2 does specify that at least one measurement includes a thickness, the claim does not specify whether the thickness is of a particular tissue. It is insignificant pre-solution activity because measuring a thickness of tissue is commonly done in many ultrasound protocols where the propagation path goes through tissue.

With respect to claim 3, measuring a backscatter coefficient and/or attenuation of the tissue does not integrate the judicial exception into a practical application or meaningfully limit the claim. Backscatter coefficient and/or attenuation is measured in many quantitative ultrasound processes. (See, e.g., FERRAIOLI and KRISHNAN discussed with respect to the Section 103 rejections). It is insignificant pre-solution activity. As to the latter recitation of claim 3 (and in light of the Section 112 rejection), knowing the location of measurements is necessary when calculating parameters. (See, e.g., the discussion regarding depth in FERRAIOLI and KRISHNAN).

Claim 4 recites measuring a location along the propagation path and a graphical user interface object. Both limitations (i.e., measuring tissue that the propagation path must go through and positioning graphical objects over the B-mode image) are well-known and performed in many ultrasound protocols. (See, e.g., the Section 103 rejection based on ALI below). Furthermore, as discussed above, the step of measuring tissue is insignificant pre-solution activity. Placing graphical objects on landmarks does not meaningfully limit the claims.

Claims 5 and 6 recite limitations to the step of measuring. However, both are generically recited in the context of ultrasound imaging. For example, claim 5 recites measuring an acoustic property of the tissue. Ultrasound is fundamentally based upon acoustic properties.
With respect to claim 6, identifying different tissue layers of the tissue, wherein the measurement is derived from the different tissue layers, is known. (See, e.g., the rejection of claim 6).

Claim 7 does not meaningfully limit the claim, as all measurements along the propagation path of the tissue would likely be considered for the UDFF determination. Claim 7 recites that a “characteristic” of each layer is input but, as described in Applicant’s specification, the term has a very broad meaning. As such, it does not meaningfully limit the judicial exception.

Claim 8 recites that the machine-learned model is configured by training to account for losses and/or wave distortions caused by the tissue. However, as best understood by the examiner, UDFF measurements inherently account for losses and/or wave distortions caused by tissue, so it would be necessary for the machine-learned model to consider them. As such, the limitation does not meaningfully limit the judicial exception.

Claim 9 recites that a location of a liver capsule is automatically detected and graphical objects are automatically placed on the image. However, automatic detection is known. (See the discussion regarding the Section 103 rejection of claim 9). Moreover, automating processes that have historically been performed by humans does not meaningfully limit the abstract idea.

Claim 10 recites examining a field of view by a second machine-learned model and outputting guidance to position the transducer to image the liver based on output of the second machine-learned model. Claim 10 recites a judicial exception (i.e., a mental process) of judging the field of view to determine whether a position of a transducer should be changed.

Claim 11 recites that examining includes examining for shadows and/or vessels, wherein the guidance reduces the shadows and/or vessels in the field of view. However, as best understood by the examiner, examining for shadows and/or vessels is what must be done in order to acquire sufficient data.
As such, it does not meaningfully limit the judicial exception.

Claim 12 recites that the examining comprises scoring the field of view for automated placement of the region of interest. However, scoring a field of view recites another mental process (and/or mathematical concept) and does not integrate the judicial exceptions into a practical application.

Claim 13 recites determining the UDFF as a field of UDFF values distributed in a region of interest in the liver. This recites a mental process (and/or mathematical concept) and does not integrate the judicial exceptions into a practical application. Furthermore, displaying an ultrasound image with the region of interest coded by the UDFF values is insignificant extra-solution activity.

Accordingly, none of the claims is patent eligible.

RESPONSE TO APPLICANT’S ARGUMENTS

With respect to the Section 101 rejection, Applicant relies upon a conclusory statement that the claimed method does not fall into one of the three groupings: “Instead, claim 1 is directed to ultrasound-based measurement in a particular way of a patient’s liver by an ultrasound scanner.” Examiner disagrees. As discussed above, the determining operation recites a mental process and a mathematical concept. Furthermore, it is not clear how claim 1 is directed to an ultrasound-based measurement in a “particular” way. As discussed above with respect to the Section 112(a) and (b) rejections, the claim lacks meaningful limitations as to what measurements/information are acquired and how the UDFF is determined.

With respect to the judicial exception being integrated into a practical application, Applicant argues: “In particular, measurements of intervening tissue relative to the liver and transducer as well as scanning a region in the liver by an ultrasound scanner, which scanning and measurements cannot be done by a mental process, are used to determine, by a machine-learned model, diagnostically useful liver information. Claim 1 is directed to a new way to determine ultrasound-derived fat fraction with an ultrasound scanner for diagnostically useful display.” Examiner disagrees. Whether the method is new is not dispositive with respect to the issue of patent-eligible subject matter. As explained above, the claims recite a mental process and a mathematical concept. Moreover, the claims are recited at a high level without specificity. (MPEP 2106.04(d)). For example, “measurement” can be interpreted broadly to include physical distances or calculated parameters, such as AC and BSC. Likewise, “information” could be any ultrasound data or parameters calculated from the ultrasound data, such as AC, BSC, or other parameters. As another example, the tissue that is measured could be all tissue between the liver and the transducer or just one layer of tissue (e.g., muscle, fat, skin). Lastly, the machine-learned model is configured to receive the measurement of the tissue and information from the scanning, and it is not clear how the measurement and the information determine the outputted UDFF, as the claim does not recite how the model is trained. (See the recently decided Recentive Analytics, Inc. v. Fox Corp., No. 2023-2437 (Fed. Cir. Apr. 18, 2025): “[C]laims that do no more than apply established methods of machine learning to a new data environment” are not patent eligible).

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains.
Patentability shall not be negated by the manner in which the invention was made.

Claims 1-5 and 8 are rejected under 35 U.S.C. 103 as being unpatentable over U.S. Patent Appl. Publ. No. 2018/0289323 A1 to Labyed (hereinafter “LABYED ‘323”) and Ferraioli, Giovanna, et al., “Liver fat quantification with ultrasound: depth dependence of attenuation coefficient,” Journal of Ultrasound in Medicine 42.10 (2023): 2247-2255 (hereinafter “FERRAIOLI”), and/or Krishnan, K.B., Nagaraj, N., Singhal, N., Thapar, S., and Yadav, K., “A two-parameter model for ultrasonic tissue characterization with harmonic imaging,” arXiv preprint arXiv:1712.03495 (2017) (hereinafter “KRISHNAN”).

With respect to claim 1 (and in light of the Section 112(a) and (b) rejections), LABYED ‘323 teaches a method for ultrasound imaging with an ultrasound scanner. LABYED ‘323 teaches embodiments for ultrasound imaging that measure a tissue property, such as liver fat fraction. ([0002]).

LABYED ‘323 teaches:

scanning, by the ultrasound scanner, a region of interest in the liver. “A medical diagnostic ultrasound scanner performs the measurements by acoustically generating the waves and measuring the responses.” ([0016]). “[T]he tissue property is indicated in a region of interest (sub-part of the field of view) or over the entire field of view.” ([0062]).

determining an ultrasound derived fat fraction (UDFF) of the liver by a first machine-learned model configured to receive…information from the scanning and output the UDFF. LABYED ‘323 teaches a “machine-learnt classifier” that estimates a tissue property. (See, e.g., [0049]). “Any machine learning and resulting machine-learnt classifier may be used.” ([0049]; see also [0050]-[0053]). The tissue property may be UDFF: “Any tissue property may be estimated. For example, the fat fraction of tissue is estimated.” ([0047]). Fat fraction of tissue includes ultrasound-derived fat fraction (UDFF). (See, e.g., [0055]).
The classifier receives input values of the ultrasound parameters and outputs the UDFF: “The percentage of fat is used as the ground truth so that the machine learning learns to classify the percentage of fat from input values for the ultrasound parameters.” ([0050]; see also [0051], [0052], and [0078]).

displaying the ultrasound derived fat fraction. “In act 38, the ultrasound scanner or a display device displays the estimated tissue parameter. For example, an image of the fat fraction is generated.” ([0057]).

LABYED ‘323 does not explicitly teach measuring, by the ultrasound scanner, tissue between a liver and a transducer of the ultrasound scanner, or that the first machine-learned model also receives the measurement of the tissue in addition to the information from the scanning. Nonetheless, LABYED ‘323 does teach that the machine-learning model (i.e., classifier) may be trained using other measurements or information. (See, e.g., [0046] and [0050]). More specifically, LABYED ‘323 teaches that any information about the patient may be used for estimating the tissue property. ([0046]). “For example, clinical information for the patient is used. The clinical information may be the medical history, age, body-mass index, sex, fasting or not, blood pressure, diabetic or not, and/or a blood biomarker measure…Any information about the patient may be included.” ([0046]). Likewise, “[o]ther sources of ground truth may be used for a given tissue property, such as from biopsies, modeling, or other measurements.” ([0050]). Moreover, LABYED ‘323 also teaches that the attenuation coefficient, one of the parameters of UDFF, is a function of depth. ([0025]).

In the same field of endeavor, FERRAIOLI conducted a study that estimated the influence of various liver depths on the attenuation coefficient of various QUS vendors. (Abstract). FERRAIOLI teaches that the attenuation coefficient, one of the parameters used to determine UDFF, is depth dependent. (Abstract).
FERRAIOLI studied three different systems by Canon, Philips, and Siemens. (p.2248, right column). For each system, FERRAIOLI measured the skin-to-liver capsule distance (SCD), which is a measurement that includes the tissue between the liver and the ultrasound transducer. FERRAIOLI found that “[t]he results of this study show that the AC values depend on the depth of the measurement and that there is a progressive decrease of the values that is directly related to the depth. This finding is of utmost relevance because thresholds for detecting and grading liver steatosis might vary depending on the ROI’s depth for AC measurements.” (p.2253, Discussion).

FERRAIOLI also taught that the ROI should be close to the “elevational focus” of the transducer: “ROI close to elevational focus should likely be the default setting because AC is overestimated at depths lower than elevational focus and underestimated at depths higher than elevational focus.” (p.2254, right column). FERRAIOLI noted that in one earlier study of a Canon system the AC measurements with the highest repeatability were acquired by a transducer with an elevational focus that coincided with a center of the ROI. This occurred because the recommended placement of the ROI below the liver capsule (i.e., 2 cm below the capsule) plus the mean thickness of the subcutaneous tissue was about equal to the elevational focus. In other words, for the ROI to be close to the elevational focus as recommended by FERRAIOLI, one should consider the depth below the liver capsule and the thickness of the subcutaneous tissue.

FERRAIOLI also found that the skin-to-liver capsule distance (SCD) affected the AC values for two systems that had about the same SCD, including Siemens, and that “[b]ody mass index” and “waist circumference” were strongly correlated to the skin-to-liver capsule distance. (p.2252, right column). Both BMI and waist circumference are associated with a thicker subcutaneous layer and non-alcoholic fatty liver disease.
Accordingly, FERRAIOLI teaches that AC measurements depend upon the depth and size of the ROI and that the thickness of the subcutaneous tissue correlates to BMI and waist circumference and can affect AC measurements.

In the same field of endeavor, KRISHNAN teaches a method that has been “approximated and generalized to estimate AC and BSC for tissue layer underlying a more attenuative subcutaneous layer.” (Abstract). KRISHNAN proposes a model that is based, in part, on the attenuation coefficient and the backscatter coefficient of the subcutaneous layer: “[T]he attenuation of the ultrasound beam through the subcutaneous layer of thickness δ in the Sample tissue where α_f (in units of Neper/cm) is the attenuation coefficient of the layer. The backscatter coefficient of the subcutaneous layer is assumed to be BSC_S, the same as that of the Sample tissue.” (p.7, 3.1 Extension of Basic Model to Practice). KRISHNAN notes that errors may occur in the model at larger thicknesses of the subcutaneous layer: “However, errors could accrue at larger distal depths z_2 when α_S > α_R with increase in the thickness of the subcutaneous layer δ.” (p.8, prior to 3.2). More specifically, the attenuation of the sample tissue is greater than the attenuation of the reference tissue due to the greater depth. KRISHNAN also teaches that the thickness can be determined by viewing the image produced by the ultrasound scanner. (p.16, bottom). However, KRISHNAN notes that the identification of the subcutaneous layer can be “automated and improved by incorporating learning-based methods.” (p.22).

It would have been obvious to one having ordinary skill in the art to modify the ML model of LABYED ‘323 to account for the tissue between a liver and a transducer of the ultrasound scanner such that the ML model received a measurement of the tissue in addition to the information from the scanning.
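The bias that an overlying subcutaneous layer imposes on a backscatter estimate can be sketched numerically. The snippet below is a textbook single-frequency, two-way exponential attenuation correction, not KRISHNAN's actual formulation, and the α and δ values are invented for illustration:

```python
import math

def compensate_bsc(measured_bsc: float, alpha_np_per_cm: float,
                   thickness_cm: float) -> float:
    """Undo two-way attenuation through an overlying layer.

    Round-trip amplitude decays by exp(-2 * alpha * delta), so intensity
    (and hence the apparent backscatter coefficient) decays by
    exp(-4 * alpha * delta); multiplying by the inverse compensates.
    """
    return measured_bsc * math.exp(4.0 * alpha_np_per_cm * thickness_cm)

alpha = 0.08       # Np/cm: assumed AC of the subcutaneous layer
delta = 2.0        # cm: assumed layer thickness
measured = 1.0e-3  # apparent BSC observed beneath the layer (arbitrary units)

true_bsc = compensate_bsc(measured, alpha, delta)
print(round(true_bsc / measured, 3))  # 1.896: the layer biases BSC ~2x low
```

The correction factor grows exponentially with thickness, consistent with KRISHNAN's caution that errors accrue as the subcutaneous thickness δ increases.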
LABYED ‘323 teaches that other information or measurements may be considered by the machine-learning model and also teaches that the attenuation coefficient, one of the parameters of UDFF, is a function of depth. ([0025]). FERRAIOLI and KRISHNAN confirm that the subcutaneous layer can affect AC measurements, and KRISHNAN teaches that the subcutaneous layer can affect the BSC measurements. UDFF is based upon AC and BSC measurements. Accordingly, it would have been obvious to one having ordinary skill in the art to use a trained machine-learning model that outputs a UDFF in response to receiving an input measurement of the subcutaneous layer (e.g., at least subcutaneous layer thickness, AC measurement, and/or BSC measurement) and information from scanning the liver. One would have been motivated to use such a machine-learned model to correct or otherwise account for the effect that the intervening tissue has on liver quantification in order to obtain a more accurate UDFF. There would have been a reasonable expectation of success, as LABYED ‘323 teaches that machine-learned models can be trained using various information.

NOTE: FERRAIOLI was published in 2023 and appears to include information that was provided by Siemens. Even if an inventor provided the information, the above analysis is still applicable because FERRAIOLI studied two other systems in addition to the Siemens system. Examiner also notes it is known that the output of a third system, FibroScan by Echosens, is improved when an adjustment is made based on the skin-capsular distance (SCD). (Kimura, Syunichiro, et al., “Effect of skin capsular distance on controlled attenuation parameter for diagnosing liver steatosis in patients with nonalcoholic fatty liver disease,” Scientific Reports 11.1 (2021); see, e.g., Abstract: “Adjustment of the CAP using the SCD improves the diagnostic performance of the CAP in NAFLD.”).
Regardless, KRISHNAN’s description of the effect that the subcutaneous layer has on measurements is still applicable and further supports using a measurement of the intervening tissue as an input for the machine-learning model.

With respect to claim 2, LABYED ‘323 does not explicitly teach that measuring comprises measuring a thickness as the measurement. However, LABYED ‘323 teaches that BMI is a clinical factor that may be considered, and FERRAIOLI teaches measuring a skin-to-capsule distance, which is correlated with BMI. While FERRAIOLI’s skin-to-liver capsule distance could be interpreted as a thickness, KRISHNAN also specifically teaches extracting the thickness of the subcutaneous fat layer based on images from the ultrasound scanner. (See bottom of p.16 and Figure 7). KRISHNAN teaches determining this thickness in order to account for the attenuation caused by the subcutaneous fat. (See, e.g., Abstract).

It would have been obvious to one having ordinary skill in the art to measure a thickness of an intervening tissue. LABYED ‘323 teaches that other information or measurements may be considered, including BMI, which is associated with subcutaneous fat. FERRAIOLI and KRISHNAN confirm that the subcutaneous layer can affect AC measurements, and KRISHNAN teaches that the subcutaneous layer can affect the BSC measurements. UDFF is based on AC and BSC measurements. One would have been motivated to configure the machine-learning model to receive the thickness as the measurement in order to output a more accurate UDFF. There would have been a reasonable expectation of success, as LABYED ‘323 teaches that machine-learned models can be trained using various information.
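The combination proposed for claims 1-2 (a model receiving an intervening-tissue measurement plus liver-scan information and outputting a fat fraction) can be sketched generically. Everything below is invented for illustration: the feature layout, the synthetic ground-truth rule, and the least-squares fit (a stand-in for the "any machine learning" of LABYED ‘323 [0049]); it is not the reference's classifier:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic training exams (all numbers invented). Feature columns:
# [subcutaneous thickness (cm), layer AC, layer BSC (log10),
#  liver AC, liver BSC (log10)]
X = rng.uniform([0.5, 0.3, -4.0, 0.4, -4.0],
                [4.0, 1.2, -2.0, 1.5, -2.0], size=(200, 5))

# Invented ground-truth rule: fat fraction driven by the liver
# parameters, with a bias contributed by the intervening-layer thickness.
y = 17.0 + 12.0 * X[:, 3] + 3.0 * X[:, 4] - 1.5 * X[:, 0]

# Fit a linear model by least squares (design matrix with intercept).
A = np.column_stack([X, np.ones(len(X))])
coef, *_ = np.linalg.lstsq(A, y, rcond=None)

# The fitted model outputs a UDFF-like value from a measurement of the
# intervening tissue plus information from scanning the liver.
exam = np.array([2.0, 0.8, -3.0, 1.0, -3.0, 1.0])  # last entry = intercept
print(round(float(exam @ coef), 3))  # 17.0 under the invented rule
```

Because the synthetic data are noise-free and linear, the fit recovers the invented rule exactly; a real model would of course be trained on measured ground truth, per LABYED ‘323 [0050].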
With respect to claim 3, LABYED ‘323 does not explicitly teach that measuring comprises measuring a backscatter coefficient and/or attenuation of the tissue as the measurement or that the determining comprises the machine-learned model receiving the measurement of the tissue as the backscatter coefficient and/or attenuation of the tissue. However, LABYED ‘323 explicitly teaches that the UDFF of the liver may be calculated using the attenuation coefficient and the backscatter coefficient. (see, e.g., [0056]). In the same field of endeavor, KRISHNAN teaches configuring the model to consider the AC and the BSC of the subcutaneous fat layer in order to more accurately predict disease progression. (Abstract; see also “3.1 Extension of Basic Model to Practice” on p.7 in which the model is adapted based on the attenuation coefficient and the backscatter coefficient of the subcutaneous layer.) It would have been obvious to one having ordinary skill in the art to include a measurement of a backscatter coefficient and/or attenuation coefficient of the intervening tissue and provide the backscatter coefficient and/or attenuation coefficient of the tissue to the machine-learned model. LABYED ‘323 teaches that other information or measurements may be considered. KRISHNAN confirms that the subcutaneous layer can affect the determination of disease progression of the liver and that at least one of the AC or the BSC can affect that determination. One would have been motivated to configure the machine-learning model to receive the backscatter coefficient and/or attenuation of the tissue in order to output a more accurate UDFF. There would have been a reasonable expectation of success as LABYED ‘323 teaches that machine-learned models can be trained using various information. With respect to claim 4, LABYED ‘323 in view of FERRAIOLI and KRISHNAN teach that the measuring comprises measuring the tissue between a liver capsule of the liver and the transducer. 
As discussed above in the rejection of claim 1, KRISHNAN teaches measuring the thickness of the subcutaneous fat layer. FERRAIOLI also teaches that the liver capsule is indicated by an indicator on a display. Figure 1C on p.2249 of FERRAIOLI shows a horizontal line at the liver capsule. (Horizontal line is even with the cross-hairs to the left of the image.) NOTE: While Figure 1C is shown in FERRAIOLI and possibly provided by the inventor, the image is consistent with other images that were publicly available before 2023. For example, the previously issued Office Action relied upon Figure 3a of GAO, which shows an indicator for the liver capsule. GAO used Siemens’s Sequoia system, which is the same system used in FERRAIOLI. It would have been obvious to one having ordinary skill in the art to indicate the liver capsule on a display as illustrated in FERRAIOLI and/or GAO. One would have been motivated to include the indicator to guide and assure the technician that the ROI is properly positioned away from the liver capsule while acquiring measurements. There would have been a reasonable expectation of success as FERRAIOLI and/or GAO teach that user displays can include such indicators. With respect to claim 5, LABYED ‘323 does not explicitly teach that the measuring comprises measuring an acoustic property based on a type of the tissue as the measurement. However, as explained above with respect to claim 1 and claim 3, KRISHNAN teaches configuring the model to consider the AC and the BSC of the subcutaneous fat layer in order to more accurately predict disease progression. (Abstract; see also “3.1 Extension of Basic Model to Practice” on p.7 in which the model is adapted based on the attenuation coefficient and the backscatter coefficient of the subcutaneous layer.) It would have been obvious to one having ordinary skill in the art to include measuring at least one acoustic property of the subcutaneous fat layer. 
KRISHNAN confirms that the subcutaneous layer can affect the determination of disease progression of the liver and that the thickness and at least one of the AC or the BSC can affect that determination. One would have been motivated to train the machine-learning model to account for the thickness and/or an acoustic property of the subcutaneous fat layer when outputting a UDFF because considering how the intervening tissue affects the ultrasound signal would provide a more accurate UDFF as taught by KRISHNAN. With respect to claim 8, the combination of LABYED ‘323, FERRAIOLI, and KRISHNAN teach that the machine-learned model is configured by training to account for losses and/or wave distortions caused by the tissue. As explained above with respect to claim 1 and claim 3, KRISHNAN teaches configuring the model to consider the AC and the BSC of the subcutaneous fat layer in order to more accurately predict disease progression. A machine-learned model trained to account for the thickness of the subcutaneous fat layer and AC and/or BSC measurements when outputting a UDFF would necessarily account for the losses and/or wave distortions caused by the tissue. Claims 6 and 7 are rejected under 35 U.S.C. 103 as being unpatentable over U.S. Patent Appl. Publ. No. 2018/0289323 A1 to Labyed (hereinafter “LABYED ‘323”) and Ferraioli, Giovanna, et al. “Liver fat quantification with ultrasound: depth dependence of attenuation coefficient.” Journal of Ultrasound in Medicine 42.10 (2023): 2247-2255. (hereinafter “FERRAIOLI”) and/or Krishnan KB, Nagaraj N, Singhal N, Thapar S, Yadav K. A two-parameter model for ultrasonic tissue characterization with harmonic imaging. arXiv preprint arXiv:1712.03495. 2017 Dec 10. (hereinafter “KRISHNAN”) as applied to claim 1 above, and further in view of Wear et al. “US backscatter for liver fat quantification: an AIUM-RSNA QIBA pulse-echo quantitative ultrasound initiative.” Radiology 305.3 (2022): 526-537 (hereinafter “WEAR”). 
With respect to claim 6, KRISHNAN teaches that the measuring comprises identifying different tissue layers of the tissue because identifying the subcutaneous fat layer necessarily distinguishes it from different tissue layers. However, none of LABYED ‘323, FERRAIOLI, or KRISHNAN explicitly teach that the measurement is derived from the different tissue layers. In the same field of endeavor, WEAR is a review article that “explains the science and clinical evidence underlying backscatter for liver fat assessment. Recommendations for data collection are discussed, with the aim of minimizing potential confounding effects associated with technical and biologic variables.” (emphasis added) (Abstract). Notably, WEAR concerns measuring a backscatter coefficient (BSC), which is one of the parameters on which UDFF calculations are based. “Calculation of BSC requires compensation for the total attenuation of US by all intervening tissues between the body surface and the deepest point in the ROI in the liver.” (emphasis added) (p.534, right column, first paragraph of Section entitled “Compensation for Attenuation…”). One approach includes compensating for each individual tissue of these intervening tissues. “This approach relies on identification and measurement of tissue layers (i.e., skin, muscle, fat, and liver) in the propagation path to the ROI.” (emphasis added) (Id., following paragraph). For this approach, “[a]n AC value at each BSC measurement frequency is assigned to each tissue using representative values in the literature.” (Id). 
KRISHNAN already teaches that one should consider the thickness, AC, and BSC of the subcutaneous fat layer. It would have been obvious to one having ordinary skill in the art to identify different tissue layers of the tissue when measuring the intervening tissue and for the measurement to be derived from the different tissue layers. KRISHNAN already teaches distinguishing the subcutaneous fat layer and identifying a thickness and acoustic properties of the fat layer. WEAR teaches that one should consider each of the intervening tissues (e.g., skin, muscle, fat). One having ordinary skill in the art would have been motivated to train the machine-learning model to account for the thickness and/or an acoustic property of each of the intervening tissue layers when outputting a UDFF because considering how the intervening tissues affect the ultrasound signal would provide a more accurate UDFF as taught by KRISHNAN and WEAR. With respect to claim 7 (depending from claim 6), WEAR and KRISHNAN teach wherein a characteristic of each of the different tissue layers is input to the first machine-learned model as the measurement. As discussed above with respect to claim 6, WEAR teaches that “[c]alculation of BSC requires compensation for the total attenuation of US by all intervening tissues between the body surface and the deepest point in the ROI in the liver.” (p.534, right column, first paragraph of Section entitled “Compensation for Attenuation…”). One approach for compensating “relies on identification and measurement of tissue layers (i.e., skin, muscle, fat, and liver) in the propagation path to the ROI.” (emphasis added) (Id., following paragraph). KRISHNAN already teaches that one should consider the thickness, AC, and BSC of the subcutaneous fat layer. 
It would have been obvious to one having ordinary skill in the art to identify different tissue layers of the tissue. It would have also been obvious to input a characteristic of each of the different tissue layers into the machine-learned model. KRISHNAN already teaches distinguishing the subcutaneous fat layer and identifying a thickness and acoustic properties of the fat layer. WEAR teaches that one should consider each of the intervening tissues (e.g. skin, muscle, fat). One having ordinary skill in the art would have been motivated to train the machine-learning model to account for the thickness and/or an acoustic property of each of the intervening tissue layers when outputting a UDFF because considering how the intervening tissues affect the ultrasound signal would provide a more accurate UDFF as taught by KRISHNAN and WEAR. Claim 9 is rejected under 35 U.S.C. 103 as being unpatentable over U.S. Patent Appl. Publ. No. 2018/0289323 A1 to Labyed (hereinafter “LABYED ‘323”) and Ferraioli, Giovanna, et al. “Liver fat quantification with ultrasound: depth dependence of attenuation coefficient.” Journal of Ultrasound in Medicine 42.10 (2023): 2247-2255. (hereinafter “FERRAIOLI”) and/or Krishnan KB, Nagaraj N, Singhal N, Thapar S, Yadav K. A two-parameter model for ultrasonic tissue characterization with harmonic imaging. arXiv preprint arXiv:1712.03495. 2017 Dec 10. (hereinafter “KRISHNAN”) as applied to claim 1 above, and further in view of Gao et al. “Reliability of performing ultrasound derived SWE and fat fraction in adult livers.” Clinical imaging 80 (2021): 424-429 (hereinafter “GAO”) and U.S. Patent Appl. Publ. No. 2024/0398382 A1 (hereinafter “ALI”). With respect to claim 9, none of LABYED ‘323, FERRAIOLI, and KRISHNAN teach the claim limitations. 
However, in the same field of endeavor, GAO describes a study to test the reproducibility of performing conventional point shear wave elastography (pSWE), auto-pSWE, and ultrasound derived fat fraction (UDFF) in adult livers. (Abstract). “An Acuson Sequoia ultrasound scanner (Siemens Healthineers, Mountain View, CA, USA) equipped with a curvilinear probe (5C1, bandwidth 1.0-5.7 MHz) was used to acquire grayscale imaging and measure ultrasound SWE and fat fraction (UDFF) parameters of the liver.” (p.425, left column). As shown in Figures 3a-c below, GAO teaches placing a line from the transducer through the liver capsule (see vertical line extending through image), an indicator on the liver capsule (overlaying text identifies this line as “liver capsule” in Figure 3a), and the region of interest in the liver along the line (ROI has curved trapezoid-like shape), and wherein determining the UDFF comprises determining the UDFF in the region of interest. “The region of interest (ROI, 3.0 cm × 3.0 cm, laterally by axially) for measuring ultrasound derived fat fraction (UDFF %) is placed in the liver.” (see caption of Figure 3a). [Image: GAO Figures 3a-c] However, GAO does not explicitly teach that the line from the transducer, the indicator on the liver capsule, or the region of interest in the liver along the line are automatically placed into position. Moreover, GAO does not explicitly teach automatically detecting a location of a liver capsule in an ultrasound image of the liver. In the same field of endeavor, ALI teaches ultrasound imaging techniques for shear-wave elastography. (Abstract). ALI notes that shear-wave elastography can be used to measure or estimate stiffness of the liver, which is one sign of fatty liver disease. 
The techniques include automatically identifying an area for a region of interest within the segmented region, wherein the region of interest corresponds to a region in the liver for performing shear-wave elastography. (Abstract). In ALI, “[t]he region of interest may be automatically located or location may be facilitated through an automatic process.” ([0023]). The ROI can be based on the location of the liver capsule. ([0023]). In one embodiment, an AI model is trained to automatically identify the liver capsule. ([0049]). “For example, a model may be trained to segment a liver capsule or liver, and then the trained model may be included in ultrasound system 100, for example, as part of signal processor 132 or associated memory.” ([0050]). The ROI can then be identified based on the location of the liver capsule. ([0086]). Figure 14 shows an indicator 1108 (horizontal line) over the liver capsule that is automatically positioned over the liver capsule and updated during imaging. (see [0095], [0096] and also [0090]). It would have been obvious to one having ordinary skill in the art to modify the system to automatically detect a location of a liver capsule in an ultrasound image of the liver and automatically place the relevant indicators (i.e., the line from the transducer, the indicator on the liver capsule, and the region of interest in the liver along the line), as taught in ALI. One having ordinary skill in the art would be motivated to make this modification because automatic placement of the relevant indicators expedites the measuring process for the technician. There would have been a reasonable expectation of success because ALI teaches that a system can be capable of automatically detecting the liver capsule and ROI and positioning indicators with respect to them. Claims 10-12 are rejected under 35 U.S.C. 103 as being unpatentable over U.S. Patent Appl. Publ. No. 2018/0289323 A1 to Labyed (hereinafter “LABYED ‘323”) and Ferraioli, Giovanna, et al. 
“Liver fat quantification with ultrasound: depth dependence of attenuation coefficient.” Journal of Ultrasound in Medicine 42.10 (2023): 2247-2255. (hereinafter “FERRAIOLI”) and/or Krishnan KB, Nagaraj N, Singhal N, Thapar S, Yadav K. A two-parameter model for ultrasonic tissue characterization with harmonic imaging. arXiv preprint arXiv:1712.03495. 2017 Dec 10. (hereinafter “KRISHNAN”) as applied to claim 1 above, and further in view of U.S. Patent Appl. Publ. No. 2021/0177373 A1 to Xie et al. (hereinafter referred to as “XIE”). With respect to claim 10, none of LABYED ‘323, FERRAIOLI, and KRISHNAN explicitly teach examining a field of view by a second machine-learned model and outputting guidance to position the transducer to image the liver based on output of the second machine-learned model. In the same field of endeavor, XIE teaches “ultrasound imaging systems and methods for ultrasonically inspecting biological tissue, such as liver and for automatically identifying and acquiring a view suitable for hepatic-renal echo-intensity ratio quantification, using one or more neural networks….” (Abstract). XIE notes that “[u]ltrasound imaging can be used to measure [a quantitative biomarker], but it may be subject to misdiagnoses or classification due to difficulties in achieving the proper frame for measurement purposes.” ([0002]). As such, XIE teaches an intelligent liver scan mode 101 in which “the system may execute one or more sets of instructions for view matching, automated image capture, ROI identification, and echo-intensity ratio quantification.” ([0029]). XIE’s system examines a field of view. “During this process, the system may determine and/or output (e.g., for display to the user) a confidence metric, which is indicative of the live ultrasound image corresponding to a view suitable for H/R ratio quantification….” (Id). This examination may be performed by a machine-learned model. 
“[T]he system may be trained or otherwise configured to recognize whether the image corresponds to a suitable image view, also referred to as target image view. In some embodiments, the view matching sub-process may be performed or enhanced with one or more machine learning image classification models (block 115).” XIE also teaches outputting guidance to position the transducer to image the liver based on output of the second machine-learned model. “In some example, the system may guide the user in acquiring the appropriate view of the tissue. FIG. 4, panels a-c shows additional graphical displays that may be provided during the AI-assisted liver scan.” (emphasis added) ([0049]). It would have been obvious to one having ordinary skill in the art to modify the system to utilize a machine-learned model for directing the user to capture ultrasound images. One having ordinary skill in the art would be motivated to use a machine-learned model to increase the likelihood that the operator will position the probe accurately and avoid misdiagnoses as taught in XIE. There would have been a reasonable expectation of success because XIE demonstrates that such systems can incorporate AI assistance. With respect to claim 11 (depending from claim 10), none of LABYED ‘323, FERRAIOLI, and KRISHNAN explicitly teach that the examining comprises examining for shadows and/or vessels, wherein the guidance reduces the shadows and/or vessels in the field of view. However, FERRAIOLI teaches avoiding vessels when obtaining quality images. “The transducer was positioned in the intercostal space, and measurements were obtained on the best quality image, that is, the one with fewer vessels and the strongest B-mode signal without artifacts.” (p.2248, right column). XIE is consistent with FERRAIOLI. More specifically, XIE teaches providing guidance to the user to locate the ROIs for measurements. ([0031]). 
After describing what features are suitable, XIE warns that the ROIs should not otherwise be “located in a region prone to imaging artifacts (e.g., too close or overlapping the boundary between the tissues, near or overlapping vessels or other non-uniform bodily structures).” It would have been obvious to one having ordinary skill in the art to modify the system to utilize a machine-learned model to examine for shadows and/or vessels and provide guidance that reduces the shadows and/or vessels in the field of view. One having ordinary skill in the art would be motivated to use a machine-learned model to avoid these regions to increase the likelihood that the operator will position the probe accurately and avoid misdiagnoses as taught in XIE. There would have been a reasonable expectation of success because XIE demonstrates that such systems can incorporate AI assistance. With respect to claim 12 (depending from claim 10), none of LABYED ‘323, FERRAIOLI, and KRISHNAN explicitly teach that the examining comprises scoring the field of view for automated placement of the region of interest. NOTE: The recitation “for automated placement of the region of interest” is an intended use that does not require that any structure be automatically placed at a region of interest. However, XIE teaches determining confidence metrics of images in real-time. “During intelligent scan mode, the image data for each acquired frame may be provided to the engine 227 for identification of a suitable view in real-time. As described, the view identification (or view matching) may be performed by a neural network 228, which may include one or any number of stacked, connected or otherwise appropriately arranged networks of artificial neurons. In some examples, the neural network 228 may include a deep convolutional network configured to output, for each input image, a confidence metric (also referred to herein as matching score). 
The confidence metric (or matching score) may provide an indication of a probability or confidence level that the given image corresponds to the desired or target image view.” ([0050]). It would have been obvious to one having ordinary skill in the art to modify the system to score the field of view for automated placement of the region of interest. One having ordinary skill in the art would be motivated to use a machine-learned model to help a user or system identify locations that have a high confidence metric so that the probe will be positioned accurately and avoid misdiagnoses as taught in XIE. There would have been a reasonable expectation of success because XIE demonstrates that such systems can incorporate AI assistance. Claim 13 is rejected under 35 U.S.C. 103 as being unpatentable over U.S. Patent Appl. Publ. No. 2018/0289323 A1 to Labyed (hereinafter “LABYED ‘323”) and Ferraioli, Giovanna, et al. “Liver fat quantification with ultrasound: depth dependence of attenuation coefficient.” Journal of Ultrasound in Medicine 42.10 (2023): 2247-2255. (hereinafter “FERRAIOLI”) and/or Krishnan KB, Nagaraj N, Singhal N, Thapar S, Yadav K. A two-parameter model for ultrasonic tissue characterization with harmonic imaging. arXiv preprint arXiv:1712.03495. 2017 Dec 10. (hereinafter “KRISHNAN”) as applied to claim 1 above, and further in view of U.S. Patent Appl. Publ. No. 2021/0145409 A1 to Labyed (hereinafter referred to as “LABYED ‘409”). With respect to claim 13, none of LABYED ‘323, FERRAIOLI, and KRISHNAN explicitly teach determining the UDFF as a field of UDFF values distributed in a region of interest in the liver, and wherein displaying comprises displaying an ultrasound image with the region of interest coded by the UDFF values. In the same field of endeavor, LABYED ‘409 teaches “[f]or parametric ultrasound imaging with an ultrasound scanner, the values for multiple parameters are determined for tissue of a patient using ultrasound.” (Abstract). “FIG. 
2 shows an example with shear wave speed map 20B, fat fraction map 20C, and inflammation map 20D shown separately. A B-mode image 20A is also shown in one of the quadrants. The shear wave speed map 20B, fat fraction map 20C, and inflammation map 20D are shown as color overlays in the region of interest 22 where the rest of the spatial representation of tissue is a repeat of the B-mode image 20A.” ([0036]). “The image represents the spatial distribution for each of the multiple parameters. Different values for a given parameters may be provided for different locations in the region of interest 22 or across the image.” ([0037]). [Image: LABYED ‘409 Figure 2] Accordingly, LABYED ‘409 teaches a field of UDFF values (i.e., different values for a given parameter) distributed in a region of interest (for different locations in the region of interest 22), and wherein displaying comprises displaying an ultrasound image with the region of interest coded by the UDFF values (see Figure 2 in which the region of interest 22 overlays the B-mode image and has different colors based on the values). It would have been obvious to one having ordinary skill in the art to modify the system to show a field of UDFF values (differentiated by color) within the ROI and with the ROI overlaying the ultrasound image as taught in LABYED ‘409. One having ordinary skill in the art would be motivated to use this image arrangement in order to convey different types of information to the user. There would have been a reasonable expectation of success because LABYED ‘409 demonstrates that such image arrangements can be implemented. RESPONSE TO APPLICANT’S ARGUMENTS Applicant’s arguments with respect to the previous rejections of claims 1-13 under 35 U.S.C. 103 have been fully considered and are persuasive. Therefore, the rejection has been withdrawn. However, upon further consideration, new grounds of rejection are made as set forth above. 
The new grounds of rejection do not rely on any reference applied in the prior rejection of record for any teaching or matter specifically challenged in the argument. Prior Art of Record The prior art made of record and not relied upon is considered pertinent to applicant's disclosure. Kimura, Syunichiro, et al. “Effect of skin capsular distance on controlled attenuation parameter for diagnosing liver steatosis in patients with nonalcoholic fatty liver disease.” Scientific reports 11.1 (2021): 15641. (Year: 2021). KIMURA teaches that the attenuation parameter for another liver quantification method is based upon the skin-to-liver capsule distance. WO 2024/213444 A1 teaches that “layered estimated models” improve the accuracy of quantitative ultrasound of the liver and may benefit from “detecting liver capsules.” ([0022]). Conclusion Any inquiry concerning this communication or earlier communications from the examiner should be directed to JASON P GROSS whose telephone number is (571)272-1386. The examiner can normally be reached Monday-Friday 9:00-5:00CT. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Anne M. Kozak can be reached at (571) 270-5284. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300. Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. 
Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000. /JASON P GROSS/Examiner, Art Unit 3797 /SERKAN AKAR/Primary Examiner, Art Unit 3797

Prosecution Timeline

Apr 29, 2024 — Application Filed
Aug 23, 2025 — Non-Final Rejection (§101, §103, §112)
Dec 01, 2025 — Response Filed
Mar 03, 2026 — Non-Final Rejection (§101, §103, §112) (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12582472 — SYSTEMS FOR DETERMINING SIZE OF KIDNEY STONE — Granted Mar 24, 2026 (2y 5m to grant)
Patent 12514554 — PRE-OPERATIVE ULTRASOUND SCANNING SYSTEM FOR PATIENT LIMB EXTENDING THROUGH A RESERVOIR — Granted Jan 06, 2026 (2y 5m to grant)
Patent 12502157 — ULTRASOUND SYSTEM HAVING A DISPLAY DEVICE WITH DYNAMIC SCROLL MODE FOR B-MODE AND M-MODE IMAGES — Granted Dec 23, 2025 (2y 5m to grant)
Patent 12453602 — ULTRASONIC PUNCTURE GUIDANCE PLANNING SYSTEM BASED ON MULTI-MODAL MEDICAL IMAGE REGISTRATION USING AN ITERATIVE CLOSEST POINT ALGORITHM — Granted Oct 28, 2025 (2y 5m to grant)


Prosecution Projections

Expected OA Rounds: 2-3
Grant Probability: 64%
With Interview: 99% (+62.5%)
Median Time to Grant: 2y 8m
PTA Risk: Moderate
Based on 14 resolved cases by this examiner. Grant probability derived from career allow rate.
