Prosecution Insights
Last updated: April 19, 2026

Application No. 18/336,834
METHODS AND SYSTEMS FOR PHOTOACOUSTIC COMPUTED TOMOGRAPHY OF BLOOD FLOW

Status: Final Rejection (§103)
Filed: Jun 16, 2023
Examiner: FRITH, SEAN A
Art Unit: 3798
Tech Center: 3700 — Mechanical Engineering & Manufacturing
Assignee: California Institute of Technology
OA Round: 4 (Final)

Grant Probability: 60% (Moderate)
Expected OA Rounds: 5-6
Expected Time to Grant: 3y 7m
Grant Probability with Interview: 89%

Examiner Intelligence

Career Allow Rate: 60% (167 granted / 276 resolved; -9.5% vs Tech Center average)
Interview Lift: +28.7% on resolved cases with an interview (strong)
Typical Timeline: 3y 7m average prosecution; 36 applications currently pending
Career History: 312 total applications across all art units

Statute-Specific Performance

§101: 9.0% (-31.0% vs TC avg)
§103: 49.6% (+9.6% vs TC avg)
§102: 15.3% (-24.7% vs TC avg)
§112: 23.9% (-16.1% vs TC avg)

Tech Center averages are estimates. Based on career data from 276 resolved cases.
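The headline figures above can be reproduced from the underlying counts. A small sketch: the with-interview rate is read from the 89% figure shown on the page, and the without-interview baseline is approximated by the career rate (the page does not report it separately), so the computed lift differs slightly from the reported +28.7%.

```python
# Illustrative reconstruction of the dashboard's headline examiner statistics.
# Granted/resolved counts are taken from the page; the with-interview rate is
# the page's 89% figure. The without-interview baseline is an assumption
# (approximated here by the career allow rate).
granted, resolved = 167, 276

career_allow_rate = granted / resolved          # reported as "60%"

with_interview_rate = 0.89                      # from "89% With Interview"
# Interview lift: percentage-point gain in allowance rate when an
# examiner interview occurs (page reports +28.7%).
interview_lift = with_interview_rate - career_allow_rate

print(f"allow rate {career_allow_rate:.1%}, lift +{interview_lift:.1%}")
```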

Office Action (§103)
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Information Disclosure Statement

The information disclosure statement (IDS) was submitted on 9/30/2025. The submission is in compliance with the provisions of 37 CFR 1.97. Accordingly, the information disclosure statement is being considered by the examiner.

Response to Amendment

This action is in response to the remarks filed on 9/30/2025. The amendments filed on 9/30/2025 are entered.

Claim Interpretation

The following is a quotation of 35 U.S.C. 112(f):

(f) Element in Claim for a Combination. – An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.

The following is a quotation of pre-AIA 35 U.S.C. 112, sixth paragraph:

An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.

The claims in this application are given their broadest reasonable interpretation using the plain meaning of the claim language in light of the specification as it would be understood by one of ordinary skill in the art. The broadest reasonable interpretation of a claim element (also commonly referred to as a claim limitation) is limited by the description in the specification when 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is invoked.

As explained in MPEP § 2181, subsection I, claim limitations that meet the following three-prong test will be interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph:

(A) the claim limitation uses the term “means” or “step” or a term used as a substitute for “means” that is a generic placeholder (also called a nonce term or a non-structural term having no specific structural meaning) for performing the claimed function;

(B) the term “means” or “step” or the generic placeholder is modified by functional language, typically, but not always, linked by the transition word “for” (e.g., “means for”) or another linking word or phrase, such as “configured to” or “so that”; and

(C) the term “means” or “step” or the generic placeholder is not modified by sufficient structure, material, or acts for performing the claimed function.

Use of the word “means” (or “step”) in a claim with functional language creates a rebuttable presumption that the claim limitation is to be treated in accordance with 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. That presumption is rebutted when the claim limitation recites sufficient structure, material, or acts to entirely perform the recited function. Conversely, absence of the word “means” (or “step”) in a claim creates a rebuttable presumption that the claim limitation is not to be treated in accordance with 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph; that presumption is rebutted when the claim limitation recites function without reciting sufficient structure, material, or acts to entirely perform the recited function.

Claim limitations in this application that use the word “means” (or “step”) are being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, except as otherwise indicated in an Office action. Conversely, claim limitations in this application that do not use the word “means” (or “step”) are not being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, except as otherwise indicated in an Office action.

This application includes one or more claim limitations that do not use the word “means,” but are nonetheless being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, because the claim limitation(s) uses a generic placeholder that is coupled with functional language without reciting sufficient structure to perform the recited function, and the generic placeholder is not preceded by a structural modifier. Such claim limitation(s) is/are: “scanning mechanism configured to scan” in claim 15. Examiner notes that dependent claim 18 also recites “scanning mechanism” but includes sufficient structure to not be interpreted under 112(f).

Because this/these claim limitation(s) is/are being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, it/they is/are being interpreted to cover the corresponding structure described in the specification as performing the claimed function, and equivalents thereof. If applicant does not intend to have this/these limitation(s) interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, applicant may: (1) amend the claim limitation(s) to avoid it/them being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph (e.g., by reciting sufficient structure to perform the claimed function); or (2) present a sufficient showing that the claim limitation(s) recite(s) sufficient structure to perform the claimed function so as to avoid it/them being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C.
103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1, 13, and 22-23 are rejected under 35 U.S.C. 103 as being unpatentable over Igarashi et al. (U.S. Pub. No. 20190298304) hereinafter Igarashi, in view of Yoshikawa (U.S. Pub. No. 20210022702) hereinafter Yoshikawa, in further view of Norman et al. (U.S. Pub. No. 20210353439) hereinafter Norman.

Regarding claim 1, primary reference Igarashi teaches:

A photoacoustic computed tomography method (abstract), comprising:

reconstructing a sequence of photoacoustic images based on acoustic data detected by an ultrasonic transducer array ([0175]-[0178], teaches to the time series of photoacoustic imaging (sequence of photoacoustic images) based upon image data from the ultrasound diagnostics of laser light optical excitation; Note that this photoacoustic application of the device is further supported by the teachings of the embodiments of the device as disclosed in [0028]-[0164]); and

extract at least one blood component from each of the photoacoustic images ([0175]-[0178], vectors expressing moving of the red blood cells form extracted blood components from each of the time series of photoacoustic images).

Primary reference Igarashi fails to particularly teach:

applying a spatiotemporal filter to extract at least one blood component within lumen of one or more blood vessels from each photoacoustic images to generate corresponding filtered photoacoustic images of the one or more blood vessels

Estimating a blood flow measurement of each pixel of each filtered photoacoustic image

However, the analogous art of Yoshikawa of an ultrasonic wave transmission and movement vector calculation system (abstract) teaches:

applying a spatiotemporal filter to extract at least one blood component within lumen of one or more blood vessels from each acoustic images to generate corresponding filtered acoustic images of the one or more blood vessels ([0044], blood flow within the blood vessel forms a blood component within lumen of one or more blood vessels; [0045], separation filter 44 forms a spatiotemporal filter for extraction of movement components including blood flow; [0046], blood flow vector distribution; [0047]-[0051]; [0054]-[0057], as depicted in figure 2, screen 210 with extracted image 204b, the blood component within the lumens of one or more blood vessels are extracted and generated as a filtered image for display to the user; [0071]-[0074]; [0075], the separation filter 45 includes separation of tissues for a blood flow component using both “a temporal identity and a spatial identity” which forms a spatiotemporal filter as the blood flow component has unique temporal and spatial identities that enable filtering; [0076]-[0081]; [0082], the vectors related to blood flow are separated using the separation filter 45, which forms an extraction of at least one blood component from the acoustic images; [0083]; [0087], “the movement vector component distributions 204a to 204c extracted as described above on the display unit 60 as shown in the screen 210 (see FIG. 2) (step S29).”)

Estimating a blood flow measurement of each pixel of each filtered acoustic image ([0045]; [0046], blood flow vector distribution; [0047]-[0051]; [0071], “the movement amount measurement unit 402 calculates a movement amount of each point (pixel) set vertically and horizontally in an imaging region by using the nine pieces of two-dimensional data acquired at least twice in a time series”; the movement amount at each point forms a blood flow measurement at each pixel when the vectors are calculated for blood flow; [0072]-[0074]; [0075], the separation filter 45 includes separation of tissues for a blood flow component using both “a temporal identity and a spatial identity” which forms a spatiotemporal filter as the blood flow component has unique temporal and spatial identities that enable filtering; [0076]-[0081]; [0082], the vectors related to blood flow are separated using the separation filter 45, which forms an extraction of at least one blood component from the acoustic images, and “basically, the separation filter (A) 45 extracts the point of the high rank eigenvalue component from the movement vector distribution,” which forms a blood flow measurement at each pixel (point); [0083]; [0087], movement vector distributions form the estimation of blood flow measurement at each pixel)

It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the blood flow photoacoustic measurement method of Igarashi to incorporate the spatiotemporal filter extraction of blood flow measurements at each pixel as taught by Yoshikawa because the varying spatial and temporal properties of movements of blood flow, tissue regions, and periodic physiological movements enable filterable distinctions between each movement type (Yoshikawa, [0045]-[0051]; [0057]).
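Yoshikawa's separation of a “high rank eigenvalue component” from the movement-vector distribution is an eigen-decomposition approach to clutter filtering. A minimal numerical sketch of the same general idea, using an SVD over a Casorati (pixels × frames) matrix, follows; the frame-stack shape, the rank cutoff, and the synthetic data are all illustrative assumptions, not details from the cited references.

```python
import numpy as np

def svd_clutter_filter(frames, tissue_rank):
    """Separate a fluctuating blood-flow component from quasi-static tissue.

    frames: (n_frames, ny, nx) stack of reconstructed images.
    tissue_rank: number of dominant singular components attributed to
    tissue/clutter (an assumed tuning parameter).
    """
    nt, ny, nx = frames.shape
    # Casorati matrix: one row per pixel, one column per frame.
    casorati = frames.reshape(nt, ny * nx).T
    u, s, vh = np.linalg.svd(casorati, full_matrices=False)
    s = s.copy()
    s[:tissue_rank] = 0.0          # discard high-energy tissue components
    blood = (u * s) @ vh           # keep the residual (blood) signal
    return blood.T.reshape(nt, ny, nx)

# Synthetic check: uniform static "tissue" plus one temporally
# fluctuating "blood" pixel at (4, 4).
rng = np.random.default_rng(0)
stack = np.ones((50, 8, 8))
stack[:, 4, 4] += rng.standard_normal(50)
filtered = svd_clutter_filter(stack, tissue_rank=1)
power = (filtered ** 2).mean(axis=0)   # per-pixel power of filtered signal
```

Per-pixel statistics of the filtered residual (here the mean squared amplitude, `power`) then play the role of a blood flow measurement estimated at each pixel.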
This provides for higher quality extraction of blood components, which leads to additional diagnosis information for features such as micro vessels and blood flow dynamics, leading to improved clinical diagnostic outcomes.

Primary reference Igarashi further fails to teach:

Applying a spatiotemporal filter to the sequence of photoacoustic images to extract at least one blood component within lumen of one or more blood vessels from each photoacoustic image in the sequence of photoacoustic images to generate a corresponding sequence of filtered photoacoustic images of the one or more blood vessels

However, the analogous art of Norman of an ultrasound brain imaging and processing system (abstract) teaches:

Applying a spatiotemporal filter to the sequence of acoustic images to extract at least one blood component within lumen of one or more blood vessels from each acoustic image in the sequence of acoustic images to generate a corresponding sequence of filtered acoustic images of the one or more blood vessels ([0186], spatiotemporal filter to separate blood echoes from tissue; [0189], “Singular value decomposition (SVD) to discriminate red blood cell motion from tissue motion and extracted the Doppler signal in each ensemble of 250 coherently compounded frames. The resulting images were then stored in a 3D array of 2D images in time series”; the red blood cell motion is extracted across time series frames, which forms a sequence of acoustic images, and the stored resulting images in the 3D array forming a time series are the generated corresponding sequence of filtered acoustic images of the one or more blood vessels; see also [0064], Doppler imaging of blood flow; [0099], “Pre-processing the acquired images further includes performing applying a filtering process to the plurality of ultrasound images to separate tissue movement from red blood cell movement. In one example, a spatiotemporal clutter filtering based on single value decomposition may be employed”; [0117], blood cell signals; [0205]).

It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the blood flow photoacoustic measurement method of Igarashi and Yoshikawa to incorporate the extraction of a blood component over a sequence of images and generation of a sequence of filtered images as taught by Norman because hemodynamic activity over time can provide additional insight into the activation of regions of interest, leading to additional diagnostic information of a patient’s target region of interest (Norman, [0099]-[0100]). This leads to improved estimations of blood flow change over time during a real-time procedure, and provides more efficient clinical diagnostics.

Regarding claim 13, primary reference Igarashi teaches:

A non-transitory computer readable media for generating one or more blood flow maps from acoustic data detected by an ultrasonic transducer array, the non-transitory computer readable media, when read by one or more processors (abstract), is configured to perform one or more operations comprising:

reconstructing a sequence of photoacoustic images based on acoustic data detected by the ultrasonic transducer array ([0175]-[0178], teaches to the time series of photoacoustic imaging (sequence of photoacoustic images) based upon image data from the ultrasound diagnostics of laser light optical excitation; Note that this photoacoustic application of the device is further supported by the teachings of the embodiments of the device as disclosed in [0028]-[0164]); and

extract at least one blood component from each photoacoustic image ([0175]-[0178], vectors expressing moving of the red blood cells form extracted blood components from each of the time series of photoacoustic images).

Primary reference Igarashi fails to particularly teach:

applying a spatiotemporal filter
to extract at least one blood component within lumen of one or more blood vessels from each photoacoustic images to generate corresponding filtered photoacoustic images of the one or more blood vessels

Estimating a blood flow measurement of each pixel of each filtered photoacoustic image

However, the analogous art of Yoshikawa of an ultrasonic wave transmission and movement vector calculation system (abstract) teaches:

applying a spatiotemporal filter to extract at least one blood component within lumen of one or more blood vessels from each acoustic images to generate corresponding filtered acoustic images of the one or more blood vessels ([0044], blood flow within the blood vessel forms a blood component within lumen of one or more blood vessels; [0045], separation filter 44 forms a spatiotemporal filter for extraction of movement components including blood flow; [0046], blood flow vector distribution; [0047]-[0051]; [0054]-[0057], as depicted in figure 2, screen 210 with extracted image 204b, the blood component within the lumens of one or more blood vessels are extracted and generated as a filtered image for display to the user; [0071]-[0074]; [0075], the separation filter 45 includes separation of tissues for a blood flow component using both “a temporal identity and a spatial identity” which forms a spatiotemporal filter as the blood flow component has unique temporal and spatial identities that enable filtering; [0076]-[0081]; [0082], the vectors related to blood flow are separated using the separation filter 45, which forms an extraction of at least one blood component from the acoustic images; [0083]; [0087], “the movement vector component distributions 204a to 204c extracted as described above on the display unit 60 as shown in the screen 210 (see FIG. 2) (step S29).”)

Estimating a blood flow measurement of each pixel of each filtered acoustic image ([0045]; [0046], blood flow vector distribution; [0047]-[0051]; [0071], “the movement amount measurement unit 402 calculates a movement amount of each point (pixel) set vertically and horizontally in an imaging region by using the nine pieces of two-dimensional data acquired at least twice in a time series”; the movement amount at each point forms a blood flow measurement at each pixel when the vectors are calculated for blood flow; [0072]-[0074]; [0075], the separation filter 45 includes separation of tissues for a blood flow component using both “a temporal identity and a spatial identity” which forms a spatiotemporal filter as the blood flow component has unique temporal and spatial identities that enable filtering; [0076]-[0081]; [0082], the vectors related to blood flow are separated using the separation filter 45, which forms an extraction of at least one blood component from the acoustic images, and “basically, the separation filter (A) 45 extracts the point of the high rank eigenvalue component from the movement vector distribution,” which forms a blood flow measurement at each pixel (point); [0083]; [0087], movement vector distributions form the estimation of blood flow measurement at each pixel)

It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the blood flow photoacoustic measurement method of Igarashi to incorporate the spatiotemporal filter extraction of blood flow measurements at each pixel as taught by Yoshikawa because the varying spatial and temporal properties of movements of blood flow, tissue regions, and periodic physiological movements enable filterable distinctions between each movement type (Yoshikawa, [0045]-[0051]; [0057]). This provides for higher quality extraction of blood components, which leads to additional diagnosis information for features such as micro vessels and blood flow dynamics, leading to improved clinical diagnostic outcomes.

Primary reference Igarashi further fails to teach:

Applying a spatiotemporal filter to the sequence of photoacoustic images to extract at least one blood component within lumen of one or more blood vessels from each photoacoustic image in the sequence of photoacoustic images to generate a corresponding sequence of filtered photoacoustic images of the one or more blood vessels

However, the analogous art of Norman of an ultrasound brain imaging and processing system (abstract) teaches:

Applying a spatiotemporal filter to the sequence of acoustic images to extract at least one blood component within lumen of one or more blood vessels from each acoustic image in the sequence of acoustic images to generate a corresponding sequence of filtered acoustic images of the one or more blood vessels ([0186], spatiotemporal filter to separate blood echoes from tissue; [0189], “Singular value decomposition (SVD) to discriminate red blood cell motion from tissue motion and extracted the Doppler signal in each ensemble of 250 coherently compounded frames. The resulting images were then stored in a 3D array of 2D images in time series”; the red blood cell motion is extracted across time series frames, which forms a sequence of acoustic images, and the stored resulting images in the 3D array forming a time series are the generated corresponding sequence of filtered acoustic images of the one or more blood vessels; see also [0064], Doppler imaging of blood flow; [0099], “Pre-processing the acquired images further includes performing applying a filtering process to the plurality of ultrasound images to separate tissue movement from red blood cell movement. In one example, a spatiotemporal clutter filtering based on single value decomposition may be employed”; [0117], blood cell signals; [0205]).

It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the blood flow photoacoustic measurement method of Igarashi and Yoshikawa to incorporate the extraction of a blood component over a sequence of images and generation of a sequence of filtered images as taught by Norman because hemodynamic activity over time can provide additional insight into the activation of regions of interest, leading to additional diagnostic information of a patient’s target region of interest (Norman, [0099]-[0100]). This leads to improved estimations of blood flow change over time during a real-time procedure, and provides more efficient clinical diagnostics.

Regarding claim 22, the combined references of Igarashi, Yoshikawa, and Norman teach all of the limitations of claim 1. Primary reference Igarashi further teaches:

wherein the blood flow measurements include a blood flow speed and a blood flow direction within a lumen of at least one of the one or more blood vessels being imaged ([0084] and [0086], vector calculation of blood flow based measurements include both a velocity measurement and a direction of the movement, which forms a blood flow speed and direction within the imaged lumen of interest; [0088]-[0089]; [0122], directions and moving velocity values of contrast agent flow, which corresponds with blood flow measurements in a vessel of interest; [0172]-[0175], velocity values and moving directions are determined; [0175]-[0178], teaches to the use of red blood cells for calculations of measurements).

Regarding claim 23, the combined references of Igarashi, Yoshikawa, and Norman teach all of the limitations of claim 1. Primary reference Igarashi further fails to teach:

wherein the at least one blood component comprises a blood cell.
However, the analogous art of Norman of an ultrasound brain imaging and processing system (abstract) teaches:

wherein the at least one blood component comprises a blood cell ([0064], movement of red blood cells; [0099], red blood cell movement; [0117], red blood cell signals; [0189], red blood cell motion)

It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the blood flow photoacoustic measurement method of Igarashi, Yoshikawa, and Norman to incorporate the extraction of a blood component of red blood cells as taught by Norman because hemodynamic activity over time can provide additional insight into the activation of regions of interest, leading to additional diagnostic information of a patient’s target region of interest (Norman, [0099]-[0100]).

Claims 2-3 are rejected under 35 U.S.C. 103 as being unpatentable over Igarashi, in view of Yoshikawa, in further view of Norman as applied to claim 1 above, and further in view of Irisawa et al. (U.S. Pub. No. 20190008484) hereinafter Irisawa.

Regarding claim 2, the combined references of Igarashi, Yoshikawa, and Norman teach all of the limitations of claim 1. Primary reference Igarashi further fails to teach:

further comprising generating one or more blood flow maps from the blood flow measurements

However, the analogous art of Irisawa of an acoustic wave imaging system for displaying tissue regions of interest (abstract) teaches:

further comprising generating one or more blood flow maps from the blood flow measurements ([0093], color flow map of blood flow; [0161], color flow map of blood flow in the region of interest).

It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the blood flow photoacoustic measurement method of Igarashi, Yoshikawa, and Norman to incorporate the blood flow mapping as taught by Irisawa because it provides the user with visual depictions of the blood measurements such as velocity, which enables intuitive understanding of tissue diagnostics (Irisawa, [0093]; [0161]). This leads to improved clinical outcomes.

Regarding claim 3, the combined references of Igarashi, Yoshikawa, Norman, and Irisawa teach all of the limitations of claim 2. Primary reference Igarashi further fails to teach:

wherein the one or more blood flow maps comprise one or more of a vector map, a color Doppler map, or a power Doppler map

However, the analogous art of Irisawa of an acoustic wave imaging system for displaying tissue regions of interest (abstract) teaches:

wherein the one or more blood flow maps comprise one or more of a vector map, a color Doppler map, or a power Doppler map ([0093], color flow map of blood flow is a color Doppler map; [0161], color flow map of blood flow in the region of interest is a color Doppler map).

It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the blood flow photoacoustic measurement method of Igarashi, Yoshikawa, Norman, and Irisawa to incorporate the color Doppler blood flow mapping as taught by Irisawa because it provides the user with visual depictions of the blood measurements such as velocity, which enables intuitive understanding of tissue diagnostics (Irisawa, [0093]; [0161]). This leads to improved clinical outcomes.

Claims 4 and 6 are rejected under 35 U.S.C. 103 as being unpatentable over Igarashi, in view of Yoshikawa, in view of Norman, in further view of Irisawa as applied to claim 2 above, and further in view of Zwirn (U.S. Pub. No. 20110077526) hereinafter Zwirn.
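As background for the color-flow (“color Doppler”) maps discussed above, a per-pixel velocity field can be rendered as a red/blue overlay. The colormap convention and the aliasing limit below are generic display assumptions for illustration, not the scheme of Irisawa or any other cited reference.

```python
import numpy as np

def color_doppler_map(velocity, v_max):
    """Map per-pixel axial velocities to an RGB color-flow image.

    Assumed display convention (typical of color Doppler): flow toward
    the transducer in red, flow away in blue, brightness proportional
    to |velocity| up to the aliasing limit v_max.
    """
    v = np.clip(velocity / v_max, -1.0, 1.0)
    rgb = np.zeros(velocity.shape + (3,))
    rgb[..., 0] = np.where(v > 0, v, 0.0)    # red channel: toward probe
    rgb[..., 2] = np.where(v < 0, -v, 0.0)   # blue channel: away from probe
    return rgb

# Toy 2x2 velocity field (m/s) with an assumed 0.1 m/s aliasing limit.
vel = np.array([[0.05, -0.05],
                [0.10,  0.00]])
img = color_doppler_map(vel, v_max=0.1)
```

Zero-velocity pixels stay black, which is where a noise-floor threshold of the kind discussed for claim 5 would also be applied in practice.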
Regarding claim 4, the combined references of Igarashi, Yoshikawa, Norman, and Irisawa teach all of the limitations of claim 2. Primary reference Igarashi further fails to teach:

wherein estimating the blood flow measurement at each pixel comprises: applying logarithmic compression to each filtered photoacoustic image; and estimating a blood velocity value at each pixel of each filtered photoacoustic image

However, the analogous art of Zwirn of an ultrasound assembly for processing ultrasound signals of a patient (abstract) teaches:

wherein estimating blood flow measurements comprises: applying logarithmic compression to each image data ([0268]); and estimating a blood velocity value at each pixel of each image ([0360]-[0365], blood flow velocity is calculated at each pixel for color Doppler imaging).

It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the blood flow photoacoustic measurement method of Igarashi, Yoshikawa, Norman, and Irisawa to incorporate the logarithmic compression as taught by Zwirn because it reduces the dynamic range, which leads to improved signal quality (Zwirn, [0268]). It would likewise have been obvious to incorporate the blood velocity measurement at each pixel as taught by Zwirn because it provides color visual depiction of fluid flow at tissue areas of interest over time (Zwirn, [0360]-[0365]). This provides additional visual diagnostic information to a user, leading to improved clinical outcomes.

Regarding claim 6, the combined references of Igarashi, Yoshikawa, Norman, and Irisawa teach all of the limitations of claim 4. Primary reference Igarashi further teaches:

further comprising generating a vector map from the one or more estimated blood velocity values ([0074]; [0083]-[0089], vector calculation and vector imaging display forms a vector map from the blood velocity values; [0092]; [0103]; [0108], vectors; [0116]-[0146], further teach to vector calculation and vector mapping; [0177]).

Claim 5 is rejected under 35 U.S.C. 103 as being unpatentable over Igarashi, in view of Yoshikawa, in view of Norman, in further view of Irisawa as applied to claim 4 above, and further in view of Tamura (U.S. Pub. No. 20090112096) hereinafter Tamura.

Regarding claim 5, the combined references of Igarashi, Yoshikawa, Norman, and Irisawa teach all of the limitations of claim 4. Primary reference Igarashi further fails to teach:

further comprising applying a noise floor filter to remove one or more estimated blood velocity values

However, the analogous art of Tamura of a Doppler spectrum analysis method for ultrasound imaging (abstract) teaches:

further comprising applying a noise floor filter to remove one or more estimated blood velocity values ([0008]-[0009]; [0034], noise floor; [0039]-[0046]).

It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the blood flow photoacoustic measurement method of Igarashi, Yoshikawa, Norman, and Irisawa to incorporate the noise floor filter as taught by Tamura because it enables further control of the gain of the system based upon the noise level thresholds of the device (Tamura, [0008]-[0009]). By dynamically adjusting the gain according to a noise floor filter, this provides higher quality output data for the system, leading to improved images.

Claims 7-8 are rejected under 35 U.S.C. 103 as being unpatentable over Igarashi, in view of Yoshikawa, in view of Norman, in further view of Irisawa as applied to claim 2 above, and further in view of Xie et al. (U.S. Pub. No.
20210145399) hereinafter Xie.

Regarding claim 7, the combined references of Igarashi, Yoshikawa, Norman, and Irisawa teach all of the limitations of claim 2. Primary reference Igarashi further fails to teach:

wherein estimating the blood flow measurement comprises determining an axial velocity value at each pixel of each filtered photoacoustic image

However, the analogous art of Xie of a visualization and quantification system for ultrasound imaging data of fluid flow (abstract) teaches:

wherein estimating the blood flow measurement comprises determining an axial velocity value at each pixel of each image ([0032], axial velocity at each pixel of the region provides for a color Doppler map of the measured blood flow region).

It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the blood flow photoacoustic measurement method of Igarashi, Yoshikawa, Norman, and Irisawa to incorporate the axial velocity measurements at each pixel for a color Doppler map as taught by Xie because it provides for a more accurate visualization of parameters associated with the flow, which leads to improved clinical diagnostics (Xie, [0032]).

Regarding claim 8, the combined references of Igarashi, Yoshikawa, Norman, Irisawa, and Xie teach all of the limitations of claim 7. Primary reference Igarashi further fails to teach:

further comprising constructing a color Doppler map from the axial velocity value determined at each pixel of each filtered photoacoustic image

However, the analogous art of Xie of a visualization and quantification system for ultrasound imaging data of fluid flow (abstract) teaches:

further comprising constructing a color Doppler map from the axial velocity value determined at each pixel of each image ([0032], axial velocity at each pixel of the region provides for a color Doppler map of the measured blood flow region).
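For orientation, “axial velocity” is the along-beam velocity component; in pulsed-wave Doppler it follows from the measured Doppler shift via the standard relation v = c·f_d / (2·f_0). This is textbook background, not a formula taken from Xie; the sound speed and example frequencies below are conventional assumed values.

```python
def axial_velocity(f_doppler, f0, c=1540.0):
    """Axial (along-beam) velocity from a measured Doppler shift.

    Standard pulsed-Doppler relation v = c * f_d / (2 * f0), with c the
    assumed speed of sound in soft tissue (m/s).
    """
    return c * f_doppler / (2.0 * f0)

# Example: a 1 kHz Doppler shift at a 5 MHz center frequency.
v = axial_velocity(f_doppler=1e3, f0=5e6)   # 0.154 m/s
```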
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the blood flow photoacoustic measurement method of Igarashi, Yoshikawa, Norman, Irisawa, and Xie to incorporate the axial velocity measurements at each pixel for a color Doppler map as taught by Xie because it provides for a more accurate visualization of parameters associated with the flow which leads to improved clinical diagnostics (Xie, [0032]). Claims 9-10 are rejected under 35 U.S.C. 103 as being unpatentable over Igarashi, in view of Yoshikawa, in view of Norman, in further view of Irisawa as applied to claim 2 above, and further in view of Eriksen et al. (U.S. Pat. No. 6409671) hereinafter Eriksen. Regarding claim 9, the combined references of Igarashi, Yoshikawa, Norman, and Irisawa teach all of the limitations of claim 2. Primary reference Igarashi further fails to teach: wherein estimating the blood flow measurement comprises determining a mean intensity value at each pixel of each filtered photoacoustic image However, the analogous art of Eriksen of a vasculated tissue ultrasound imaging procedure (abstract) teaches: wherein estimating the blood flow measurement comprises determining a mean intensity value at each pixel of each image (col 8, lines 19-52, “Intensities of the power Doppler signals were digitised from 256 successive frames of the videotape, thereby generating a two dimensional Doppler intensity image in which each pixel derived from a sequence of 256 time samples with a time step of 40 ms. The average intensity levels for each time series were subtracted, a Welch window was used on the data set along the time axis and a fast Fourier transform was performed to generate the power spectrum and phase spectrum of the original data. 
A new image was created by letting power at the heart frequency represent intensity and assigning a colour map to the phase so that the colour of the image was determined by the phase in each pixel.”). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the blood flow photoacoustic measurement method of Igarashi, Yoshikawa, Norman, and Irisawa to incorporate the mean intensity measurement at each pixel for constructing a power Doppler color map as taught by Eriksen because it enables visualization and determination across cardiac phases for arterial structures, leading to more efficient diagnosis of arterial diseases (Eriksen, col 8, lines 19-52). Regarding claim 10, the combined references of Igarashi, Yoshikawa, Norman, Irisawa, and Eriksen teach all of the limitations of claim 9. Primary reference Igarashi further fails to teach: further comprising constructing a power Doppler map from the mean intensity value at each pixel of each filtered photoacoustic image However, the analogous art of Eriksen of a vasculated tissue ultrasound imaging procedure (abstract) teaches: further comprising constructing a power Doppler map from the mean intensity value at each pixel of each image (col 8, lines 19-52, “Intensities of the power Doppler signals were digitised from 256 successive frames of the videotape, thereby generating a two dimensional Doppler intensity image in which each pixel derived from a sequence of 256 time samples with a time step of 40 ms. The average intensity levels for each time series were subtracted, a Welch window was used on the data set along the time axis and a fast Fourier transform was performed to generate the power spectrum and phase spectrum of the original data. 
A new image was created by letting power at the heart frequency represent intensity and assigning a colour map to the phase so that the colour of the image was determined by the phase in each pixel.”). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the blood flow photoacoustic measurement method of Igarashi, Yoshikawa, Norman, Irisawa, and Eriksen to incorporate the mean intensity measurement at each pixel for constructing a power Doppler color map as taught by Eriksen because it enables visualization and determination across cardiac phases for arterial structures, leading to more efficient diagnosis of arterial diseases (Eriksen, col 8, lines 19-52). Claim 11 is rejected under 35 U.S.C. 103 as being unpatentable over Igarashi, in view of Yoshikawa, in further view of Norman as applied to claim 1 above, and further in view of Zharov et al. (U.S. Pub. No. 20120065490) hereinafter Zharov. Regarding claim 11, the combined references of Igarashi, Yoshikawa, and Norman teach all of the limitations of claim 1. Primary reference Igarashi further fails to teach: wherein the one or more blood vessels are being imaged at a depth of (i) more than 5 mm (ii) up to 1 cm However, the analogous art of Zharov of a method and system for measurement of vessels within a living subject using photoacoustic imaging (abstract) teaches: wherein the one or more blood vessels are being imaged at a depth of (i) more than 5 mm (ii) up to 1 cm ([0078], the range of depth is more than 5 mm and overlaps the claimed range). 
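For context, the per-pixel mean-intensity mapping Eriksen is cited for in claims 9-10 amounts to averaging signal power at each pixel across the sequence of filtered frames. The sketch below is illustrative only, not Eriksen's disclosed processing chain; the (T, H, W) frame-stack layout is an assumption.

```python
import numpy as np

def power_doppler_map(filtered_frames: np.ndarray) -> np.ndarray:
    """Mean signal power at each pixel over a frame sequence.

    filtered_frames: blood-component images, shape (T, H, W);
    works for real or complex data. Returns an (H, W) map.
    """
    return np.mean(np.abs(filtered_frames) ** 2, axis=0)
```

A power Doppler map built this way encodes signal strength (presence of moving blood) rather than velocity, which is why it is distinct from the color Doppler mapping addressed in claims 7-8.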
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the blood flow photoacoustic measurement method of Igarashi, Yoshikawa, and Norman to incorporate the measurement of a vessel at a depth of over the range of 5 mm to 1 cm as taught by Zharov because it provides for adequate imaging at depths beneath the surface of the skin at which diseased vessels of interest would be located (see Zharov, [0078]). This provides for a wider range of regions of interest for which the device may provide accurate diagnostics, leading to improved clinical versatility. Claims 14 and 19 are rejected under 35 U.S.C. 103 as being unpatentable over Igarashi, in view of Yoshikawa, in view of Norman, in further view of Zwirn. Regarding claim 14, primary reference Igarashi teaches: A photoacoustic computed tomography system (abstract), comprising: one or more light sources ([0175]-[0178], teaches to the time series of photoacoustic imaging (sequence of photoacoustic images) based upon image data from the ultrasound diagnostics of laser light optical excitation (laser light being one or more light sources); Note that this photoacoustic application of the device is further supported by the teachings of the embodiments of the device as disclosed in [0028]-[0164]); an ultrasonic transducer array having an axis ([0175]-[0178], teaches to the time series of photoacoustic imaging (sequence of photoacoustic images) based upon image data from the ultrasound diagnostics of laser light optical excitation, with the ultrasound device including an array of transducers as in the teachings of the embodiments of the device as disclosed in [0028]-[0164] that support the photoacoustic embodiment); and a computing system configured to execute instructions to: reconstruct a sequence of photoacoustic images based on acoustic data detected by the ultrasonic transducer array ([0175]-[0178], teaches to the time series of photoacoustic imaging 
(sequence of photoacoustic images) based upon image data from the ultrasound diagnostics of laser light optical excitation; Note that this photoacoustic application of the device is further supported by the teachings of the embodiments of the device as disclosed in [0028]-[0164]); extract at least one blood component from each of the photoacoustic images ([0175]-[0178], vectors expressing moving of the red blood cells form extracted blood components from each of the time series of photoacoustic images); and Primary reference Igarashi fails to particularly teach: applying a spatiotemporal filter to extract at least one blood component within lumen of one or more blood vessels from each photoacoustic image to generate corresponding filtered photoacoustic images of the one or more blood vessels Estimate a blood flow measurement of each pixel of each filtered photoacoustic image However, the analogous art of Yoshikawa of an ultrasonic wave transmission and movement vector calculation system (abstract) teaches: applying a spatiotemporal filter to extract at least one blood component within lumen of one or more blood vessels from each acoustic image to generate corresponding filtered acoustic images of the one or more blood vessels ([0044], blood flow within the blood vessel forms a blood component within lumen of one or more blood vessels; [0045], separation filter 44 forms a spatiotemporal filter for extraction of movement components including blood flow; [0046], blood flow vector distribution; [0047]-[0051]; [0054]-[0057], as depicted in figure 2, screen 210 with extracted image 204b, the blood component within the lumens of one or more blood vessels are extracted and generated as a filtered image for display to the user; [0071]-[0074]; [0075], the separation filter 45 includes separation of tissues for a blood flow component using both “a temporal identity and a spatial identity” which forms a spatiotemporal filter as the blood flow component has unique temporal 
and spatial identities that enable filtering; [0076]-[0081]; [0082], the vectors related to blood flow are separated using the separation filter 45, which forms an extraction of at least one blood component from the acoustic images; [0083]; [0087], “the movement vector component distributions 204a to 204c extracted as described above on the display unit 60 as shown in the screen 210 (see FIG. 2) (step S29).”) Estimate a blood flow measurement of each pixel of each filtered acoustic image ([0045]; [0046], blood flow vector distribution; [0047]-[0051]; [0071], “the movement amount measurement unit 402 calculates a movement amount of each point (pixel) set vertically and horizontally in an imaging region by using the nine pieces of two-dimensional data acquired at least twice in a time series” the movement amount at each point forms a blood flow measurement at each pixel, when the vectors are calculated for blood flow; [0072]-[0074]; [0075], the separation filter 45 includes separation of tissues for a blood flow component using both “a temporal identity and a spatial identity” which forms a spatiotemporal filter as the blood flow component has unique temporal and spatial identities that enable filtering; [0076]-[0081]; [0082], the vectors related to blood flow are separated using the separation filter 45, which forms an extraction of at least one blood component from the acoustic images and “basically, the separation filter (A) 45 extracts the point of the high rank eigenvalue component from the movement vector distribution” which forms a blood flow measurement at each pixel (point); [0083]; [0087], movement vector distributions forms the estimation of blood flow measurement at each pixel) It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the blood flow photoacoustic measurement method of Igarashi to incorporate the spatiotemporal filter extraction of blood flow measurements at 
each pixel as taught by Yoshikawa because the varying spatial and temporal properties of movements of blood flow, tissue regions, and periodic physiological movements enable filterable distinctions between each movement type (Yoshikawa, [0045]-[0051]; [0057]). This provides for higher quality extraction of blood components which leads to additional diagnosis information of features such as micro vessels and blood flow dynamics, leading to improved clinical diagnostic outcomes. Primary reference Igarashi further fails to teach: Applying a spatiotemporal filter to the sequence of photoacoustic images to extract at least one blood component within lumen of one or more blood vessels from each photoacoustic image in the sequence of photoacoustic images to generate a corresponding sequence of filtered photoacoustic images of the one or more blood vessels However, the analogous art of Norman of an ultrasound brain imaging and processing system (abstract) teaches: Applying a spatiotemporal filter to the sequence of acoustic images to extract at least one blood component within lumen of one or more blood vessels from each acoustic image in the sequence of acoustic images to generate a corresponding sequence of filtered acoustic images of the one or more blood vessels ([0186], spatiotemporal filter to separate blood echoes from tissue; [0189], “Singular value decomposition (SVD) to discriminate red blood cell motion from tissue motion and extracted the Doppler signal in each ensemble of 250 coherently compounded frames. The resulting images were then stored in a 3D array of 2D images in time series”. 
The red blood cell motion is extracted across time series frames, which forms a sequence of acoustic images, and the stored resulting images in the 3D array forming a time series is the generated corresponding sequence of filtered acoustic images of the one or more blood vessels; see also [0064], Doppler imaging of blood flow; [0099], “Pre-processing the acquired images further includes performing applying a filtering process to the plurality of ultrasound images to separate tissue movement from red blood cell movement. In one example, a spatiotemporal clutter filtering based on single value decomposition may be employed”; [0117], blood cell signals; [0205]). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the blood flow photoacoustic measurement method of Igarashi and Yoshikawa to incorporate the extraction of a blood component over a sequence of images and generation of a sequence of filtered images as taught by Norman because hemodynamic activity over time can provide additional insight into the activation of regions of interest, leading to additional diagnostic information of a patient’s target region of interest (Norman, [0099]-[0100]). This leads to improved estimations of blood flow change over time, during a real-time procedure, and provides more efficient clinical diagnostics. 
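The SVD-based spatiotemporal clutter filtering Norman is cited for (separating slowly varying, spatially coherent tissue echoes from blood signal) is a well-known technique that can be sketched as follows. This is a generic illustration, not Norman's actual implementation; the tissue-rank cutoff `n_tissue` is a hypothetical parameter that real systems typically choose adaptively.

```python
import numpy as np

def svd_clutter_filter(frames: np.ndarray, n_tissue: int = 2) -> np.ndarray:
    """Separate blood signal from low-rank tissue clutter.

    frames: stack of reconstructed images, shape (T, H, W).
    n_tissue: number of leading singular components assumed to
              represent tissue motion (illustrative cutoff).
    Returns the blood-component image stack, same shape.
    """
    T, H, W = frames.shape
    # Casorati matrix: one row per pixel, one column per frame.
    casorati = frames.reshape(T, H * W).T            # (H*W, T)
    U, s, Vt = np.linalg.svd(casorati, full_matrices=False)
    # Zero the leading components (high spatiotemporal coherence = tissue).
    s_blood = s.copy()
    s_blood[:n_tissue] = 0.0
    blood = (U * s_blood) @ Vt                       # (H*W, T)
    return blood.T.reshape(T, H, W)
```

Because static or slowly moving tissue concentrates in the largest singular components, zeroing those components leaves mainly the fast, spatially incoherent blood signal, which is the separation principle the rejection attributes to Norman's paragraph [0189].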
Primary reference Igarashi fails to particularly teach: One or more optical elements in optical communication with the one or more light sources, the one or more optical elements configured to propagate light from the one or more light sources to an object being imaged during operation However, the analogous art of Zwirn of an ultrasound assembly for processing ultrasound signals of a patient (abstract) teaches: One or more optical elements in optical communication with the one or more light sources, the one or more optical elements configured to propagate light from the one or more light sources to an object being imaged during operation ([0383]; [0390], “The sources of light may include, for example, laser relayed by optic fibers. Lenses may optionally be used to increase the laser beams' coverage area.”) It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the blood flow photoacoustic measurement method of Igarashi, Yoshikawa, and Norman to incorporate the optical elements for directing the light to an object under imaging as taught by Zwirn because it provides for efficient relaying of light to a target of interest and increasing of a beams’ coverage area (Zwirn, [0390]). This increases the accuracy and quality of light deposited to a region of interest, leading to improved imaging quality. Regarding claim 19, the combined references of Igarashi, Yoshikawa, Norman, and Zwirn teach all of the limitations of claim 14. 
Primary reference Igarashi further fails to teach: wherein the one or more optical elements comprises one or more of a fiber optic strand, a beam steering device, a beam-splitter, an optical fiber, a relay, or a beam combiner However, the analogous art of Zwirn of an ultrasound assembly for processing ultrasound signals of a patient (abstract) teaches: wherein the one or more optical elements comprises one or more of a fiber optic strand, a beam steering device, a beam-splitter, an optical fiber, a relay, or a beam combiner ([0383]; [0390], “The sources of light may include, for example, laser relayed by optic fibers”). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the blood flow photoacoustic measurement method of Igarashi, Yoshikawa, Norman, and Zwirn to incorporate the optical elements of an optical fiber for directing the light to an object under imaging as taught by Zwirn because it provides for efficient relaying of light to a target of interest (Zwirn, [0390]). This increases the accuracy and quality of light deposited to a region of interest, leading to improved imaging quality. Claims 15 and 18 are rejected under 35 U.S.C. 103 as being unpatentable over Igarashi, in view of Yoshikawa, in view of Norman, in further view of Zwirn as applied to claim 14 above, and further in view of Rincker et al. (U.S. Pub. No. 20150297176) hereinafter Rincker. Regarding claim 15, the combined references of Igarashi, Yoshikawa, Norman, and Zwirn teach all of the limitations of claim 14. 
Primary reference Igarashi further fails to teach: further comprising a scanning mechanism configured to scan the ultrasonic transducer array along the axis However, the analogous art of Rincker of a head frame for alignment of an ultrasound probe at particular regions of interest of a patient (abstract) teaches: further comprising a scanning mechanism configured to scan the ultrasonic transducer array along the axis ([0035], motorized assembly 408; [0037], teaches to use with photoacoustic systems as taught by Igarashi in the combined invention; The limitation of “scanning mechanism” has been interpreted under 35 U.S.C. 112(f) to correspond to the structure disclosed in paragraphs [0035], [0036], [0056], and [0063] of the applicant’s specification of a physical translation and/or rotation mechanism for moving an ultrasound transducer array. This corresponds to the motor assist probe rotation/angulation and translation of the probe as in [0035] of Rincker). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the blood flow photoacoustic measurement method of Igarashi, Yoshikawa, Norman, and Zwirn to incorporate the motorized movement assembly for scanning the ultrasound probe along an axis to attain an optimal acoustic window as taught by Rincker because obtaining an optimal position of the transducer array along the imaging axis provides for the highest quality signal acquisition (Rincker, [0035]). By increasing the quality of the signal, imaging can be improved leading to enhanced diagnostics. Regarding claim 18, the combined references of Igarashi, Yoshikawa, Norman, Zwirn, and Rincker teach all of the limitations of claim 15. 
Primary reference Igarashi further fails to teach: wherein the scanning mechanism comprises a stage coupled to the ultrasonic transducer array, the stage configured to translate and/or rotate the ultrasonic transducer array However, the analogous art of Rincker of a head frame for alignment of an ultrasound probe at particular regions of interest of a patient (abstract) teaches: wherein the scanning mechanism comprises a stage coupled to the ultrasonic transducer array, the stage configured to translate and/or rotate the ultrasonic transducer array ([0035], motorized assembly 408 forms the stage coupled to the transducer array configured for both translation and rotation; [0037], teaches to use with photoacoustic systems as taught by Igarashi in the combined invention) It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the blood flow photoacoustic measurement method of Igarashi, Yoshikawa, Norman, Zwirn, and Rincker to incorporate the motorized movement assembly for scanning the ultrasound probe along an axis to attain an optimal acoustic window as taught by Rincker because obtaining an optimal position of the transducer array along the imaging axis provides for the highest quality signal acquisition (Rincker, [0035]). By increasing the quality of the signal, imaging can be improved leading to enhanced diagnostics. Claims 16-17 are rejected under 35 U.S.C. 103 as being unpatentable over Igarashi, in view of Yoshikawa, in view of Norman, in further view of Zwirn as applied to claim 14 above, and further in view of Irisawa. Regarding claim 16, the combined references of Igarashi, Yoshikawa, Norman, and Zwirn teach all of the limitations of claim 14. 
Primary reference Igarashi further fails to teach: wherein the computing system is further configured to generate one or more blood flow maps from the blood flow measurements estimated However, the analogous art of Irisawa of an acoustic wave imaging system for displaying tissue regions of interest (abstract) teaches: wherein the computing system is further configured to generate one or more blood flow maps from the blood flow measurements estimated ([0093], color flow map of blood flow; [0161], color flow map of blood flow in the region of interest). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the blood flow photoacoustic measurement method of Igarashi, Yoshikawa, Norman, and Zwirn to incorporate the blood flow mapping as taught by Irisawa because it provides the user with visual depictions of the blood measurements such as velocity, which enables intuitive understanding of tissue diagnostics (Irisawa, [0093]; [0161]). This leads to improved clinical outcomes. Regarding claim 17, the combined references of Igarashi, Yoshikawa, Norman, Zwirn, and Irisawa teach all of the limitations of claim 16. Primary reference Igarashi further fails to teach: wherein the one or more blood flow maps comprise one or more of a vector map, a color Doppler map, or a power Doppler map However, the analogous art of Irisawa of an acoustic wave imaging system for displaying tissue regions of interest (abstract) teaches: wherein the one or more blood flow maps comprise one or more of a vector map, a color Doppler map, or a power Doppler map ([0093], color flow map of blood flow is a color Doppler map; [0161], color flow map of blood flow in the region of interest is a color Doppler map). 
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the blood flow photoacoustic measurement method of Igarashi, Yoshikawa, Norman, Zwirn, and Irisawa to incorporate the color Doppler blood flow mapping as taught by Irisawa because it provides the user with visual depictions of the blood measurements such as velocity, which enables intuitive understanding of tissue diagnostics (Irisawa, [0093]; [0161]). This leads to improved clinical outcomes. Claims 20-21 are rejected under 35 U.S.C. 103 as being unpatentable over Igarashi, in view of Yoshikawa, in further view of Norman as applied to claim 1 above, and further in view of Zwirn. Regarding claim 20, the combined references of Igarashi, Yoshikawa, and Norman teach all of the limitations of claim 1. Primary reference Igarashi further fails to teach: further comprising forming a three-dimensional flow measurement structure from the blood flow measurements estimated at all pixels of the filtered photoacoustic images However, the analogous art of Zwirn of an ultrasound assembly for processing ultrasound signals of a patient (abstract) teaches: further comprising forming a three-dimensional flow measurement structure from the blood flow measurements estimated at all pixels of the images ([0363], “Color flow Doppler imaging, which uses PW, superimposes a color representation of the dominant radial blood flow velocity (for each pixel) over a 2D or 3D ultrasonic image, which may or may not be time dependent”; [0370]). 
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the blood flow photoacoustic measurement method of Igarashi, Yoshikawa, and Norman to incorporate the 3D flow measurement at all pixels in color Doppler imaging as taught by Zwirn because it provides a pixel-based reconstruction of complex fluid flow patterns within vessels of interest, leading to a more complete visualization provided to a user. This leads to enhanced understanding of disease states by a clinician, and improved clinical outcomes for a patient (Zwirn, [0363]; [0370]). Regarding claim 21, the combined references of Igarashi, Yoshikawa, Norman, and Zwirn teach all of the limitations of claim 20. Primary reference Igarashi further fails to teach: wherein the three-dimensional flow measurement structure comprises two spatial dimensions and one time dimension However, the analogous art of Zwirn of an ultrasound assembly for processing ultrasound signals of a patient (abstract) teaches: wherein the three-dimensional flow measurement structure comprises two spatial dimensions and one time dimension ([0363], “Color flow Doppler imaging, which uses PW, superimposes a color representation of the dominant radial blood flow velocity (for each pixel) over a 2D or 3D ultrasonic image, which may or may not be time dependent”. Time-dependent measurements form a 3D flow measurement structure with two spatial dimensions and one time dimension; [0370]). 
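In array terms, the claimed three-dimensional flow measurement structure of claims 20-21 (two spatial dimensions plus one time dimension) is simply a time-stack of per-frame 2D flow maps. A minimal sketch, with the (T, H, W) layout assumed for illustration:

```python
import numpy as np

def flow_time_structure(velocity_maps: list) -> np.ndarray:
    """Stack per-frame 2D flow maps (each H x W) into one (T, H, W)
    array: two spatial dimensions plus one time dimension."""
    return np.stack(velocity_maps, axis=0)
```

Indexing `vol[:, y, x]` on the result then yields the velocity time series at pixel (y, x), i.e., the flow change over time at that location.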
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the blood flow photoacoustic measurement method of Igarashi, Yoshikawa, Norman, and Zwirn to incorporate the 3D flow measurement at all pixels in color Doppler imaging over time as taught by Zwirn because it provides a pixel-based reconstruction of complex fluid flow patterns within vessels of interest, leading to a more complete visualization provided to a user. This leads to enhanced understanding of disease states by a clinician, and improved clinical outcomes for a patient (Zwirn, [0363]; [0370]).

Response to Arguments

Applicant’s arguments with respect to claims 1-11 and 13-23 have been considered but are moot because the new ground of rejection does not rely on any reference applied in the prior rejection of record for any teaching or matter specifically challenged in the argument. Responses to arguments relevant to the current rejections are detailed below. Regarding the applicant’s arguments on pages 6-7 of the remarks, the applicant argues that the Yoshikawa reference fails to generate an image of blood vessels from the spatial filter. While in the current rejections, the additional prior art reference of Norman is utilized to teach to the generation of a corresponding sequence of filtered images, Yoshikawa also teaches to generation of an image of blood vessels as disclosed in figure 2, images 210, particularly image 204b which depicts blood vessels. The extraction of blood flow vector distribution ([0045]) and filtering of the input ultrasound image is used to generate this subsequent image and therefore generates an image of blood vessels. The combined prior art references as a whole teach to the claimed elements as provided above in the current rejections. For these reasons, the applicant’s arguments have been considered but are not persuasive. 
Conclusion

Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a). A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action. Any inquiry concerning this communication or earlier communications from the examiner should be directed to SEAN A FRITH whose telephone number is (571)272-1292. The examiner can normally be reached M-Th 8:00-5:30, second Fri 8:00-4:30. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Keith Raymond, can be reached at 571-270-1790. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300. Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. 
To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000. /SEAN A FRITH/Primary Examiner, Art Unit 3798

Prosecution Timeline

Jun 16, 2023
Application Filed
Sep 25, 2024
Non-Final Rejection — §103
Jan 27, 2025
Response Filed
Feb 15, 2025
Final Rejection — §103
Apr 21, 2025
Request for Continued Examination
Apr 23, 2025
Response after Non-Final Action
Apr 30, 2025
Non-Final Rejection — §103
Sep 30, 2025
Response Filed
Jan 04, 2026
Final Rejection — §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12594042
DEVICE FOR MOVING A MEDICAL OBJECT AND METHOD FOR PROVIDING A CORRECTION PRESET
2y 5m to grant Granted Apr 07, 2026
Patent 12594128
LOCKING AND DRIVE MECHANISMS FOR POSITIONING AND STABILIZATION OF CATHETERS AND ENDOSCOPIC TOOLS
2y 5m to grant Granted Apr 07, 2026
Patent 12594119
SHOCK WAVE BALLOON CATHETER WITH MULTIPLE SHOCK WAVE SOURCES
2y 5m to grant Granted Apr 07, 2026
Patent 12588964
MEDICAL INSTRUMENT GUIDANCE WITH ROBOTIC SYSTEMS
2y 5m to grant Granted Mar 31, 2026
Patent 12569224
Intravascular Imaging Devices
2y 5m to grant Granted Mar 10, 2026
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

5-6
Expected OA Rounds
60%
Grant Probability
89%
With Interview (+28.7%)
3y 7m
Median Time to Grant
High
PTA Risk
Based on 276 resolved cases by this examiner. Grant probability derived from career allow rate.
