Prosecution Insights
Last updated: April 19, 2026
Application No. 18/690,239

REAL-TIME SUPER-RESOLUTION ULTRASOUND MICROVESSEL IMAGING AND VELOCIMETRY

Final Rejection (§102, §103)

Filed: Mar 07, 2024
Examiner: TALTY, MARIA CHRISTINA
Art Unit: 3797
Tech Center: 3700 (Mechanical Engineering & Manufacturing)
Assignee: The Board of Trustees of the University of Illinois
OA Round: 2 (Final)

Grant probability: 62% (Moderate)
OA rounds: 3-4
Time to grant: 3y 7m
With interview: 95%

Examiner Intelligence

Career allow rate: 62% (grants 62% of resolved cases; 75 granted / 121 resolved; -8.0% vs TC avg)
Interview lift: strong, +32.9% (allowance rate among resolved cases with an interview vs. without)
Typical timeline: 3y 7m average prosecution; 44 applications currently pending
Career history: 165 total applications across all art units

Statute-Specific Performance

§101: 4.2% (-35.8% vs TC avg)
§102: 17.3% (-22.7% vs TC avg)
§103: 46.5% (+6.5% vs TC avg)
§112: 27.7% (-12.3% vs TC avg)

Tech Center averages are estimates. Based on career data from 121 resolved cases.
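The headline figures above can be cross-checked from the raw counts. A minimal sketch (the counts and the -8.0% delta are taken from the page; the implied Tech Center average is derived from them and is an estimate, not an official USPTO statistic):

```python
# Cross-check the headline examiner statistics from the raw counts above.
granted = 75    # applications granted by this examiner (from the page)
resolved = 121  # total resolved applications (from the page)

allow_rate = 100 * granted / resolved   # career allowance rate, in percent
delta_vs_tc = -8.0                      # "-8.0% vs TC avg", as reported
tc_avg = allow_rate - delta_vs_tc       # implied Tech Center 3700 average

print(f"Career allow rate: {allow_rate:.0f}%")   # 62%
print(f"Implied TC average: {tc_avg:.1f}%")      # 70.0%
```

The +32.9% interview lift is reported as the difference in allowance rate between resolved cases with and without an interview; the underlying cohort counts are not shown above, so it is not recomputed here.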

Office Action

Rejections under §102 and §103
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Response to Arguments

Applicant’s argument on Page 11 regarding the objections to the drawings has been fully considered. The objections to the drawings are withdrawn in view of the amendments.

Applicant’s argument on Page 11 regarding the objections to the specification has been fully considered. While a space has been inserted between “23H” and “demonstrate” in [0099], the trade names or marks used in commerce stated in the Office Action have not been addressed. Therefore, the objection to the specification is maintained.

Applicant’s argument on Page 12 regarding the objections to Claims 1, 12, and 22 has been fully considered. The objections to Claims 1, 12, and 22 are withdrawn in view of the amendments.

Applicant’s argument on Pages 12-13 regarding the rejection of Claims 1-2, 4-9, 11-22, and 24-27 under 35 U.S.C. 112(b) has been fully considered. The rejection of Claims 1-2, 4-9, 11-22, and 24-27 under 35 U.S.C. 112(b) is withdrawn in view of the argument.

Applicant’s argument on Pages 13-14 regarding the rejection of Claims 12-20 under 35 U.S.C. 112(d) has been fully considered. The rejection of Claims 12-20 under 35 U.S.C. 112(d) is withdrawn in view of the amendments.

Applicant’s argument on Pages 14-15 regarding the rejection of Claim 1 under 35 U.S.C. 102(a)(1) as being anticipated by Lok has been fully considered but is not persuasive in view of the new grounds of rejection set forth below.

Regarding the rejection of all remaining corresponding claims, applicant’s argument submitted on Page 15 relies on the supposed deficiencies with respect to the rejection of parent Claim 1. Applicant’s argument is moot for the same reasons detailed above.
Specification

The use of the terms Microfil™ in [0049], Bluetooth™ and Wi-Fi™ in [00107], [00109], [00112], and [00116], and Blu-ray™ in [00118], which are trade names or marks used in commerce, has been noted in this application. The terms should be accompanied by the generic terminology; furthermore, the terms should be capitalized wherever they appear or, where appropriate, include a proper symbol indicating use in commerce, such as ™, SM, or ®, following the terms. Although the use of trade names and marks used in commerce (i.e., trademarks, service marks, certification marks, and collective marks) is permissible in patent applications, the proprietary nature of the marks should be respected and every effort made to prevent their use in any manner which might adversely affect their validity as commercial marks.

Claim Rejections - 35 USC § 102

The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:

A person shall be entitled to a patent unless –

(a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.

(a)(2) the claimed invention was described in a patent issued under section 151, or in an application for patent published or deemed published under section 122(b), in which the patent or application, as the case may be, names another inventor and was effectively filed before the effective filing date of the claimed invention.

Claims 1-2, 4-9, 11-22, 24-26, and 32 are rejected under 35 U.S.C. 102(a)(2) as being anticipated by Trzasko et al. (WO 2020252463). The applied reference has a common inventor with the instant application. Based upon the earlier effectively filed date of the reference, it constitutes prior art under 35 U.S.C. 102(a)(2).

This rejection under 35 U.S.C. 102(a)(2) might be overcome by: (1) a showing under 37 CFR 1.130(a) that the subject matter disclosed in the reference was obtained directly or indirectly from the inventor or a joint inventor of this application and is thus not prior art in accordance with 35 U.S.C. 102(b)(2)(A); (2) a showing under 37 CFR 1.130(b) of a prior public disclosure under 35 U.S.C. 102(b)(2)(B) if the same invention is not being claimed; or (3) a statement pursuant to 35 U.S.C. 102(b)(2)(C) establishing that, not later than the effective filing date of the claimed invention, the subject matter disclosed in the reference and the claimed invention were either owned by the same person or subject to an obligation of assignment to the same person, or subject to a joint research agreement.

Regarding Claim 1, Trzasko teaches a method for super-resolution microvessel imaging using an ultrasound system (Claim 1 “A method for generating an image of microvasculature in a subject from ultrasound data”), the method comprising: a) acquiring ultrasound signal data from a subject using the ultrasound system (Claim 1 “accessing ultrasound data acquired from a subject”); b) accessing, with a computer system, a neural network that has been trained on training data ([0077] “One or more neural networks (or other suitable machine learning algorithms) are trained on the training data” and [0081] “A trained neural network (or other suitable machine learning algorithm) is then accessed with the computer system […]. Accessing the trained neural network may include accessing network parameters (e.g., weights, biases, or both) that have been optimized or otherwise estimated by training the neural network on training data”), to estimate at least one of super-resolution microvessel image data ([0083] “The properties or characteristics of the microbubbles are then input to the one or more trained neural networks, generating output as separated subsets of data, as indicated at step 1106. The output data generated by inputting the microbubble signal data to the trained neural network(s) can then be displayed to a user, stored for later use or further processing, or both, as indicated at step 1108, and described above in more detail. For example, the separated subsets of data can be further processed to generate super-resolution images”), or super-resolution ultrasound velocimetry data from ultrasound signals ([0023] “the microbubble signals can be separated based on the differences in spatiotemporal hemodynamic of microbubbles such as movement speed, movement direction, and decorrelation,” [0031] “Microbubble signal data are generated by extracting microbubble signals from the ultrasound data, as indicated at step 204,” and [0051] “microvessel hemodynamics measurements (e.g., blood flow speed and blood flow volume) can be estimated from the combined microvessel image or images.”); c) inputting the ultrasound signal data to the neural network via the computer system, generating super-resolution ultrasound velocimetry data ([0072] “the microbubble signal data can be input into an appropriately trained machine learning algorithm, generating output as subsets of separated microbubble signal data” and [0083] “properties or characteristics of the microbubbles are then input to the one or more trained neural networks”); and d) providing the super-resolution ultrasound velocimetry data to a user via the computer system ([0083] “The output data generated by inputting the microbubble signal data to the trained neural network(s) can then be displayed to a user”).

Regarding Claim 2, Trzasko teaches the method of Claim 1, as discussed above. Furthermore, Trzasko teaches wherein the training data and the input data comprise contrast-enhanced spatiotemporal ultrasound signal data ([0083] “properties or characteristics of the microbubbles are then input to the one or more trained neural networks” and [0084] “methods described in the present disclosure have been described with respect to signal separation for super-resolution microvessel imaging under the context of microbubble imaging they can also be applied to ultrasound imaging with any other type of contrast agent”).
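For orientation only, the four claimed steps (a)-(d) can be sketched as a pipeline. This is a hypothetical illustration, not the applicant's or Trzasko's actual implementation: the random "signals" and the single linear layer standing in for a trained network are stand-ins.

```python
import numpy as np

rng = np.random.default_rng(0)

def acquire_ultrasound_data(n_pixels=100, n_frames=64):
    """Step (a): stand-in for spatiotemporal ultrasound signal data
    (per-pixel temporal samples); a real system would beamform RF data."""
    return rng.standard_normal((n_pixels, n_frames))

def access_trained_network():
    """Step (b): stand-in for loading trained network parameters
    (weights/biases); here a single linear map from 64 temporal samples
    to a (vx, vz) velocity estimate per pixel."""
    return rng.standard_normal((64, 2))

def estimate_velocimetry(signals, weights):
    """Step (c): input the signal data to the 'network', producing
    super-resolution velocimetry estimates (one 2-D vector per pixel)."""
    return signals @ weights

signals = acquire_ultrasound_data()
weights = access_trained_network()
velocity = estimate_velocimetry(signals, weights)

# Step (d): provide the result to a user (here, just report its shape).
print(velocity.shape)  # (100, 2)
```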
Regarding Claim 4, Trzasko teaches the method of Claim 1, as discussed above. Furthermore, Trzasko teaches wherein the neural network comprises at least one of a pre-trained neural network ([0081] “accessing network parameters (e.g., weights, biases, or both) that have been optimized or otherwise estimated by training the neural network on training data.”), and an online trained neural network ([0097] “communication network 1354 can include a Wi-Fi network (which can include one or more wireless routers, one or more switches, etc.), a peer-to-peer network (e.g., a Bluetooth network), a cellular network (e.g., a 3G network, a 4G network, etc., complying with any suitable standard, such as CDMA, GSM, LTE, LTE Advanced, WiMAX, etc.), a wired network, and so on. In some embodiments, communication network 1354 can be a local area network, a wide area network, a public network (e.g., the Internet), a private or semi-private network (e.g., a corporate or university intranet), any other suitable type of network, or any suitable combination of networks.”).

Regarding Claim 5, Trzasko teaches all limitations of Claim 4, as discussed above. Furthermore, Trzasko teaches wherein the neural network is a pre-trained neural network ([0081] “Accessing the trained neural network may include accessing network parameters (e.g., weights, biases, or both) that have been optimized or otherwise estimated by training the neural network on training data.”), and the training data comprise at least one of synthetic training data, in vitro training data, ex vivo training data, or in vivo training data ([0076] “the training microbubble data can be obtained from any type of flow phantom, flow channel, water tank, or in vivo vessel using certain ultrasound systems with injections of microbubbles.”).

Regarding Claim 6, Trzasko teaches all limitations of Claim 5, as discussed above. Furthermore, Trzasko teaches wherein the training data comprise synthetic training data generated by computer simulation of spatiotemporal microbubble signals observed using ultrasound (Fig. 1, [0029] “The ultrasound data can be acquired using any suitable detection sequence. [sic] including […] synthetic aperture imaging,” [0040] “the microbubble signal data can be separated into subsets by inputting the microbubble signal data to a suitably trained machine learning algorithm, or other artificial intelligence-based algorithms, generating output as subsets of separated microbubble signal data,” [0055] “any processing algorithm that can distinguish microbubble signals with different moving speeds/directions and separate them into different subsets can be applied here,” and [00108] “any suitable computer readable media can be used for storing instructions for performing the functions and/or processes described herein.” A known physics-based simulation software (e.g., COMSOL Multiphysics) would have been obvious to one of ordinary skill in the art to provide the synthetic training data.).

Regarding Claim 7, Trzasko teaches all limitations of Claim 6, as discussed above. Furthermore, Trzasko teaches wherein the computer simulation comprises at least one of a direct convolution between a microbubble location and one of a point-spread-function (PSF) ([0044] “the microbubbles can be localized based on a two-dimensional normalized cross-correlation-based method that focuses on detecting structures with good correlation to the point-spread-function (“PSF”) of the ultrasound system used to acquire the microbubble signal data”), or an impulse response (IR) of the ultrasound system.

Regarding Claim 8, Trzasko teaches all limitations of Claim 7, as discussed above. Furthermore, Trzasko teaches wherein the computer simulation is implemented with the computer system using a Field II simulation, a K-wave simulation, or a generative neural network ([0074] “the generation of training data is an important consideration when constructing a machine learning algorithm for a specific task. As one example, computational simulations, phantom experiments, or both, can be used to generate microbubble data as a training set, which can then be used to train a suitable AI algorithm, such as a machine learning algorithm” and [0075] “When the training data include computational simulations, the ultrasound signals of microbubbles with different characteristics such as concentrations, hemodynamics (e.g., moving velocity, direction), and acoustical properties (e.g., intensities, blinking behaviors, and frequency response to the sonifying ultrasound waves) can be simulated. The point spread function (“PSF”) of the ultrasound image in the computational simulation can be varied according to practical imaging situations. The PSF can be experimentally measured or can be approximated using a Gaussian model or other suitable models.”).

Regarding Claim 9, Trzasko teaches all limitations of Claim 5, as discussed above. Furthermore, Trzasko teaches wherein the training data comprise in vitro training spatiotemporal data acquired from at least one of a tissue-mimicking phantom and a point target ([0076] “the training microbubble data can be obtained from any type of flow phantom, flow channel, water tank, or in vivo vessel using certain ultrasound systems with injections of microbubbles.”).

Regarding Claim 11, Trzasko teaches all limitations of Claim 1, as discussed above.
Furthermore, Trzasko teaches wherein training data comprise in vivo spatiotemporal training data acquired by at least one of ultrasound imaging of in vivo tissue or simultaneous optical and ultrasound imaging of in vivo tissue ([0075]-[0076] “These simulated microbubble data can be labeled with different properties or characteristics and allocated into different subsets, and serve as the training data for machine learning algorithms. […] the training microbubble data can be obtained from any type of flow phantom, flow channel, water tank, or in vivo vessel using certain ultrasound systems with injections of microbubbles.”).

Regarding Claim 12, Trzasko teaches all limitations of Claim 1, as discussed above. Furthermore, Trzasko teaches wherein accessing the neural network with the computer system comprises training the neural network with the computer system by: a) accessing the training data with the computer system ([0081] “A trained neural network (or other suitable machine learning algorithm) is then accessed with the computer system, as indicated at step 1104. Accessing the trained neural network may include accessing network parameters (e.g., weights, biases, or both) that have been optimized or otherwise estimated by training the neural network on training data.”); b) training the neural network on the training data ([0081] “Accessing the trained neural network may include accessing network parameters (e.g., weights, biases, or both) that have been optimized or otherwise estimated by training the neural network on training data.”); c) storing the trained neural network ([0051] “After processing, the microvessel images (e.g., the combined microvessel image(s), the subset microvessel images, or both) can be displayed to a user or stored for later use, such as for later analysis” and [0081] “Accessing the trained neural network may include accessing network parameters (e.g., weights, biases, or both) that have been optimized or otherwise estimated by training the neural network on training data.”); and d) accessing the trained neural network with the computer system ([0081] “A trained neural network (or other suitable machine learning algorithm) is then accessed with the computer system”).

Regarding Claim 13, Trzasko teaches all limitations of Claim 12, as discussed above.
Furthermore, Trzasko teaches wherein accessing the training data comprises acquiring spatiotemporal imaging data from an in vivo vascular bed ([0021] “Ultrasound data are acquired from a region-of-interest in a subject who has been administered a microbubble contrast agent. The ultrasound data are acquired while the microbubbles are moving through, or otherwise present in, the region-of-interest. The region-of-interest may include, for instance, microvessels or other microvasculature in the subject,” [0075] “These simulated microbubble data can be labeled with different properties or characteristics and allocated into different subsets, and serve as the training data for machine learning algorithms,” and [0076] “the training microbubble data can be obtained from any type of flow phantom, flow channel, water tank, or in vivo vessel using certain ultrasound systems with injections of microbubbles.”).

Regarding Claim 14, Trzasko teaches all limitations of Claim 13, as discussed above. Furthermore, Trzasko teaches wherein the spatiotemporal data comprise spatiotemporal imaging data (Fig. 1 and [0026] “The echo signals of microbubbles with different properties such as spatiotemporal hemodynamic and acoustic properties are indicated with different marks and colors in FIG. 1. Unit 102 indicates the conventional vessel image formed with overlapping echo signals of microbubbles, where the spatial resolution is physically limited by the wavelength of ultrasound.”).

Regarding Claim 15, Trzasko teaches all limitations of Claim 14, as discussed above. Furthermore, Trzasko teaches wherein the spatiotemporal acoustic imaging data are indicative of at least one of anatomical information, dynamic information, or contrast-enhanced information ([0023] “the microbubble signals can be separated based on the differences in spatiotemporal hemodynamic of microbubbles such as movement speed, movement direction, and decorrelation. In another example, the microbubble signals can be separated by differences in acoustical properties of individual microbubbles, such as linear or nonlinear frequency responses to the sonifying ultrasound wave,” [0026] “The echo signals of microbubbles with different properties such as spatiotemporal hemodynamic and acoustic properties are indicated with different marks and colors in FIG. 1. Unit 102 indicates the conventional vessel image formed with overlapping echo signals of microbubbles, where the spatial resolution is physically limited by the wavelength of ultrasound,” and [0027] “using an ultrasound system to produce super-resolution images of microvessels in a subject who has been administered a microbubble contrast agent”).

Regarding Claim 16, Trzasko teaches all limitations of Claim 14, as discussed above. Furthermore, Trzasko teaches wherein the spatiotemporal imaging data further include spatiotemporal optical imaging data ([0026] “Unit 102 indicates the conventional vessel image formed with overlapping echo signals of microbubbles, where the spatial resolution is physically limited by the wavelength of ultrasound” and [0045] “Any suitable image registration algorithm can be applied, including but not limited to global or local cross-correlation methods, global or local phase-correlation based methods, global or local optical flow methods, and so on.”).

Regarding Claim 17, Trzasko teaches all limitations of Claim 16, as discussed above. Furthermore, Trzasko teaches wherein the spatiotemporal optical imaging data are indicative of at least one of anatomical information, dynamic information, or contrast-enhanced information ([0023] “the microbubble signals can be separated based on the differences in spatiotemporal hemodynamic of microbubbles such as movement speed, movement direction, and decorrelation. In another example, the microbubble signals can be separated by differences in acoustical properties of individual microbubbles, such as linear or nonlinear frequency responses to the sonifying ultrasound wave,” [0026] “The echo signals of microbubbles with different properties such as spatiotemporal hemodynamic and acoustic properties are indicated with different marks and colors in FIG. 1. Unit 102 indicates the conventional vessel image formed with overlapping echo signals of microbubbles, where the spatial resolution is physically limited by the wavelength of ultrasound,” and [0027] “using an ultrasound system to produce super-resolution images of microvessels in a subject who has been administered a microbubble contrast agent”).

Regarding Claim 18, Trzasko teaches all limitations of Claim 16, as discussed above. Furthermore, Trzasko teaches wherein the spatiotemporal acoustic imaging data and the spatiotemporal optical imaging data are spatially coregistered ([0045] “Any suitable image registration algorithm can be applied, including but not limited to global or local cross-correlation methods, global or local phase-correlation based methods, global or local optical flow methods, and so on.”).

Regarding Claim 19, Trzasko teaches all limitations of Claim 16, as discussed above. Furthermore, Trzasko teaches wherein the spatiotemporal acoustic imaging data and the spatiotemporal optical imaging data are synchronously acquired from the in vivo vascular bed ([0076] “the training microbubble data can be obtained from any type of flow phantom, flow channel, water tank, or in vivo vessel using certain ultrasound systems with injections of microbubbles. Ultrasound data acquisition can be performed under various imaging and experimental settings, such as different microbubble concentrations, flowing velocities and directions, different acoustic transmission and SNR situations, and so on. Again, the training data can be labeled as different microbubble characteristics or different subsets, and can be used to train a suitable AI algorithm. Then, the trained algorithms can be applied to perform microbubble separation on the target microbubble data to separate them into subsets with sparser microbubble concentrations.”).

Regarding Claim 20, Trzasko teaches all limitations of Claim 16, as discussed above. Furthermore, Trzasko teaches synchronizing the spatiotemporal acoustic imaging data acquisition and the spatiotemporal optical imaging data acquisition in space and time to capture matched data ([0045] “Image registration can be performed based on motion estimations from the original acquired ultrasound data, or on the microbubble signal data. Any suitable image registration algorithm can be applied, including but not limited to global or local cross-correlation methods, global or local phase-correlation based methods, global or local optical flow methods, and so on.” Synchronization is understood to be present in phase-correlation based methods.).

Regarding Claim 21, Trzasko teaches all limitations of Claim 20, as discussed above. Furthermore, Trzasko teaches wherein the spatiotemporal training data are based on the matched data ([0045] “Image registration can be performed based on motion estimations from the original acquired ultrasound data, or on the microbubble signal data. Any suitable image registration algorithm can be applied, including but not limited to global or local cross-correlation methods, global or local phase-correlation based methods, global or local optical flow methods, and so on.”).

Regarding Claim 22, Trzasko teaches all limitations of Claim 12, as discussed above.
Furthermore, Trzasko teaches wherein training the neural network comprises: a) administering scout microbubbles into the subject to collect microbubble signal data ([0076] “the training microbubble data can be obtained from any type of flow phantom, flow channel, water tank, or in vivo vessel using certain ultrasound systems with injections of microbubbles. Ultrasound data acquisition can be performed under various imaging and experimental settings, such as different microbubble concentrations, flowing velocities and directions, different acoustic transmission and SNR situations, and so on. Again, the training data can be labeled as different microbubble characteristics or different subsets, and can be used to train a suitable AI algorithm. Then, the trained algorithms can be applied to perform microbubble separation on the target microbubble data to separate them into subsets with sparser microbubble concentrations.”); b) identifying a scouting microbubble signal in the microbubble signal data ([0042] “identifying locations in each time frame of the microbubble signal data at which microbubbles are located” and [0076] “the training data can be labeled as different microbubble characteristics or different subsets, and can be used to train a suitable AI algorithm.”); and c) using the scouting microbubble signal to tune the trained neural network ([0076] “the training data can be labeled as different microbubble characteristics or different subsets, and can be used to train a suitable AI algorithm. Then, the trained algorithms can be applied to perform microbubble separation on the target microbubble data to separate them into subsets with sparser microbubble concentrations.”).

Regarding Claim 26, Trzasko teaches all limitations of Claim 1, as discussed above. Furthermore, Trzasko teaches wherein the super-resolution ultrasound velocimetry data comprises at least one flow velocity map indicating flow velocity of microbubbles (Fig. 4, [0048] “the microvessel images can include accumulated microbubble location maps throughout all of the acquisition frames. As another example, the microvessel images can include blood flow speed maps with blood speed values assigned to all the locations at which microbubbles were detected” and [0061] “frequency spectrums of microbubble ultrasound signals from three different size vessels with different flow speeds are plotted in FIG. 4, which roughly reveals the association between Doppler frequency shift and flow velocity: the higher the flowing speed, the larger the Doppler frequency shift is.”).

Regarding Claim 32, Trzasko teaches a method for super-resolution microvessel imaging using an ultrasound system (Claim 1 “A method for generating an image of microvasculature in a subject from ultrasound data”), the method comprising: a) acquiring ultrasound signal data from a subject using the ultrasound system (Claim 1 “accessing ultrasound data acquired from a subject”); b) accessing, with a computer system, a neural network that has been trained on training data ([0077] “One or more neural networks (or other suitable machine learning algorithms) are trained on the training data” and [0081] “A trained neural network (or other suitable machine learning algorithm) is then accessed with the computer system […]. Accessing the trained neural network may include accessing network parameters (e.g., weights, biases, or both) that have been optimized or otherwise estimated by training the neural network on training data”), to estimate both super-resolution microvessel image data ([0083] “The properties or characteristics of the microbubbles are then input to the one or more trained neural networks, generating output as separated subsets of data, as indicated at step 1106. The output data generated by inputting the microbubble signal data to the trained neural network(s) can then be displayed to a user, stored for later use or further processing, or both, as indicated at step 1108, and described above in more detail. For example, the separated subsets of data can be further processed to generate super-resolution images”), and super-resolution ultrasound velocimetry data from ultrasound signals ([0023] “the microbubble signals can be separated based on the differences in spatiotemporal hemodynamic of microbubbles such as movement speed, movement direction, and decorrelation,” [0031] “Microbubble signal data are generated by extracting microbubble signals from the ultrasound data, as indicated at step 204,” and [0051] “microvessel hemodynamics measurements (e.g., blood flow speed and blood flow volume) can be estimated from the combined microvessel image or images.”); c) inputting the ultrasound signal data to the neural network via the computer system, generating super-resolution microvessel image data and super-resolution ultrasound velocimetry data ([0072] “the microbubble signal data can be input into an appropriately trained machine learning algorithm, generating output as subsets of separated microbubble signal data” and [0083] “properties or characteristics of the microbubbles are then input to the one or more trained neural networks”); and d) providing the super-resolution microvessel image data and the super-resolution ultrasound velocimetry data to a user via the computer system ([0083] “The output data generated by inputting the microbubble signal data to the trained neural network(s) can then be displayed to a user”).

Regarding Claim 24, Trzasko teaches all limitations of Claim 32, as discussed above.
Furthermore, Trzasko teaches wherein the super-resolution ultrasound microvessel image data comprises locations of microbubbles ([0042] “identifying locations in each time frame of the microbubble signal data at which microbubbles are located. […] the center location of each isolated microbubble signal is located, such that the movement of the microbubble can be tracked through time. The center location of the localized microbubbles can also be used to construct super-resolution microvessel images and to track the movement of the microbubbles”), and providing the super-resolution microvessel image data via the computer system includes pairing, tracking, and accumulating a microbubble signal to generate super-resolution vessel maps and super-resolution flow maps ([0048] “the microvessel images can include accumulated microbubble location maps throughout all of the acquisition frames. As another example, the microvessel images can include blood flow speed maps with blood speed values assigned to all the locations at which microbubbles were detected.”).

Regarding Claim 25, Trzasko teaches all limitations of Claim 32, as discussed above. Furthermore, Trzasko teaches wherein providing the super-resolution microvessel image data via the computer system comprises performing fast localization and tracking to construct a super-resolution image ([0042] “identifying locations in each time frame of the microbubble signal data at which microbubbles are located. […] the center location of each isolated microbubble signal is located, such that the movement of the microbubble can be tracked through time. The center location of the localized microbubbles can also be used to construct super-resolution microvessel images and to track the movement of the microbubbles”).

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103, which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claim 27 is rejected under 35 U.S.C. 103 as being unpatentable over Trzasko et al. (WO 2020252463) in view of Shandas et al. (US 20140147013).

Regarding Claim 27, Trzasko teaches all limitations of Claim 26, as discussed above. However, Trzasko does not explicitly teach wherein the at least one flow velocity map is generated without localization of the microbubbles. In the analogous field of endeavor of direct echo particle image velocimetry (Echo PIV) flow vector mapping on ultrasound DICOM images, Shandas teaches a method (Abstract “Echo PIV analysis process”) wherein the at least one flow velocity map is generated without localization of the microbubbles (Abstract “vector map that are averaged to obtain a mean velocity vector field of the sequential image pairs” and [0268] “peripheral vascular imaging, since the location of vascular is relatively shallow and the bubble image in a small window may provide enough information for successful Echo PIV measurements. However, for cardiac imaging and deep vascular imaging, the field of view (FOV) needed is relatively large, and alternative methods must be proposed to increase the frame rate.”).
It would have been obvious to one of ordinary skill in the art at the time of applicant’s filing to modify the teachings of Trzasko with the at least one flow velocity map of Shandas, because the modification allows for faster and less computationally intensive vascular imaging: prior to generating the flow map, the microbubbles do not need to be identified and localized, only their scattered signal is detected, thus decreasing the amount of time required for the procedure.

Conclusion

THIS ACTION IS MADE FINAL. Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).

A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to MARIA CHRISTINA TALTY, whose telephone number is (571) 272-8022. The examiner can normally be reached M-Th 8:30-5:30 EST.

Examiner interviews are available via telephone, in person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Mike Carey, can be reached at (571) 270-7235.
The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/MARIA CHRISTINA TALTY/
Examiner, Art Unit 3797

/MICHAEL J CAREY/
Supervisory Patent Examiner, Art Unit 3795

Prosecution Timeline

Mar 07, 2024
Application Filed
Jul 07, 2025
Non-Final Rejection — §102, §103
Jan 16, 2026
Response Filed
Mar 06, 2026
Final Rejection — §102, §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12588889
ULTRASONIC ENDOSCOPE
2y 5m to grant Granted Mar 31, 2026
Patent 12582374
PERSONALIZED MOTION-GATED CORONARY CTA SCANNING SYSTEMS AND METHODS
2y 5m to grant Granted Mar 24, 2026
Patent 12569300
SYSTEMS AND METHODS RELATED TO ELONGATE DEVICES
2y 5m to grant Granted Mar 10, 2026
Patent 12569125
MEDICAL IMAGING SYSTEM, MEDICAL IMAGING PROCESSING METHOD, AND MEDICAL INFORMATION PROCESSING APPARATUS
2y 5m to grant Granted Mar 10, 2026
Patent 12551185
On-Screen Markers For Out-Of-Plane Needle Guidance
2y 5m to grant Granted Feb 17, 2026
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

3-4
Expected OA Rounds
62%
Grant Probability
95%
With Interview (+32.9%)
3y 7m
Median Time to Grant
Moderate
PTA Risk
Based on 121 resolved cases by this examiner. Grant probability derived from career allow rate.
