Prosecution Insights
Last updated: April 19, 2026
Application No. 18/462,422

SYSTEMS AND METHODS FOR ULTRASOUND IMAGE PROCESSING

Non-Final OA: §102, §103
Filed
Sep 07, 2023
Examiner
ZHAO, CHRISTINE NMN
Art Unit
2677
Tech Center
2600 — Communications
Assignee
Wuhan United Imaging Healthcare Co. Ltd.
OA Round
1 (Non-Final)
Grant Probability: 61% (Moderate)
Expected OA Rounds: 1-2
Time to Grant: 3y 0m
Grant Probability With Interview: 99%

Examiner Intelligence

Career Allow Rate: 61% (11 granted / 18 resolved; -0.9% vs TC avg)
Interview Lift: +58.3% (strong)
Avg Prosecution: 3y 0m (typical timeline)
Currently Pending: 19
Total Applications: 37 (career history, across all art units)

Statute-Specific Performance

§101: 11.5% (-28.5% vs TC avg)
§103: 58.2% (+18.2% vs TC avg)
§102: 8.2% (-31.8% vs TC avg)
§112: 16.4% (-23.6% vs TC avg)
Based on career data from 18 resolved cases; Tech Center averages are estimates.

Office Action

§102 §103
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Priority

The current application claims foreign priority from the Chinese application (CN202211088247.4). Receipt is acknowledged of certified copies of papers required by 37 CFR 1.55.

Information Disclosure Statement

The information disclosure statements (IDS) submitted on 09/15/2023 and 12/11/2025 are in compliance with the provisions of 37 CFR 1.97 and have been considered by the examiner.

Claim Objections

Claim 19 is objected to because of the following informalities: claim 19 should be dependent on claim 12 instead of claim 1. Appropriate correction is required.

Claim Rejections - 35 USC § 102

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action: A person shall be entitled to a patent unless – (a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.

Claim(s) 1-2, 4, 12-13 and 20 are rejected under 35 U.S.C. 102(a)(1) as being anticipated by Fu et al. (NPL "Ultrasound Needle Enhancement System Based on Radon Transform").
Regarding claim 1, Fu discloses a method for ultrasound image processing (page 3, first paragraph: “an ultrasound needle enhancement system based on Radon transform”), implemented on a computing device having at least one processor and at least one storage device, the method comprising: obtaining a first ultrasound image (page 4, first paragraph under 2.1 Acquisition of Normal Organization Frames: “normal tissue image, denoted as NT”) and a second ultrasound image of a target subject (page 4, first paragraph under 2.3 Acquisition of Deflection Image: “deflection frame (SF)”), the first ultrasound image and the second ultrasound image including a linear interventional device within the target subject (Figure 7; page 1, first paragraph under 1. heading: “the puncture needle or injection needle must be inserted into the patient's body at a specific angle”), the first ultrasound image and the second ultrasound image being captured by emitting different ultrasonic waves toward the target subject (page 2, first paragraph: “In addition to the normal vertical emission, a suitable deflection beam is added and its sound wave emission angle should be roughly perpendicular to the needle surface”), and the second ultrasound image having a better imaging quality with respect to the linear interventional device than the first ultrasound image (page 2, first paragraph: “the needle signal on the deflection frame image is stronger”); identifying one or more line segments in the second ultrasound image by processing the second ultrasound image using a first line detection algorithm (Figure 4 shows the deflection frame SF is processed by Radon transform; pages 4-6, under 3.1 Principle of Radon Transform: “The Radon transform can convert spatial line detection to an angle-distance space” where “In the Radon matrix, the horizontal axis represents the deflection angle of the line, the vertical axis represents the distance between the line and the center, and the brightness of each point represents the accumulation of a line in the image”); for each of the one or more line segments, determining a corrected line segment by processing a target region in the second ultrasound image corresponding to the line segment using a second line detection algorithm (Figure 5; pages 9-11, under 3.3 Radon numerical correction: Radon correction is applied to the line segment region BandL); and generating a fused ultrasound image of the first ultrasound image and the second ultrasound image based on one or more corrected line segments corresponding to the one or more line segments (page 11, first paragraph under 3.4 Composite reinforcement of puncture needle: “After obtaining the final straight line through the above steps, a 15-pixel-wide strip region is extended from the straight line in SF. The pixels in this region are then fused with the normal tissue image NT to achieve the purpose of enhancing the puncture needle”).

Regarding claim 2, Fu discloses the method of claim 1, wherein the first ultrasound image is captured by emitting ultrasonic waves toward the target subject along a first angle with respect to an insertion direction of the linear interventional device (Figure 2; page 2, first paragraph: the normal tissue image is captured with “normal vertical emission”), the second ultrasound image is captured by emitting ultrasonic waves toward the target subject along a second angle with respect to the insertion direction of the linear interventional device, and the second angle is closer to 90 degrees than the first angle (page 2, first paragraph: the deflection frame is captured with “a suitable deflection beam…its sound wave emission angle should be roughly perpendicular to the needle surface”).
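Fu's Radon-transform step, as the examiner characterizes it, converts line detection into voting in an angle-distance space. A minimal numpy sketch of that accumulator idea follows; it is an illustration, not Fu's actual implementation, and the function name, toy image, and one-degree angular resolution are assumptions:

```python
import numpy as np

def detect_line(binary, n_angles=180):
    """Locate the dominant straight line in a binary image by voting in
    (angle, distance) space, as in the Radon/Hough family of detectors.
    Returns (theta_deg, rho) of the strongest line."""
    h, w = binary.shape
    ys, xs = np.nonzero(binary)
    thetas = np.deg2rad(np.arange(n_angles))       # candidate angles, 0..179 deg
    diag = int(np.ceil(np.hypot(h, w)))            # max possible |rho|
    acc = np.zeros((n_angles, 2 * diag + 1), dtype=int)
    for theta_i, t in enumerate(thetas):
        # each foreground pixel votes for the distance of a line at this angle
        rho = np.round(xs * np.cos(t) + ys * np.sin(t)).astype(int) + diag
        np.add.at(acc[theta_i], rho, 1)
    theta_i, rho_i = np.unravel_index(np.argmax(acc), acc.shape)
    return theta_i, rho_i - diag

# toy case: a horizontal line at row 10; every pixel satisfies
# x*cos(90 deg) + y*sin(90 deg) = 10, so the vote peak sits at theta = 90
img = np.zeros((32, 32), dtype=np.uint8)
img[10, :] = 1
theta, rho = detect_line(img)
```

A real pipeline would precede this with the OTSU segmentation and follow it with the Radon correction steps the office action describes; the accumulator only supplies the coarse angle-distance estimate.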
Regarding claim 4, Fu discloses the method of claim 1, wherein the second line detection algorithm includes at least one of a Radon transform algorithm (page 3, first paragraph: “the deflection image undergoes two OTSU segments to remove most interference, followed by Radon transform and Radon correction”) or a Gabor filtering algorithm.

Regarding claim 12, it is the corresponding system configured to execute the method claimed in claim 1. Therefore, Fu discloses the limitations of claim 12 as it does the limitations of claim 1.

Regarding claim 13, it is the corresponding system configured to execute the method claimed in claim 2. Therefore, Fu discloses the limitations of claim 13 as it does the limitations of claim 2.

Regarding claim 20, it is the corresponding non-transitory computer readable medium configured to execute the method claimed in claim 1. Therefore, Fu discloses the limitations of claim 20 as it does the limitations of claim 1.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action: A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

This application currently names joint inventors. In considering patentability of the claims the examiner presumes that the subject matter of the various claims was commonly owned as of the effective filing date of the claimed invention(s) absent any evidence to the contrary.
Applicant is advised of the obligation under 37 CFR 1.56 to point out the inventor and effective filing dates of each claim that was not commonly owned as of the effective filing date of the later invention in order for the examiner to consider the applicability of 35 U.S.C. 102(b)(2)(C) for any potential 35 U.S.C. 102(a)(2) prior art against the later invention.

Claim(s) 3, 5 and 14 are rejected under 35 U.S.C. 103 as being unpatentable over Fu in view of Fan (CN111476790A).

Regarding claim 3, Fu discloses the method of claim 1. However, Fu fails to explicitly disclose the first line detection algorithm includes at least one of a Hough transform algorithm, a region growing algorithm, or a machine learning algorithm. In the related art of ultrasound image enhancement, Fan discloses the first line detection algorithm includes at least one of a Hough transform algorithm (Fan paragraph 40: “Perform Hough line detection on the binarized edge image to obtain the straight line position where the puncture needle is located”), a region growing algorithm, or a machine learning algorithm. It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Fu to incorporate the teachings of Fan to quickly locate the position of the puncture needle (Fan paragraph 0042) with strong stability and robustness (Fan paragraph 0070).

Regarding claim 5, Fu discloses the method of claim 1, wherein the identifying one or more line segments in the second ultrasound image by processing the second ultrasound image using a first line detection algorithm comprises: generating a binary image by performing a binary operation on at least a portion of the second ultrasound image (Fu page 8, first paragraph: “Perform an OTSU operation to segment the highlighted portion, obtaining a binary image ImageBinary”).
However, Fu fails to explicitly disclose the first line detection algorithm is a Hough transform algorithm, and obtaining one or more lines by processing the binary image using the Hough transform algorithm; for each of the one or more lines, determining, in the binary image, a plurality of pixel points that correspond to the linear interventional device and are located on the line; and determining a line segment corresponding to the line based on the plurality of pixel points.

In related art, Fan discloses the first line detection algorithm is a Hough transform algorithm, and obtaining one or more lines by processing the binary image using the Hough transform algorithm (Fan paragraph 40: “Perform Hough line detection on the binarized edge image to obtain the straight line position where the puncture needle is located”); for each of the one or more lines, determining, in the binary image, a plurality of pixel points that correspond to the linear interventional device and are located on the line (Fan paragraph 0073: “Search for edge points in the binarized edge image”); and determining a line segment corresponding to the line based on the plurality of pixel points (Fan paragraph 0073: “connect the points on the straight line found in the previous step to form a straight line segment”). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Fu to incorporate the teachings of Fan to quickly locate the position of the puncture needle (Fan paragraph 0042) with strong stability and robustness (Fan paragraph 0070).

Regarding claim 14, it is the corresponding system configured to execute the method claimed in claim 5. Therefore, Fu, modified by Fan, discloses the limitations of claim 14 as it does the limitations of claim 5.

Claim(s) 6 and 15 are rejected under 35 U.S.C. 103 as being unpatentable over Fu in view of Wei et al. (CN109447067A).
Regarding claim 6, Fu discloses the method of claim 1. However, Fu fails to explicitly disclose for each of the one or more line segments, generating a plurality of rotation images by rotating the target region; for each of the plurality of rotation images, determining a sum of pixel values on each row of the rotation image; determining a target row with a maximum sum among a plurality of rows in the plurality of rotation images and a rotation angle of the rotation image corresponding to the target row; and determining the corrected line segment based on the target row and the rotation angle.

In the related art of Radon transform, Wei discloses generating a plurality of rotation images by rotating the target region (Wei paragraphs 0081, 0089: “Radon transform rotates the image coordinate axis by a corresponding angle…within the range of 0° to 360°”); for each of the plurality of rotation images, determining a sum of pixel values on each row of the rotation image (Wei paragraph 0090: “calculate the sum of the product of the integral intensity of the element and the number of rows of the element in the column direction”); determining a target row with a maximum sum among a plurality of rows in the plurality of rotation images and a rotation angle of the rotation image corresponding to the target row (Wei paragraph 0090: “The rotation angle value corresponding to the maximum value of the sum is the tilt angle β”); and determining the corrected line segment based on the target row and the rotation angle (Wei paragraph 0095: “Rotate the ticket image by -β to correct the ticket image”).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Fu to incorporate the teachings of Wei to achieve orientation detection and correction with the entire range of angles from 0° to 360° and eliminate the interference of complex backgrounds and neighboring pixels of the orientation feature region on orientation detection (Wei paragraph 0094).

Regarding claim 15, it is the corresponding system configured to execute the method claimed in claim 6. Therefore, Fu, modified by Wei, discloses the limitations of claim 15 as it does the limitations of claim 6.

Claim(s) 7 is rejected under 35 U.S.C. 103 as being unpatentable over Fu and Wei in view of Chakraborty et al. (NPL "Detection of the nipple in mammograms with Gabor filters and the Radon transform").

Regarding claim 7, Fu, modified by Wei, discloses the method of claim 6. However, Fu and Wei fail to disclose obtaining a filtered image by filtering the target region using a Gabor filtering algorithm; and generating the plurality of rotation images by rotating the filtered image. In the related art of image processing, Chakraborty discloses obtaining a filtered image by filtering the target region using a Gabor filtering algorithm (Chakraborty page 81, left hand column (LHC), last paragraph: “a bank of 180 real Gabor filters is applied”); and generating the plurality of rotation images by rotating the filtered image (Chakraborty page 81, LHC, last paragraph: “followed by the Radon transform, to detect linear oriented tissue structures present in the given mammographic image”, where applying the Radon transform involves rotating the filtered image, as taught by Wei).
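The rotate-and-sum procedure cited from Wei above (rotate the region over candidate angles, sum pixel values per row, and keep the angle whose rotation produces the largest row sum) can be sketched as follows. This is an illustrative reconstruction only; the nearest-neighbour rotation helper, the toy diagonal image, and the 15-degree angle grid are assumptions chosen for the example, not details taken from Wei:

```python
import numpy as np

def rotate_nn(img, angle_deg):
    """Nearest-neighbour rotation about the image centre (inverse mapping)."""
    h, w = img.shape
    cy, cx = (h - 1) / 2, (w - 1) / 2
    t = np.deg2rad(angle_deg)
    yy, xx = np.mgrid[0:h, 0:w]
    # inverse-rotate each output coordinate back into the source image
    xs = np.round(cx + (xx - cx) * np.cos(t) + (yy - cy) * np.sin(t)).astype(int)
    ys = np.round(cy - (xx - cx) * np.sin(t) + (yy - cy) * np.cos(t)).astype(int)
    out = np.zeros_like(img)
    valid = (xs >= 0) & (xs < w) & (ys >= 0) & (ys < h)
    out[valid] = img[ys[valid], xs[valid]]
    return out

def estimate_tilt(img, angles):
    """Rotate the region over candidate angles, sum pixel values per row,
    and return the angle whose rotation yields the largest row sum."""
    best_angle, best_sum = angles[0], -1.0
    for a in angles:
        row_sums = rotate_nn(img, a).sum(axis=1)
        if row_sums.max() > best_sum:
            best_sum, best_angle = row_sums.max(), a
    return best_angle

# toy region: a bright diagonal line, which aligns with an image row
# after a -45 degree rotation, producing the row-sum peak there
img = np.zeros((21, 21))
for i in range(21):
    img[i, i] = 1.0
tilt = estimate_tilt(img, list(range(-90, 91, 15)))
```

Coarse-to-fine angle grids and interpolated rotation would sharpen the estimate; the single pass here is only meant to show why the maximum row sum identifies the tilt angle.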
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Fu and Wei to incorporate the teachings of Chakraborty to extract oriented patterns with the best compromise between spatial and frequency localization (Chakraborty page 82, right hand column (RHC), first paragraph under 3.2 Application of Gabor filters for the extraction of oriented patterns in mammograms).

Claim(s) 8, 10, 16 and 18 are rejected under 35 U.S.C. 103 as being unpatentable over Fu in view of Wang et al. (US 2014/0187942 A1).

Regarding claim 8, Fu discloses the method of claim 1, wherein the generating a fused ultrasound image of the first ultrasound image and the second ultrasound image based on one or more corrected line segments corresponding to the one or more line segments comprises: for each of the one or more corrected line segments, determining an image window corresponding to the corrected line segment from the second ultrasound image (Fu page 11, first paragraph under 3.4 Composite reinforcement of puncture needle: “After obtaining the final straight line through the above steps, a 15-pixel-wide strip region is extended from the straight line in SF”) based on an angle of the corrected line segment in the second ultrasound image (Fu Figure 4: “Determine the position and deflection angle of the puncture needle”); determining a target weight of the image window; and generating the fused ultrasound image by fusing the first ultrasound image and the image window of each corrected line segment based on the target weight (Fu page 3, first paragraph: “the needle body region is reconstructed and fused with tissue frames with certain weights”). However, Fu fails to disclose the target weight being associated with a probability that the corrected line segment corresponds to the linear interventional device.
In the related art of ultrasound image enhancement, Wang discloses the target weight being associated with a probability that the corrected line segment corresponds to the linear interventional device (Wang paragraph 0086: “the weight is assigned based on the relative weight of the probability for the individual needle candidate in the component. Where the needle has a higher probability in one component frame, that component frame is weighted more heavily than other component frames”). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Fu to incorporate the teachings of Wang to improve the needle detection accuracy by utilizing needle detections from multiple ultrasound images (Wang paragraph 0019).

Regarding claim 10, Fu, modified by Wang, discloses the method of claim 8, wherein the generating the fused ultrasound image by fusing the first ultrasound image and the image window of each corrected line segment based on the target weight comprises: for each corrected line segment, determining a sub-weight of each pixel point in the image window corresponding to the corrected line segment based on the target weight of the image window (Wang Equation 11, paragraphs 0049, 0088-0089: “the intensities or scalar values of the data along the selected segment or segments are increased…the increase is by a value that is an adaptive function of a magnitude and/or orientation of a response to the steerable filtering” in which the weight for each pixel is dependent on Mag(p) and Ori(p)); determining a weighted image window by processing each pixel point in the image window based on the sub-weight of each pixel point (Wang Equation 11: image I(p)); and generating the fused ultrasound image by fusing the first ultrasound image and the weighted image window of each corrected line segment (Wang Equation 9: ultrasound frame Ic(x)).
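In its simplest reading, the weighted fusion the examiner maps between Fu's 15-pixel strip and Wang's probability weights amounts to blending the needle window into the tissue frame with a scalar weight. A toy sketch under that assumption; the array shapes, the single-row strip standing in for the 15-pixel band, and the 0.8 weight are invented for illustration:

```python
import numpy as np

def fuse_needle(tissue, deflection, strip_mask, weight):
    """Blend the needle strip from the deflection frame into the tissue
    frame: inside the strip the two frames are mixed by `weight`,
    outside it the tissue frame is kept unchanged."""
    fused = tissue.astype(float).copy()
    fused[strip_mask] = ((1.0 - weight) * tissue[strip_mask]
                         + weight * deflection[strip_mask])
    return fused

# toy frames: dark tissue, uniformly bright deflection frame,
# with a one-row strip marking the detected needle region
tissue = np.zeros((6, 6))
deflection = np.full((6, 6), 100.0)
strip = np.zeros((6, 6), dtype=bool)
strip[2, :] = True
fused = fuse_needle(tissue, deflection, strip, weight=0.8)
```

Wang's per-pixel sub-weights would replace the scalar `weight` with a map derived from the needle probability at each pixel; the masking structure stays the same.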
Regarding claim 16, it is the corresponding system configured to execute the method claimed in claim 8. Therefore, Fu, modified by Wang, discloses the limitations of claim 16 as it does the limitations of claim 8.

Regarding claim 18, it is the corresponding system configured to execute the method claimed in claim 10. Therefore, Fu, modified by Wang, discloses the limitations of claim 18 as it does the limitations of claim 10.

Claim(s) 9 and 17 are rejected under 35 U.S.C. 103 as being unpatentable over Fu and Wang in view of Xu et al. (CN102920476A).

Regarding claim 9, Fu, modified by Wang, discloses the method of claim 8, wherein the determining a target weight of the image window comprises: obtaining an initial weight of the image window (Fu page 3, first paragraph: “the needle body region is reconstructed and fused with tissue frames with certain weights”). However, Fu fails to disclose generating a plurality of vertical lines on the corrected line segment corresponding to the image window; determining whether pixel values of a plurality of pixel points on each of the plurality of vertical lines satisfy a first preset condition; in response to determining that the first preset condition is satisfied, determining the target weight of the image window based on the initial weight of the image window and a preset coefficient.
In the related art of ultrasound image enhancement, Xu discloses generating a plurality of vertical lines on the corrected line segment corresponding to the image window (Xu Figure 13: vertical dotted lines); determining whether pixel values of a plurality of pixel points on each of the plurality of vertical lines satisfy a first preset condition (Xu paragraph 0120: “the composite region is projected along the vertical direction (the vertical dotted line direction shown in Figure 13)”); in response to determining that the first preset condition is satisfied, determining the target weight of the image window based on the initial weight of the image window and a preset coefficient (Xu paragraphs 0088, 0120: “the maximum gray value is obtained. The maximum gray value is mapped to a weight using the aforementioned formula (2)” where yhigh and ylow are the maximum and minimum values of the preset mapping value y, respectively). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have further modified Fu to incorporate the teachings of Xu to improve overlaying the images via weighting using a combination of composite reference values (Xu paragraph 0126).

Regarding claim 17, it is the corresponding system configured to execute the method claimed in claim 9. Therefore, Fu, modified by Wang and Xu, discloses the limitations of claim 17 as it does the limitations of claim 9.

Claim(s) 11 and 19 are rejected under 35 U.S.C. 103 as being unpatentable over Fu in view of Patton et al. (US 2020/0178927 A1).

Regarding claim 11, Fu discloses the method of claim 1.
However, Fu fails to explicitly disclose obtaining a third ultrasound image of the linear interventional device captured after the first ultrasound image and the second ultrasound image; determining whether the third ultrasound image satisfies a second preset condition; in response to determining that the third ultrasound image satisfies the second preset condition, generating a second fused image of the third ultrasound image and the second ultrasound image based on the corrected line segment corresponding to each of the one or more line segments; or in response to determining that the third ultrasound image does not satisfy the second preset condition, obtaining a fourth ultrasound image having a better imaging quality with respect to the linear interventional device than the third ultrasound image for generating the second fused image.

In the related art of ultrasound image enhancement, Patton discloses obtaining a third ultrasound image of the linear interventional device captured after the first ultrasound image and the second ultrasound image (Patton paragraph 0046: “the ultrasound imaging system may determine that additional images are necessary to capture the interventional instrument. The ultrasound imaging system may determine a new steer angle for the additional images that may better depict the interventional instrument compared to the second set images”); determining whether the third ultrasound image satisfies a second preset condition (Patton paragraph 0039: “the ultrasound imaging system then determine which of needle frames 540, 541, and 542 are appropriate for determining the linear structure. In particular embodiments, the ultrasound imaging system determines which of the needle frames most prominently shows the presence of an interventional instrument by choosing the frame with the highest linear structure score”); in response to determining that the third ultrasound image satisfies the second preset condition, generating a second fused image of the third ultrasound image and the second ultrasound image based on the corrected line segment corresponding to each of the one or more line segments (Patton paragraph 0039: “the ultrasound imaging system may merge the composite tissue frame 530 with the enhanced linear structure 560 to generate a blended image 580, which depicts the interventional instrument and its relative position to the imaged anatomical structures”); or in response to determining that the third ultrasound image does not satisfy the second preset condition, obtaining a fourth ultrasound image having a better imaging quality with respect to the linear interventional device than the third ultrasound image for generating the second fused image (this limitation is recited in the alternative and thus only the first alternative limitation needs to be addressed).

It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Fu to incorporate the teachings of Patton to improve the frame rate of the ultrasound imaging by only acquiring additional needle visualization frames when necessary (Patton paragraph 0003).

Regarding claim 19, it is the corresponding system configured to execute the method claimed in claim 11. Therefore, Fu, modified by Patton, discloses the limitations of claim 19 as it does the limitations of claim 11.

Conclusion

The prior art made of record and not relied upon is considered pertinent to applicant's disclosure. Cheung et al.
(NPL “Enhancement of needle visibility in ultrasound-guided percutaneous procedures”) discloses a needle enhancement algorithm by fusing the brightened needle in a steered image with an original image to produce an improved image.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to CHRISTINE ZHAO whose telephone number is (703)756-5986. The examiner can normally be reached Monday - Friday 9:00am - 5:00pm EST.

Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Andrew Bee, can be reached at (571)270-5183. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/C.Z./Examiner, Art Unit 2677
/ANDREW W BEE/Supervisory Patent Examiner, Art Unit 2677

Prosecution Timeline

Sep 07, 2023
Application Filed
Jan 10, 2026
Non-Final Rejection — §102, §103
Apr 07, 2026
Applicant Interview (Telephonic)
Apr 10, 2026
Examiner Interview Summary

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12536695
TRENCH PROFILE DETERMINATION BY MOTION
Granted Jan 27, 2026 (2y 5m to grant)

Patent 12524883
Systems and Methods for Assessing Cell Growth Rates
Granted Jan 13, 2026 (2y 5m to grant)

Patent 12518391
SYSTEM AND METHOD FOR IMPROVING IMAGE SEGMENTATION
Granted Jan 06, 2026 (2y 5m to grant)

Patent 12511900
System and Method for Impact Detection and Analysis
Granted Dec 30, 2025 (2y 5m to grant)

Patent 12493946
APPARATUS AND METHOD FOR VERIFYING OPTICAL FIBER WORK USING ARTIFICIAL INTELLIGENCE
Granted Dec 09, 2025 (2y 5m to grant)
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 1-2
Grant Probability: 61%
With Interview: 99% (+58.3%)
Median Time to Grant: 3y 0m
PTA Risk: Low
Based on 18 resolved cases by this examiner. Grant probability derived from career allow rate.
