Prosecution Insights
Last updated: April 19, 2026
Application No. 18/280,369

CAPILLARY ANALYSIS

Non-Final OA — §102, §103

Filed: Sep 05, 2023
Examiner: TSAI, TSUNG YIN
Art Unit: 2656
Tech Center: 2600 — Communications
Assignee: Odi Medical AS
OA Round: 1 (Non-Final)

Grant Probability: 82% (Favorable)
Expected OA Rounds: 1-2
Time to Grant: 2y 11m
With Interview: 93%

Examiner Intelligence

Grants 82% of resolved cases — above average.

Career Allow Rate: 82% (804 granted / 984 resolved; +19.7% vs TC avg)
Interview Lift: +10.9% on resolved cases with an interview
Typical Timeline: 2y 11m average prosecution; 31 applications currently pending
Career History: 1,015 total applications across all art units

Statute-Specific Performance

§101: 3.6% (-36.4% vs TC avg)
§103: 58.5% (+18.5% vs TC avg)
§102: 22.8% (-17.2% vs TC avg)
§112: 4.3% (-35.7% vs TC avg)

Tech Center averages are estimates. Based on career data from 984 resolved cases.

Office Action

DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first-inventor-to-file provisions of the AIA.

Status of claims: claims 1, 3-4, 6-8, 10-19, 22, 24-25 and 27 are examined below.

Information Disclosure Statement

The information disclosure statement (IDS) submitted on 9/26/2023 was filed and considered. The submission is in compliance with the provisions of 37 CFR 1.97. Accordingly, the information disclosure statement is being considered by the examiner.

Claim Rejections - 35 USC § 102

The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:

A person shall be entitled to a patent unless – (a)(2) the claimed invention was described in a patent issued under section 151, or in an application for patent published or deemed published under section 122(b), in which the patent or application, as the case may be, names another inventor and was effectively filed before the effective filing date of the claimed invention.

Claims 1, 3, 17, 24, 25 and 27 are rejected under 35 U.S.C. 102(a)(2) as being anticipated by Wang et al. (US 2019/0362494).
Claim 1: Wang et al. (US 2019/0362494) anticipates the following subject matter: An automated method for analysing capillaries in a plurality of images acquired from a subject, the method comprising the following steps:

a) acquiring the plurality of images (0009 teaches a sequence of image patches (plurality of images) along a blood vessel/path (region of interest in images));

b) generating a plurality of capillary candidate maps for each of said images, each capillary candidate map comprising one or more regions of interest for each of said images, wherein for each image, each of the respective capillary candidate maps is generated by comparing said image to a different criterion (0009 teaches, based on blood vessel condition parameters (different criteria), prediction of a blood vessel path (candidate map) using a neural network (deep learning method and recursive neural network); 0003 and step S104 teach an end-to-end mapping relation over sequences of image patches for the blood vessel path, comparing fixed feature extraction and prediction for a more accurate blood condition parameter on the blood vessel path; 0041 teaches further mapping sequences of images by means of geometric features on the blood vessel path; 0047 teaches further detail of the maps);

c) combining said capillary candidate maps to generate a combined capillary candidate map (0060 teaches mapping blood vessel segments in different branches by overlapping (combining) paths (maps) to obtain multiple blood vessel paths);

d) using a first neural network to determine a respective location of one or more detected capillaries in said combined capillary candidate map (0038-0042, specifically 0038 and Figure 2, teach use of a convolutional neural network for connecting patches of the blood vessel path (map));

e) using a second neural network to determine an optical flow of said detected capillaries (0038-0042 and Figures 2-3, specifically 0041-0042, teach use of a deep learning network for a vector (optical flow) of the blood vessel path regarding direction); and

f) extracting one or more capillary parameters using said detected capillaries and/or said determined flow (0003 teaches that such extracted data would display the condition of blood flow, such as flow reserve (FFR), including but not limited to blood pressure, blood flow and the like in the blood vessel, effectively aiding a doctor in performing cardiovascular diagnosis).

Claim 3: The method as claimed in claim 1, wherein the plurality of images form a video (0029 teaches a data set comprising a sequence of images (a video is a sequence of images); 0048 teaches image data such as video).

Claim 17: The method as claimed in claim 1, wherein the first neural network comprises a convolutional neural network (0038-0042, specifically 0038 and Figure 2, teach use of a convolutional neural network for connecting patches of the blood vessel path (map)), and wherein the second neural network comprises a deep neural network (0038-0042 and Figures 2-3, specifically 0041-0042, teach use of a deep learning network for a vector (optical flow) of the blood vessel path regarding direction).

Claim 24: The method as claimed in claim 1, wherein the parameter comprises one or more of the group comprising: a) functional capillary density (number of capillaries per square millimetre); b) mean capillary distance - the average distance of nearest-neighbour pairs of capillaries; c) capillary flow velocity (CFV) - either quantified on an ordinal scale or by a velocity (e.g. millimetres per second); d) the size of each capillary; e) the colour density of each capillary, which is related to the level of oxygenation of the red blood cells; and/or f) the blood area or blood volume - the area or estimated volume occupied by the capillaries in relation to the total area or volume (0023, 0029 and 0059 teach blood flow velocity; 0032 teaches individual blood vessels and centre-line size; 0032 teaches a 3D image patch or estimated area/volume).
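Two of the claim 24 parameters have simple geometric definitions as stated in the claim itself. As a minimal illustrative sketch (not taken from the application — the point-coordinate input, field area, and example values are assumptions), functional capillary density and mean nearest-neighbour capillary distance could be computed from detected capillary centres as:

```python
import numpy as np

def capillary_parameters(centres_mm, field_area_mm2):
    """Sketch of two claim-24 parameters: functional capillary density
    (capillaries per mm^2) and mean capillary distance (average
    nearest-neighbour spacing between detected capillary centres)."""
    pts = np.asarray(centres_mm, dtype=float)
    density = len(pts) / field_area_mm2
    # Pairwise Euclidean distances; mask each point's zero self-distance.
    d = np.sqrt(((pts[:, None, :] - pts[None, :, :]) ** 2).sum(axis=-1))
    np.fill_diagonal(d, np.inf)
    mean_distance = d.min(axis=1).mean()
    return density, mean_distance

# Hypothetical detections: four capillary centres (in mm) in a 1 mm^2 field.
density, spacing = capillary_parameters(
    [(0.1, 0.1), (0.1, 0.9), (0.9, 0.1), (0.9, 0.9)], field_area_mm2=1.0)
# density -> 4.0 capillaries/mm^2, spacing -> 0.8 mm
```

The same centre list would feed the other listed parameters (size, colour density, blood area) given per-capillary segmentation masks rather than points.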
Claim 25: Wang et al. (US 2019/0362494) teaches the following subject matter: A device arranged to carry out automated analysis of capillaries in a plurality of images acquired from a subject, the device comprising: an image acquisition module arranged to acquire the plurality of images (0009 teaches a sequence of image patches (plurality of images) along a blood vessel/path (region of interest in images)); and a processing module arranged to (Figures 1A and 1B and 0024 teach use of a processor):

generate a plurality of capillary candidate maps for each of said images, each capillary candidate map comprising one or more regions of interest for each of said images, wherein for each image, each of the respective capillary candidate maps is generated by the processing module by comparing said image to a different criterion (0009 teaches, based on blood vessel condition parameters (different criteria), prediction of a blood vessel path (candidate map) using a neural network (deep learning method and recursive neural network); 0003 and step S104 teach an end-to-end mapping relation over sequences of image patches for the blood vessel path, comparing fixed feature extraction and prediction for a more accurate blood condition parameter on the blood vessel path; 0041 teaches further mapping sequences of images by means of geometric features on the blood vessel path; 0047 teaches further detail of the maps);

combine said capillary candidate maps to generate a combined capillary candidate map (0060 teaches mapping blood vessel segments in different branches by overlapping (combining) paths (maps) to obtain multiple blood vessel paths);

use a first neural network to determine a respective location of one or more detected capillaries in said combined capillary candidate map (0038-0042, specifically 0038 and Figure 2, teach use of a convolutional neural network for connecting patches of the blood vessel path (map));

use a second neural network to determine an optical flow of said detected capillaries (0038-0042 and Figures 2-3, specifically 0041-0042, teach use of a deep learning network for a vector (optical flow) of the blood vessel path regarding direction); and

extract one or more capillary parameters using said detected capillaries and/or said determined flow (0003 teaches that such extracted data would display the condition of blood flow, such as flow reserve (FFR), including but not limited to blood pressure, blood flow and the like in the blood vessel, effectively aiding a doctor in performing cardiovascular diagnosis).

Claim 27: A non-transitory computer-readable medium comprising instructions that, when executed by a processor, cause the processor to carry out the method of claim 1 (0010 teaches use of a non-transitory computer-readable medium).

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103, which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 4 and 22 are rejected under 35 U.S.C. 103 as being unpatentable over Wang et al. (US 2019/0362494) in view of Umezawa (US 2017/0055842).
Claim 4: Wang et al. teaches all of the subject matter above; the following is taught by Umezawa: The method as claimed in claim 1, wherein the plurality of images comprise microscopy images and wherein the step of acquiring the images comprises using a microscope probe to generate said images (0044 teaches use of photoacoustic microscope probe 30 to generate and display an image on the display, where 0081 details the scanning by means of position and light probe as applied to image processing, such as the blood vessel detail in 0093).

Wang et al. and Umezawa are both in the field of image analysis, especially imaging of blood vessels as the object/region of interest (Umezawa, Figure 3 and 0056-0058), to assess the condition/diagnosis of the object (Umezawa, 0004), such that the combined outcome is predictable. Therefore it would have been obvious to one having ordinary skill in the art before the effective filing date to modify Wang et al. with Umezawa, where the use of a microscope probe improves image accuracy as disclosed in 0079-0081.

Claim 22: Wang et al. teaches all of the subject matter above; the following is taught by Umezawa: The method as claimed in claim 1, further comprising performing quality analysis on one or more of the plurality of images to determine whether said images meet a quality threshold (0007 teaches that image characteristics such as noise and reconstruction artifacts are calculated and considered for image quality).

Wang et al. and Umezawa are both in the field of image analysis, especially imaging of blood vessels as the object/region of interest (Umezawa, Figure 3 and 0056-0058), to assess the condition/diagnosis of the object (Umezawa, 0004), such that the combined outcome is predictable. Therefore it would have been obvious to one having ordinary skill in the art before the effective filing date to modify Wang et al. with Umezawa such that the image quality of the living-organism image elements is calculated, as disclosed by Umezawa in 0007.

Claims 6 and 7 are rejected under 35 U.S.C. 103 as being unpatentable over Wang et al. (US 2019/0362494) in view of ZENG (US 2011/0081057).

Claim 6: Wang et al. teaches all of the subject matter above; the following is taught by ZENG: The method as claimed in claim 1, further comprising carrying out one or more of: a) modifying a colour balance of one or more of said images; b) modifying a white balance of one or more of said images; c) modifying a light level of one or more of said images; d) modifying a gamma level of one or more of said images; e) modifying a red-green-blue (RGB) curve of one or more of said images; f) applying a sharpening filter to one or more of said images; and/or g) applying a noise reduction process to one or more of said images (0003 details colour considerations for a series of images; 0041 teaches compensation with noise reduction regarding different image structures of blood vessels).

Wang et al. and ZENG are both in the field of image analysis, especially imaging of blood vessels for quantitative analysis of blockage (condition), such that the combined outcome is predictable. Therefore it would have been obvious to one having ordinary skill in the art before the effective filing date to modify Wang et al. with ZENG, where the listed parameters are considered to better assess the condition/diagnosis of the blood vessel, as disclosed by ZENG in 0003.

Claim 7: Wang et al. teaches all of the subject matter above; the following is taught by ZENG: The method as claimed in claim 1, further comprising carrying out a motion compensation process (0046 teaches motion compensation or structure in a tracked sequence of images).
Wang et al. and ZENG are both in the field of image analysis, especially imaging of blood vessels for quantitative analysis of blockage (condition), such that the combined outcome is predictable. Therefore it would have been obvious to one having ordinary skill in the art before the effective filing date to modify Wang et al. with ZENG, where the listed parameters are considered to better assess the condition/diagnosis of the blood vessel, as disclosed by ZENG in 0003.

Claim 15 is rejected under 35 U.S.C. 103 as being unpatentable over Wang et al. (US 2019/0362494) in view of TYAN et al. (US 2022/0125280).

Claim 15: Wang et al. teaches all of the subject matter above; the following is taught by TYAN et al.: The method as claimed in claim 1, wherein the plurality of capillary candidate maps are processed using a non-max suppression process to replace overlapping regions of interest (paragraph 0152 uses non-max suppression on positive anchors for an object of interest and refines the location and size overlapping the highest foreground (region of interest); this method is used to identify soft tissue such as blood vessels, as detailed in 0166).

Wang et al. and TYAN et al. are both in the field of image analysis, especially imaging of blood vessels for quantitative analysis of blockage (condition), such that the combined outcome is predictable. Therefore it would have been obvious to one having ordinary skill in the art before the effective filing date to modify Wang et al. with TYAN et al., as the use of non-max suppression outputs a higher confidence score for containing the object of interest, with refined location and size, as disclosed by TYAN et al. in 0152.

Claim 16 is rejected under 35 U.S.C. 103 as being unpatentable over Wang et al. (US 2019/0362494) in view of Xu et al. (US 2012/0213423).

Claim 16: Wang et al. teaches all of the subject matter above; the following is taught by Xu et al.: The method as claimed in claim 1, further comprising generating a validated training data set by manually labelling a plurality of capillaries in a plurality of images and supplying said validated training data set to the first neural network during a training phase (0032 teaches manually labelled vessel references for training, where the Abstract details this use for blood vessel segmentation for accurate vessel pattern analysis, further employing machine learning).

Wang et al. and Xu et al. are both in the field of image analysis, especially imaging of blood vessels for quantitative analysis of condition, such that the combined outcome is predictable. Therefore it would have been obvious to one having ordinary skill in the art before the effective filing date to modify Wang et al. with Xu et al. such that training with machine learning can identify blood vessels for early diagnosis and monitoring of the progression of glaucoma and other retinal diseases.

Claim 18 is rejected under 35 U.S.C. 103 as being unpatentable over Wang et al. (US 2019/0362494) in view of Ma et al. (US 2019/0159683).

Claim 18: Wang et al. teaches all of the subject matter above; the following is taught by Ma et al.: The method as claimed in claim 1, wherein the step of determining the optical flow of the detected capillaries comprises applying a Gunnar Farneback algorithm to the detected capillaries prior to use of the second neural network (Figure 2, step 204, and 0030 teach using Gunnar Farneback to provide tracking accuracy and speed for a region of interest (ROI) such as a blood vessel).

Wang et al. and Ma et al. are both in the field of image analysis, especially imaging of blood vessels for quantitative analysis of condition, such that the combined outcome is predictable.
Therefore it would have been obvious to one having ordinary skill in the art before the effective filing date to modify Wang et al. with Ma et al. regarding the use of Gunnar Farneback for tracking the target blood vessel across the plurality of angiography images: given the relatively high contrast of the vessel, a tracked envelope boundary is determined to keep the vessel of interest, as disclosed by Ma et al. in 0030. One of ordinary skill in the art would use such a method with or without, or before or after, a neural network, since the use of a neural network is computing- and energy-intensive.

Claim 19 is rejected under 35 U.S.C. 103 as being unpatentable over Wang et al. (US 2019/0362494) in view of Beckers et al. (US 2016/0338613).

Claim 19: The method as claimed in claim 1, wherein a respective velocity vector value for each detected capillary is compared to a velocity vector value threshold and wherein only capillaries having a velocity vector value above the velocity vector value threshold are passed to the second neural network (Figure 8, step 806, and 0221 teach that velocity vectors above a threshold are placed in an additional bin for further analysis, where 0199 and 0193 detail how a neural network performs the refinement (i.e. they are passed on to a neural network)).

Wang et al. and Beckers et al. are both in the field of image analysis, especially imaging of blood vessels for quantitative analysis of condition, such that the combined outcome is predictable. Therefore it would have been obvious to one having ordinary skill in the art before the effective filing date to modify Wang et al. with Beckers et al. regarding velocity vectors for assessment of human venous blood flow, as disclosed by Beckers et al. in 0031.

Allowable Subject Matter

Claim 8, and its dependent claims 10-14, are objected to as being dependent upon a rejected base claim, but would be allowable if rewritten in independent form including all of the limitations of the base claim and any intervening claims. At the time of examination, the examiner was unable to find in the prior art the claim limitations, integrating elements, concept or language recited in claim 8 as a whole.

Conclusion

The prior art made of record and not relied upon is considered pertinent to applicant's disclosure. Al-Kofahi et al. (US 2017/0109880), SYSTEM AND METHOD FOR BLOOD VESSEL ANALYSIS AND QUANTIFICATION IN HIGHLY MULTIPLEXED FLUORESCENCE IMAGING: the abstract teaches image data corresponding to a multi-channel multiplexed image of fluorescently stained biological tissue manifesting expression levels of a primary marker and at least one auxiliary marker of blood vasculature, and extracting features of blood vessels using the primary marker as an input to create a single-channel segmentation of the blood vessels.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to TSUNG-YIN TSAI, whose telephone number is (571) 270-1671. The examiner can normally be reached 7am-4pm. Examiner interviews are available via telephone, in person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Bhavesh Mehta, can be reached at (571) 272-7453. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format.
For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000. /TSUNG YIN TSAI/Primary Examiner, Art Unit 2656

Prosecution Timeline

Sep 05, 2023
Application Filed
Oct 08, 2025
Non-Final Rejection — §102, §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12597118 — IMAGE INSPECTION APPARATUS, IMAGE INSPECTION METHOD, AND IMAGE INSPECTION PROGRAM (granted Apr 07, 2026; 2y 5m to grant)
Patent 12597237 — INFERENCE LEARNING DEVICE AND INFERENCE LEARNING METHOD (granted Apr 07, 2026; 2y 5m to grant)
Patent 12579797 — VIDEO PROCESSING METHOD AND APPARATUS (granted Mar 17, 2026; 2y 5m to grant)
Patent 12573029 — IMAGE ANNOTATION USING ONE OR MORE NEURAL NETWORKS (granted Mar 10, 2026; 2y 5m to grant)
Patent 12567235 — Visual Explanation of Classification (granted Mar 03, 2026; 2y 5m to grant)
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 1-2
Grant Probability: 82%
With Interview: 93% (+10.9%)
Median Time to Grant: 2y 11m
PTA Risk: Low

Based on 984 resolved cases by this examiner. Grant probability derived from career allow rate.
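The headline projections appear to follow directly from the career record shown above. A sketch of the apparent arithmetic (an assumption about how the figures are derived; the product's actual model may weight cases differently):

```python
# Career record shown in the Examiner Intelligence panel.
granted, resolved = 804, 984
allow_rate = granted / resolved                # ~0.817, displayed as 82%

# Interview lift is reported as +10.9 percentage points.
interview_lift = 0.109
with_interview = allow_rate + interview_lift   # ~0.926, displayed as 93%

print(round(allow_rate * 100), round(with_interview * 100))
```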
