Prosecution Insights
Last updated: April 19, 2026
Application No. 18/631,562

VISUALIZATION OF DEPTH AND POSITION OF BLOOD VESSELS AND ROBOT GUIDED VISUALIZATION OF BLOOD VESSEL CROSS SECTION

Non-Final OA — §103, §112

Filed: Apr 10, 2024
Examiner: SURGAN, ALEXANDRA L
Art Unit: 3799
Tech Center: 3700 — Mechanical Engineering & Manufacturing
Assignee: Koninklijke Philips N.V.
OA Round: 5 (Non-Final)

Grant Probability: 47% (Moderate)
Expected OA Rounds: 5-6
Time to Grant: 4y 2m
With Interview: 74%

Examiner Intelligence

Career Allow Rate: 47% (229 granted / 490 resolved; -23.3% vs TC avg)
Interview Lift: +27.5% (strong; resolved cases with interview)
Typical Timeline: 4y 2m avg prosecution (43 currently pending)
Career History: 533 total applications across all art units
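The headline figures above can be reproduced from the underlying counts. A minimal sketch, assuming the dashboard rounds to whole percentages and that the interview lift is an additive percentage-point adjustment (both are assumptions; the tool may compute these differently):

```python
# Career allow rate from the reported grant counts.
granted, resolved = 229, 490
career_allow_rate = granted / resolved               # ~0.467, displayed as 47%

# Interview-adjusted grant probability: treat the +27.5% lift as
# percentage points added to the base rate (assumption).
interview_lift = 0.275
with_interview = career_allow_rate + interview_lift  # ~0.742, displayed as 74%

print(f"{career_allow_rate:.1%} base -> {with_interview:.1%} with interview")
```

The additive reading matches the displayed numbers: 47% + 27.5 points rounds to the 74% shown.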

Statute-Specific Performance

§101: 1.1% (-38.9% vs TC avg)
§103: 56.2% (+16.2% vs TC avg)
§102: 20.7% (-19.3% vs TC avg)
§112: 20.4% (-19.6% vs TC avg)
Tech Center averages are estimates • Based on career data from 490 resolved cases
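The per-statute deltas are internally consistent: subtracting each stated delta from the examiner's rate implies the same Tech Center baseline for every statute. A quick sanity check (variable names are mine, not the tool's):

```python
# Examiner's statute-specific rates (%) and reported deltas vs TC average.
examiner_rate = {"101": 1.1, "103": 56.2, "102": 20.7, "112": 20.4}
delta_vs_tc   = {"101": -38.9, "103": 16.2, "102": -19.3, "112": -19.6}

# Implied TC average per statute: examiner rate minus delta.
implied_tc_avg = {s: round(examiner_rate[s] - delta_vs_tc[s], 1)
                  for s in examiner_rate}
print(implied_tc_avg)  # each statute implies the same 40.0% TC baseline
```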

Office Action

§103, §112
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Continued Examination Under 37 CFR 1.114

A request for continued examination under 37 CFR 1.114, including the fee set forth in 37 CFR 1.17(e), was filed in this application after final rejection. Since this application is eligible for continued examination under 37 CFR 1.114, and the fee set forth in 37 CFR 1.17(e) has been timely paid, the finality of the previous Office action has been withdrawn pursuant to 37 CFR 1.114. Applicant's submission filed on 02/19/2026 has been entered.

Status of Claims

Applicant's amendments filed 10/09/2025 have been entered. Claims 1-7 and 9-20 are pending and currently under consideration for patentability under 37 CFR 1.104.

Claim Rejections - 35 USC § 112

The following is a quotation of the first paragraph of 35 U.S.C. 112(a):

(a) IN GENERAL.—The specification shall contain a written description of the invention, and of the manner and process of making and using it, in such full, clear, concise, and exact terms as to enable any person skilled in the art to which it pertains, or with which it is most nearly connected, to make and use the same, and shall set forth the best mode contemplated by the inventor or joint inventor of carrying out the invention.

The following is a quotation of the first paragraph of pre-AIA 35 U.S.C. 112:

The specification shall contain a written description of the invention, and of the manner and process of making and using it, in such full, clear, concise, and exact terms as to enable any person skilled in the art to which it pertains, or with which it is most nearly connected, to make and use the same, and shall set forth the best mode contemplated by the inventor of carrying out his invention.

Claims 1, 14, 17, and all dependent claims thereof are rejected under 35 U.S.C. 112(a) or 35 U.S.C. 112 (pre-AIA), first paragraph, as failing to comply with the written description requirement. The claims contain subject matter which was not described in the specification in such a way as to reasonably convey to one skilled in the relevant art that the inventor or a joint inventor, or for applications subject to pre-AIA 35 U.S.C. 112, the inventor(s), at the time the application was filed, had possession of the claimed invention.

Applicant has amended independent claims 1, 14, and 17 to each recite, "generate a virtual image which is registered to the real-time images, which indicates the at least one structure located below the surface of the anatomical target based on the 3D model, and which shows an internal view of the at least one structure located below the surface of the anatomical target based on the 3D model." It is noted Applicant did not indicate where the new language is supported in the specification. After a cursory search, it appears the language constitutes new matter.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1-3, 7, 8, 11, 14, and 17 are rejected under 35 U.S.C. 103 as being unpatentable over Manzke et al. (WO 2012/172474) in view of Higgins et al. (U.S. 2009/0156895) and Lee et al. (U.S. 2015/0141814). Higgins et al. incorporates Method for Continuous Guidance of Endoscopy at paragraphs [0165], [0179].

With respect to claim 1, Manzke et al. teaches a system for visualizing an anatomical target, the system comprising: an imaging device (120) configured to collect real-time images of the anatomical target; a three-dimensional model (111) generated from pre-operative images or intra-operative images of at least one structure located below a surface of the anatomical target, such that the at least one structure is not visible in the real-time images from the imaging device (page 7, lines 7-16; page 8, lines 3-6); and an image processing module configured to generate an overlay from the 3D model registered to the real-time images, wherein the overlay indicates the at least one structure in the anatomical target (page 8, lines 7-15; page 10, line 18-page 11, line 17; FIG. 9). However, Manzke et al. does not explicitly teach indicating a depth of the structures below the surface. Manzke et al. further does not teach generating a virtual image to show an internal view of the at least one blood vessel.

With respect to claim 1, Higgins et al. teaches a system for visualizing an anatomical target, the system comprising: an image processor (para [0132]) configured to compute a depth of at least one structure relative to the anatomical target using at least a 3D model and to generate an overlay from the 3D model registered to real-time images (para [0115] of Higgins et al.; see also FIG. 1 of Method for Continuous Guidance of Endoscopy), wherein the overlay indicates the at least one structure located below the surface of the anatomical target and indicates the depth of the at least one structure below the surface of the anatomical target (see paragraphs [0113]-[0118] of Higgins et al.; see also FIG. 2 of Method for Continuous Guidance of Endoscopy).

With respect to claim 1, Lee et al. teaches a system for visualizing internal anatomy of an anatomical target configured to generate a virtual image (see para [0095], [0161], [0164]-[0169], [0178]-[0180]) showing an internal view of the at least one structure located below the surface of the anatomical region (FIG. 8B); and a display device (130) configured to display the virtual image.

Therefore, it would have been prima facie obvious to one of ordinary skill in the art at the time of the invention to utilize the depth indication of Higgins et al. in the system of Manzke et al. in order to provide the user with additional cues that convey obstacle locations and ROI depths of sample so that the physician can freely navigate in the virtual world, perceiving the depth of sample and possible obstacle locations at any pose orientation (para [0115] of Higgins et al.). Further, it would have been prima facie obvious to one of ordinary skill in the art at the time of the effective filing date to modify Manzke et al. to include the visualization showing an internal view of the blood vessel as taught by Lee et al. in order to provide a means of accurately measuring the degree of stenosis in a body lumen (para [0003] of Lee et al.).

With respect to claim 2, Higgins et al. teaches the image processor is further configured to indicate the depth of the at least one structure located below the surface of the anatomical target by one of a color, texture, or size of the structure rendered (para [0114]).

With respect to claim 3, Higgins et al. teaches the image processor is further configured to indicate the depth of the at least one structure located below the surface of the anatomical target by a color gradient where color intensity is proportional to depth (para [0114]).

With respect to claim 7, Manzke et al. teaches an image guidance module configured to robotically guide the imaging device along a path corresponding to the structure below the surface of the anatomical target (FIG. 10).
With respect to claim 9, Lee et al. teaches the image processor is further configured to generate at least one of (i) a virtual 3D fly-through image of the at least one structure located below the surface of the anatomical region and (ii) a virtual 3D cross-section image of the at least one structure located below the surface of the anatomical region; and the display device is further configured to display (i) the virtual 3D fly-through image of the at least one structure located below the surface of the anatomical region and (ii) the virtual 3D cross-section image of the at least one structure located below the surface of the anatomical region (FIG. 8B).

With respect to claim 10, Lee et al. teaches the image processor is further configured to: receive a selection of a point on the at least one structure located below the surface of the anatomical region, and generate an internal view of the at least one structure at the selected point based on the 3D model, wherein the internal view includes at least one of (i) a virtual 3D fly-through image of the at least one structure at the selected point and (ii) a virtual 3D cross-section image of the at least one structure at the selected point; and the display device is further configured to display the internal view of the at least one structure (FIG. 8B).

With respect to claim 11, Higgins et al. teaches the at least one structure is at least one blood vessel (aorta, para [0117]).

With respect to claim 14, Manzke et al. teaches a method for visualizing an anatomical target, the method comprising: collecting, by an imaging device (120), real-time images of the anatomical target; providing a three-dimensional model (111) generated from pre-operative images or intra-operative images of at least one structure located below a surface of the anatomical target, such that the at least one structure is not visible in the real-time images from the imaging device (page 7, lines 7-16; page 8, lines 3-6); and generating an overlay from the 3D model registered to the real-time images, wherein the overlay indicates the at least one structure in the anatomical target (page 8, lines 7-15; page 10, line 18-page 11, line 17; FIG. 9). However, Manzke et al. does not explicitly teach indicating a depth of the structures below the surface. Manzke et al. further does not teach generating a virtual image to show an internal view of the at least one blood vessel.

With respect to claim 14, Higgins et al. teaches a method for visualizing an anatomical target, the method comprising: providing a three-dimensional model generated from at least one of pre-operative images or intra-operative images of at least one structure located below a surface of the anatomical target, such that the at least one structure is not visible in real-time images (para [0113]-[0118], for example); computing a depth of the at least one structure relative to the anatomical target using at least the 3D model (para [0113]-[0118], for example); and generating an overlay from the 3D model registered to real-time images (para [0115] of Higgins et al.; see also FIG. 1 of Method for Continuous Guidance of Endoscopy), which indicates the at least one structure located below the surface of the anatomical target and indicates the depth of the at least one structure below the surface of the anatomical target (see paragraphs [0113]-[0118] of Higgins et al.; see also FIG. 2 of Method for Continuous Guidance of Endoscopy).
With respect to claim 14, Lee et al. teaches a method for visualizing an anatomical target, the method comprising generating a virtual image (see para [0095], [0161], [0164]-[0169], [0178]-[0180]) showing an internal view of the at least one structure located below the surface of the anatomical region (FIG. 8B); and displaying (130) the virtual image (para [0181]).

Therefore, it would have been prima facie obvious to one of ordinary skill in the art at the time of the invention to utilize the depth indication of Higgins et al. in the system of Manzke et al. in order to provide the user with additional cues that convey obstacle locations and ROI depths of sample so that the physician can freely navigate in the virtual world, perceiving the depth of sample and possible obstacle locations at any pose orientation (para [0115] of Higgins et al.). Further, it would have been prima facie obvious to one of ordinary skill in the art at the time of the effective filing date to modify Manzke et al. to include the visualization showing an internal view of the blood vessel as taught by Lee et al. in order to provide a means of accurately measuring the degree of stenosis in a body lumen (para [0003] of Lee et al.).

With respect to claim 15, Lee et al. teaches generating at least one of (i) a virtual 3D fly-through image of the at least one structure located below the surface of the anatomical region and (ii) a virtual 3D cross-section image of the at least one structure located below the surface of the anatomical region as the internal view; and displaying (i) the virtual 3D fly-through image of the at least one structure located below the surface of the anatomical region and (ii) the virtual 3D cross-section image of the at least one structure located below the surface of the anatomical region (FIG. 8B).

With respect to claim 17, Manzke et al. teaches a non-transitory computer-readable storage medium having stored a computer program comprising instructions which, when executed by at least one processor, cause the at least one processor to: control an imaging device (120) to collect real-time images of the anatomical target; generate a three-dimensional model (111) from pre-operative images or intra-operative images of at least one structure located below a surface of the anatomical target, such that the at least one structure is not visible in the real-time images from the imaging device (page 7, lines 7-16; page 8, lines 3-6); and generate an overlay from the 3D model registered to the real-time images, wherein the overlay indicates the at least one structure in the anatomical target (page 8, lines 7-15; page 10, line 18-page 11, line 17; FIG. 9). However, Manzke et al. does not explicitly teach indicating a depth of the structures below the surface. Manzke et al. further does not teach generating a virtual image to show an internal view of the at least one blood vessel.

With respect to claim 17, Higgins et al. teaches a non-transitory computer-readable storage medium having stored a computer program comprising instructions which, when executed by at least one processor, cause the at least one processor to: generate a three-dimensional model from at least one of pre-operative images or intra-operative images of at least one structure located below a surface of the anatomical target, such that the at least one structure is not visible in real-time images (para [0113]-[0118], for example); compute a depth of the structure relative to the anatomical target using at least the 3D model (para [0113]-[0118], for example); and generate an overlay from the 3D model registered to real-time images (para [0115] of Higgins et al.; see also FIG. 1 of Method for Continuous Guidance of Endoscopy), wherein the overlay indicates the at least one structure located below the surface of the anatomical target and indicates the depth of the at least one structure below the surface of the anatomical target (see paragraphs [0113]-[0118] of Higgins et al.; see also FIG. 2 of Method for Continuous Guidance of Endoscopy).

With respect to claim 17, Lee et al. teaches a non-transitory computer-readable storage medium having stored a computer program comprising instructions which, when executed by at least one processor, cause the at least one processor to: generate a virtual image (see para [0095], [0161], [0164]-[0169], [0178]-[0180]) showing an internal view of the at least one structure located below the surface of the anatomical region (FIG. 8B); and display (130) the virtual image (para [0181]).

Therefore, it would have been prima facie obvious to one of ordinary skill in the art at the time of the invention to utilize the depth indication of Higgins et al. in the system of Manzke et al. in order to provide the user with additional cues that convey obstacle locations and ROI depths of sample so that the physician can freely navigate in the virtual world, perceiving the depth of sample and possible obstacle locations at any pose orientation (para [0115] of Higgins et al.). Further, it would have been prima facie obvious to one of ordinary skill in the art at the time of the effective filing date to modify Manzke et al. to include the visualization showing an internal view of the blood vessel as taught by Lee et al. in order to provide a means of accurately measuring the degree of stenosis in a body lumen (para [0003] of Lee et al.).

With respect to claim 20, Lee et al. teaches the image processor is further configured to: receive a selection of a point on the at least one structure located below the surface of the anatomical region, wherein the internal view includes at least one of (i) a virtual 3D fly-through image of the at least one structure at the selected point and (ii) a virtual 3D cross-section image of the at least one structure at the selected point (FIG. 8B).

Claims 4, 6, 12, 16, and 18 are rejected under 35 U.S.C. 103 as being unpatentable over Manzke et al. (WO 2012/172474) in view of Higgins et al. (U.S. 2009/0156895) and Lee et al. (U.S. 2015/0141814) as applied to claims 1, 14, and 17 above, and further in view of Tanaka (U.S. 2016/0038004).

Manzke et al. in view of Higgins et al. teaches indicating the depth of structures below the surface. However, Manzke et al. in view of Higgins et al. does not teach indicating the depth of the at least one structure located below the surface of the anatomical region relative to a position of a tool in the real-time images.

With respect to claim 4, Tanaka teaches an image processor which is configured to indicate the depth of the at least one structure located below the surface of the anatomical region relative to a position of a tool in the real-time images (para [0072]). Therefore, it would have been prima facie obvious to one of ordinary skill in the art at the time of the effective filing date to modify the image processor of Manzke et al. in view of Higgins et al. to further include indicating the depth of the at least one structure located below the surface of the anatomical region relative to a position of a tool in the real-time images as taught by Tanaka, because this makes it possible to accurately acquire the distance between the treatment tool and the specific part (i.e., information that also includes the distance in the depth direction) as the degree of closeness, and notify the user of the acquired distance as more accurate information about the degree of relation (para [0076] of Tanaka).

With respect to claim 6, Tanaka teaches the image processor is further configured to indicate the depth of the at least one structure located below the surface of the anatomical region within a shaped area in a vicinity of a tool tip (FIG. 11). Therefore, it would have been prima facie obvious to one of ordinary skill in the art at the time of the effective filing date to modify the image processor of Manzke et al. in view of Higgins et al. to further include indicating the depth of the at least one structure located below the surface of the anatomical region within a shaped area in a vicinity of a tool tip as taught by Tanaka, to more accurately notify the user of the degree of relation when a plurality of specific parts are present in different directions when viewed from the treatment tool 210, or when a plurality of specific parts overlap each other in the depth direction (para [0122] of Tanaka).

With respect to claim 12, Tanaka teaches the image processor is further configured to indicate the depth of the at least one structure located below the surface of the anatomical region relative to a position of a tool in the real-time images, wherein the tool is not the imaging device (para [0072]). Therefore, it would have been prima facie obvious to one of ordinary skill in the art at the time of the effective filing date to modify the image processor of Manzke et al. in view of Higgins et al. to further include this feature as taught by Tanaka, because this makes it possible to accurately acquire the distance between the treatment tool and the specific part (i.e., information that also includes the distance in the depth direction) as the degree of closeness, and notify the user of the acquired distance as more accurate information about the degree of relation (para [0076] of Tanaka).

With respect to claim 16, Tanaka teaches indicating the depth of the at least one structure located below the surface of the anatomical region relative to a position of a tool in the real-time images (para [0072]). Therefore, it would have been prima facie obvious to one of ordinary skill in the art at the time of the effective filing date to modify the image processor of Manzke et al. in view of Higgins et al. to further include this feature as taught by Tanaka, for the reasons given above with respect to claim 4 (para [0076] of Tanaka).

With respect to claim 18, Tanaka teaches an image processor which is configured to indicate the depth of the at least one structure located below the surface of the anatomical region relative to a position of a tool in the real-time images (para [0072]). Therefore, it would have been prima facie obvious to one of ordinary skill in the art at the time of the effective filing date to modify the image processor of Manzke et al. in view of Higgins et al. to further include this feature as taught by Tanaka, for the reasons given above with respect to claim 4 (para [0076] of Tanaka).

Claim 5 is rejected under 35 U.S.C. 103 as being unpatentable over Manzke et al. (WO 2012/172474) in view of Higgins et al. (U.S. 2009/0156895) and Lee et al. (U.S. 2015/0141814) as applied to claim 1 above, and further in view of Higgins et al. (U.S. 2008/0207997).

Manzke et al. in view of Higgins et al. teaches a system as set forth above. However, Manzke et al. in view of Higgins et al. does not teach, in response to a cursor over the overlay, indicating the depth of the at least one structure located below the surface of the anatomical region by an alphanumeric label indicating the depth.

With respect to claim 5, Higgins et al. teaches an image processor configured to, in response to a cursor over the overlay, indicate the depth of the at least one structure located below the surface of the anatomical region by an alphanumeric label indicating the depth (para [0033], number 3). Therefore, it would have been prima facie obvious to one of ordinary skill in the art at the time of the effective filing date to further include, in response to a cursor over the overlay, indicating the depth of the at least one structure located below the surface of the anatomical region by an alphanumeric label indicating the depth as taught by Higgins et al. in order to provide the physician with additional guidance information (para [0033] of Higgins et al.).

Claims 13 and 19 are rejected under 35 U.S.C. 103 as being unpatentable over Manzke et al. (WO 2012/172474) in view of Higgins et al. (U.S. 2009/0156895) and Lee et al. (U.S. 2015/0141814) as applied to claims 1 and 17 above, and further in view of Popovic (WO 2012/035492).

Manzke et al. in view of Higgins et al. teaches a system as set forth above. However, Manzke et al. in view of Higgins et al. does not teach an image guidance processor configured to robotically guide the imaging device along a path corresponding to the at least one structure.

With respect to claim 13, Popovic teaches an image guidance module comprising a robot, wherein the image guidance processor is further configured to control the robot to robotically guide the imaging device along the path corresponding to the at least one structure located below the surface of the anatomical target (9:1-10). With respect to claim 19, Popovic teaches an analogous non-transitory computer-readable storage medium configured to cause at least one processor to robotically guide the imaging device along a path corresponding to the at least one structure (8:29-9:10). Therefore, it would have been prima facie obvious to one of ordinary skill in the art at the time of the effective filing date to modify Manzke et al. in view of Higgins et al. to further include an image guidance processor configured to robotically guide the imaging device in the manner taught by Popovic, in order to prevent issues that cause handling errors, prolong the surgery, or cause misidentification of in vivo structures (2:13-14 of Popovic).

Response to Arguments

Applicant's arguments filed 12/19/2025 have been fully considered but they are not persuasive. Applicant's arguments fail to comply with 37 CFR 1.111(b) because they amount to a general allegation that the claims define a patentable invention without specifically pointing out how the language of the claims patentably distinguishes them from the references.
On page 9, Applicant argues that nowhere does Manzke teach a 3D model as in claim 1 registered to real-time images. This is not persuasive at least because Applicant has not provided any arguments as to how specifically the teachings of page 7, lines 7-16; page 8, lines 3-6; page 8, lines 7-15; or page 10, line 18-page 11, line 17 fail to teach a 3D model registered to a real-time image.

On page 9, Applicant argues that Lee teaches nothing about registration of the 3D medical image with any real-time images. This is not persuasive at least because Manzke and Higgins already teach this limitation.

On page 10, Applicant argues there is no 3D model in the portions of Higgins cited in the rejections. This is not persuasive. Paragraph [0115] of Higgins teaches image registration between a 3D model and a real-time image.

On page 10, Applicant argues neither Manzke nor Higgins has a 3D model registered to the real-time images. Examiner respectfully disagrees. Both Manzke and Higgins teach a 3D model registered to a real-time image as set forth above.

On page 11, Applicant argues "[t]he overlay map in MANZKE is not a virtual image in the context taught in the instant application. HIGGINs registers a pre-computed virtual image to a real-time image, but the pre-computed virtual image in HIGGINS is also not a virtual image as in claim 1. And LEE is not concerned with registration." One cannot show nonobviousness by attacking references individually where the rejections are based on combinations of references. See In re Keller, 642 F.2d 413, 208 USPQ 871 (CCPA 1981); In re Merck & Co., 800 F.2d 1091, 231 USPQ 375 (Fed. Cir. 1986).

Conclusion

Any inquiry concerning this communication or earlier communications from the examiner should be directed to Alexandra Newton Surgan, whose telephone number is (571) 270-1618. The examiner can normally be reached Monday-Friday, 8am-4pm EST.
Examiner interviews are available via telephone, in person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Michael Carey, can be reached at (571) 270-7235. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/ALEXANDRA L NEWTON/
Primary Examiner, Art Unit 3799

Prosecution Timeline

Apr 10, 2024
Application Filed
Jan 07, 2025
Non-Final Rejection — §103, §112
Feb 17, 2025
Response Filed
Mar 05, 2025
Final Rejection — §103, §112
May 02, 2025
Response after Non-Final Action
Jul 10, 2025
Request for Continued Examination
Jul 14, 2025
Response after Non-Final Action
Aug 04, 2025
Non-Final Rejection — §103, §112
Oct 09, 2025
Response Filed
Oct 20, 2025
Final Rejection — §103, §112
Dec 19, 2025
Response after Non-Final Action
Feb 19, 2026
Request for Continued Examination
Mar 05, 2026
Response after Non-Final Action
Mar 18, 2026
Non-Final Rejection — §103, §112 (current)

Precedent Cases

Applications with similar technology granted by the same examiner

Patent 12593958
ENDOSCOPE
2y 5m to grant; granted Apr 07, 2026
Patent 12564310
ENDOSCOPE AND ENDOSCOPE APPARATUS
2y 5m to grant; granted Mar 03, 2026
Patent 12544160
CONTINUUM INSTRUMENT AND SURGICAL ROBOT
2y 5m to grant; granted Feb 10, 2026
Patent 12539022
ARTICULATION CONTROL DEVICE AND METHODS OF USE
2y 5m to grant; granted Feb 03, 2026
Patent 12533263
Ear Cleaning Arrangement
2y 5m to grant; granted Jan 27, 2026
Based on this examiner's 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 5-6
Grant Probability: 47%
With Interview (+27.5%): 74%
Median Time to Grant: 4y 2m
PTA Risk: High
Based on 490 resolved cases by this examiner. Grant probability derived from career allow rate.

Free tier: 3 strategy analyses per month