Prosecution Insights
Last updated: April 19, 2026
Application No. 18/440,436

AI-Powered Surgical Video Analysis

Non-Final OA (§102, §103)
Filed
Feb 13, 2024
Examiner
HYTREK, ASHLEY LYNN
Art Unit
2665
Tech Center
2600 — Communications
Assignee
Regents of the University of Michigan
OA Round
1 (Non-Final)
89%
Grant Probability
Favorable
1-2
OA Rounds
3y 0m
To Grant
99%
With Interview

Examiner Intelligence

Grants 89% — above average
89%
Career Allow Rate
74 granted / 83 resolved
+27.2% vs TC avg
+11.8%
Interview Lift
Moderate lift; based on resolved cases with interview
Typical timeline
3y 0m
Avg Prosecution
12 currently pending
Career history
95
Total Applications
across all art units

Statute-Specific Performance

§101
13.8%
-26.2% vs TC avg
§103
51.0%
+11.0% vs TC avg
§102
16.3%
-23.7% vs TC avg
§112
16.0%
-24.0% vs TC avg
Deltas are relative to an estimated Tech Center average • Based on career data from 83 resolved cases
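The "vs TC avg" deltas in the table above imply a Tech Center baseline for each statute. A minimal sketch (Python) backing that baseline out of the displayed figures; the assumption is that each delta is the statute rate minus the TC average, in percentage points:

```python
# Back out the implied Tech Center (TC) average from each statute's
# displayed rate and its "vs TC avg" delta (both in percentage points).
rates = {
    "101": (13.8, -26.2),
    "103": (51.0, +11.0),
    "102": (16.3, -23.7),
    "112": (16.0, -24.0),
}

implied_tc_avg = {s: round(rate - delta, 1) for s, (rate, delta) in rates.items()}
print(implied_tc_avg)  # every statute backs out to the same 40.0% baseline
```

Notably, all four statutes back out to a single 40.0% figure, suggesting the page compares against one overall Tech Center estimate rather than per-statute baselines.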

Office Action

§102 §103
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Priority

Applicant’s claim for the benefit of a prior-filed application under 35 U.S.C. 119(e) or under 35 U.S.C. 120, 121, 365(c), or 386(c) is acknowledged.

Information Disclosure Statement

The information disclosure statement (IDS) submitted on 05/22/2024 has been made of record and considered by the examiner.

Claim Rejections - 35 USC § 102

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:

A person shall be entitled to a patent unless – (a)(2) the claimed invention was described in a patent issued under section 151, or in an application for patent published or deemed published under section 122(b), in which the patent or application, as the case may be, names another inventor and was effectively filed before the effective filing date of the claimed invention.

Claims 1-3, 5-10, 12-13, 15-18, and 20 are rejected under 35 U.S.C. 102(a)(2) as being anticipated by Freytag (US 2025/0014344 A1).
Consider claims 1, 12, and 20: Freytag discloses a computer system/method for using machine learning to analyze a surgical video and assess performance of a surgeon conducting a surgical procedure (¶6; “a method for giving feedback on a surgery and a corresponding feedback system”), comprising: [Claim 20: A non-transitory computer-readable storage medium storing executable instructions that, when executed by a processor (¶36, 101, 182, FIG. 1), cause a computer to:] one or more processors (¶36); and a memory comprising instructions, that when executed (¶36, 182), cause the computer system to: receive surgical video data including one or more images capturing at least a portion of an ophthalmic surgical procedure from a user device (¶197, 200; “For providing a feedback to a surgeon, who has conducted a surgery, the device 2 may upload video data to the processing device 6 directly or via the data base 4. The video data may… comprise multiple still images, i.e., frames, from the surgery.”); process the surgical video data using one or more trained assessment machine learning models to generate one or more assessment metrics (¶202-207; “In step S2, temporal and/or spatial semantic video segmentation, object detection, object tracking and/or anomaly detection may be performed… Evaluating the analyzed video data includes detecting at least one event of interest within the video data and/or deriving at least one score from the at least one event of interest… analyzing the video data S2 and/or evaluating the analyzed video data S3 can be carried out using a machine learning algorithm.”), wherein: the one or more trained assessment machine learning models are trained using historical ophthalmic surgery data (¶207; “For example, video data, analysis results and/or evaluation results from previous surgeries may be used as training data sets.”); and the one or more assessment metrics include one or more of a surgical instrument metric (¶77, 206; “number of enters/exits of instruments in the eye”), a surgical phase metric (¶77, 205-206; “an event of interest may be for example a specific surgery phase”; length of phase), or an anterior capsulotomy metric (¶37, 77, 95; capsulorrhexis, incision attempts); generate a performance assessment of the surgeon based upon at least the one or more assessment metrics (¶78, 206, 208; “After the evaluation, an evaluation result may be output in step S4”; ¶215; “the evaluation result may be used for different purposes, all of them giving the surgeon or user a feedback on a surgery.”); and provide the performance assessment of the surgeon to the user device (¶208; “After the evaluation, an evaluation result may be output in step S4 and may be for example displayed on the display unit 12.”; ¶198; “display unit 12 may be… user input device, such as a touchpad, and/or may be any kind of user end device, such as a tablet or smartphone.”).

Consider claims 2 and 13: Freytag discloses the claimed invention wherein the historical ophthalmic surgery data includes one or more images indicating one or more of an instrument presence, an instrument identification (¶50), an instrument color, an instrument material, a surgical step identification, a surgical phase identification, a capsulorrhexis identification (¶37), a limbus identification, a pupil identification (¶38, 77), a purkinje image identification, an anatomical landmark identification (¶38), or an anatomical change identification (¶203-207).

Consider claim 3: Freytag discloses the claimed invention wherein the surgical instrument metric includes one or more of instrument ordering, instrument location, or instrument duration (¶203-207).

Consider claim 5: Freytag discloses the claimed invention wherein the surgical phase metric includes one or more of a surgical step order or surgical step duration (¶77, 130, 141, 147, 203-207).
Consider claims 6 and 15: Freytag discloses the claimed invention wherein: the surgical phase metric includes one or more of a surgical step order or surgical step duration (¶77, 130, 141, 147, 203-207); and the memory comprises further instructions that, when executed (¶179-182), cause the system to: generate the surgical phase metric using a trained surgical phase assessment machine learning model, the trained surgical phase assessment machine learning model trained using one or more of the historical ophthalmic surgery data or phase subset data, wherein the phase subset data includes one or more images indicating one or more of a surgical step identification or a surgical phase identification (¶17-18, 46, 60, 71, 77, 92, 105, 130, 143, 205-207).

Consider claim 7: Freytag discloses the claimed invention wherein the anterior capsulotomy metric includes one or more of a capsulorrhexis size, a capsulorrhexis centration, a capsulorrhexis eccentricity, a capsulorrhexis circularity, a capsulorrhexis smoothness, or a fluidity of a rhexis formation (¶37, 77, 102).
Consider claims 8 and 16: Freytag discloses the claimed invention wherein: the anterior capsulotomy metric includes one or more of a capsulorrhexis size, a capsulorrhexis centration, a capsulorrhexis eccentricity, a capsulorrhexis circularity, a capsulorrhexis smoothness, or a fluidity of a rhexis formation (¶37, 61, 73, 77, 102); and the memory comprising further instructions that, when executed (¶179-182), cause the system to: generate the anterior capsulotomy metric using a trained anterior capsulotomy assessment machine learning model, the trained anterior capsulotomy assessment machine learning model trained using one or more of the historical ophthalmic surgery data or capsulotomy subset data, wherein the capsulotomy subset data includes one or more images indicating one or more of a capsulorrhexis identification, a limbus location, a purkinje image location, an anatomical landmark, or an anatomical change (¶17-18, 37-38, 62-63, 73, 77, 92, 95, 102, 203-207).

Consider claims 9 and 17: Freytag discloses the claimed invention, the memory comprising further instructions that, when executed, cause the system to: process the surgical video data using a trained semantic segmentation machine learning model to generate a semantic segmentation subset data including one or more images indicating a semantic segmentation of a capsulorrhexis, the trained semantic segmentation machine learning model trained using historical semantic segmentation data including at least one or more images of a classified capsulorrhexis (¶37, 95, 203-207); and process one or more of the semantic segmentation subset data or the surgical video data using the one or more trained assessment machine learning models to generate one or more assessment metrics (¶73, 77, 92, 95, 102, 203-207).
Consider claims 10 and 18: Freytag discloses the claimed invention wherein the performance assessment includes one or more of a skill level assessment, a phase of surgery duration assessment, a surgical quality assessment, a skill progression assessment, an anterior capsulotomy assessment, a board certification assessment, a credentialing assessment, a pay-for-performance assessment, or an early warning assessment (¶66, 77, 205-206, 216-217).

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.
This application currently names joint inventors. In considering patentability of the claims the examiner presumes that the subject matter of the various claims was commonly owned as of the effective filing date of the claimed invention(s) absent any evidence to the contrary. Applicant is advised of the obligation under 37 CFR 1.56 to point out the inventor and effective filing dates of each claim that was not commonly owned as of the effective filing date of the later invention in order for the examiner to consider the applicability of 35 U.S.C. 102(b)(2)(C) for any potential 35 U.S.C. 102(a)(2) prior art against the later invention.

Claims 4 and 14 are rejected under 35 U.S.C. 103 as being unpatentable over Freytag as applied to claims 1-3, 5-10, 12-13, 15-18, and 20 above, and further in view of Zang (‘An Extremely Fast and Precise CNN for Recognition and Localization of Cataract Surgical Tools’, from 05/22/2024 IDS).

Consider claims 4 and 14: Freytag discloses the claimed invention wherein: the surgical instrument metric includes one or more of instrument ordering, instrument location, or instrument duration (Freytag ¶203-207); and the memory comprises further instructions that, when executed (Freytag ¶179-182), cause the system to: generate the surgical instrument metric using a trained surgical instrument assessment machine learning model (Freytag ¶95), the trained surgical instrument assessment machine learning model trained using one or more of the historical ophthalmic surgery data or surgical instrument subset data (Freytag ¶207), wherein the surgical instrument subset data includes one or more images indicating one or more of an instrument presence, an instrument identification, an instrument color, or an instrument material (Freytag ¶17-18, 37-38, 62-63, 203).
In related art, Zang further supports generat[ing] the surgical instrument metric using a trained surgical instrument assessment machine learning model, the trained surgical instrument assessment machine learning model trained using one or more of the historical ophthalmic surgery data or surgical instrument subset data, wherein the surgical instrument subset data includes one or more images indicating one or more of an instrument presence, an instrument identification, an instrument color, or an instrument material (Zang FIG. 6, Abstract, Sections 2 and 4). Zang discloses EF-PNet for tool detection, which performs well in both intraoperative tracking and postoperative skill evaluation (Zang Abstract). Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date to incorporate the trained surgical instrument assessment machine learning model of Zang into the surgical feedback system of Freytag to yield the predictable result of improved performance of instrument-based metric generation.

Freytag further notes that “The localization of the tools, i.e., the object detection, may be combined with the semantic temporal frame segmentation. Since specific tools are only present during certain parts of a surgery, this might be helpful to distinguish between the different phases” (Freytag ¶57). Freytag further discloses that evaluating the analyzed video data includes deriving at least one score for an event of interest, which may be a presence of a tool (Freytag ¶62-63).

Claims 11 and 19 are rejected under 35 U.S.C. 103 as being unpatentable over Freytag as applied to claims 1-3, 5-10, 12-13, 15-18, and 20 above, and further in view of Wolf (US 2020/0273548 A1).
Consider claims 11 and 19: Freytag discloses the claimed invention, the memory comprising further instructions that, when executed, cause the system to: process the surgical video data using a trained video editing machine learning model to generate an edited surgical video data (Freytag ¶45-46), wherein: the trained video editing machine learning model is trained using historical surgical phase activity data, wherein the historical surgical phase activity data includes one or more images indicating one or more of a paracentesis, a medication injection, a viscoelastic insertion, a main wound, a capsulorrhexis initiation, a capsulorrhexis completion, a hydrodissection, a phacoemulsification, a cortical removal, a lens insertion, a viscoelastic removal, or a wound closure (Freytag ¶37-38); and the edited surgical video data removes one or more images (Freytag ¶45-46); one or more of: provide the edited surgical video data to the user device (Freytag ¶155); or process the edited surgical video data using one or more trained assessment machine learning models to generate one or more assessment metrics (Freytag ¶92, 140-149, 203-207).

While disclosing removing frames based on redundance to shorten a surgical video (Freytag ¶149) and identifying phases in which surgeons are idle (Freytag ¶37, 45), Freytag fails to explicitly disclose removing one or more images capturing phase inactivity. In related art, Wolf discloses removing one or more images capturing phase inactivity (Wolf ¶150, 237, 679). Freytag states that “Although viewing surgery videos is beneficial for learning, surgery videos can be quite long, and surgeons might not have enough time to look through a whole video. In particular for learning, often only specific sub-parts of videos are relevant” (Freytag ¶158). Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date to incorporate the removal of idle phases of Wolf into the evaluation method of Freytag to omit “portions of the identified specific frames, for example, to avoid redundancy, to shorten the resulting compilation, to remove less relevant or less informative portions, and so forth” (Wolf ¶237).

Relevant Prior Art

The prior art made of record and not relied upon is considered pertinent to applicant's disclosure:

US 2023/0172684 A1 discloses a video-based surgery analytics and quality/skills assessment system.
Kim (‘Objective assessment of intraoperative technical skill in capsulorhexis using videos of cataract surgery.’)
Yu (‘Assessment of Automated Identification of Phases in Videos of Cataract Surgery Using Machine Learning and Deep Learning Techniques’, from 05/22/2024 IDS)
Yeh (‘PhacoTrainer: A Multicenter Study of Deep Learning for Activity Recognition in Cataract Surgical Videos’, from 05/22/2024 IDS)
Matton (‘Analysis of Cataract Surgery Instrument Identification Performance of Convolutional and Recurrent Neural Network Ensembles Leveraging BigCat’, from 05/22/2024 IDS)

Conclusion

Any inquiry concerning this communication or earlier communications from the examiner should be directed to ASHLEY HYTREK, whose telephone number is (703) 756-4562. The examiner can normally be reached M-F 9:00-5:00. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Steve Koziol, can be reached at (408) 918-7630.
The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/ASHLEY HYTREK/
Examiner, Art Unit 2665

/BOBBAK SAFAIPOUR/
Primary Examiner, Art Unit 2665

Prosecution Timeline

Feb 13, 2024
Application Filed
Mar 19, 2026
Non-Final Rejection — §102, §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12597122
DEFECT DETECTION DEVICE AND METHOD THEREOF
2y 5m to grant Granted Apr 07, 2026
Patent 12555239
Microscopy System and Method for Image Segmentation
2y 5m to grant Granted Feb 17, 2026
Patent 12555357
SYSTEMS AND METHODS FOR CATEGORIZING IMAGE PIXELS
2y 5m to grant Granted Feb 17, 2026
Patent 12548291
VIDEO SIGNAL PROCESSING APPARATUS, VIDEO SIGNAL PROCESSING METHOD, AND IMAGING APPARATUS
2y 5m to grant Granted Feb 10, 2026
Patent 12548157
SYSTEMS AND METHODS FOR INLINE QUALITY CONTROL OF SLIDE DIGITIZATION
2y 5m to grant Granted Feb 10, 2026
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

1-2
Expected OA Rounds
89%
Grant Probability
99%
With Interview (+11.8%)
3y 0m
Median Time to Grant
Low
PTA Risk
Based on 83 resolved cases by this examiner. Grant probability derived from career allow rate.
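The headline projections above can be reproduced from the counts shown. A minimal sketch (Python); whether the interview lift is applied as a relative multiplier and whether the result is capped at 99% are assumptions, since the page does not state its formula:

```python
granted, resolved = 74, 83          # examiner's career record (shown above)
allow_rate = granted / resolved     # career allow rate
print(f"{allow_rate:.0%}")          # 89%, the displayed grant probability

interview_lift = 0.118              # +11.8% interview lift (shown above)
# Assumed: relative lift, capped at 99% so the estimate never reads as certain.
with_interview = min(allow_rate * (1 + interview_lift), 0.99)
print(f"{with_interview:.0%}")      # 99%
```

An additive percentage-point reading (89.2 + 11.8) would exceed 100%, which is one reason the relative-lift-with-cap interpretation seems the more plausible fit for the displayed 99%.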
