Prosecution Insights
Last updated: April 19, 2026
Application No. 17/651,412

SYSTEMS AND METHODS FOR CONTROLLING A SURGICAL PUMP USING ENDOSCOPIC VIDEO DATA

Status: Non-Final OA (§103)
Filed: Feb 16, 2022
Examiner: DUONG, HIEN LUONGVAN
Art Unit: 2147
Tech Center: 2100 — Computer Architecture & Software
Assignee: Stryker Corporation
OA Round: 3 (Non-Final)
Grant Probability: 75% (Favorable)
Expected OA Rounds: 3-4
Time to Grant: 3y 1m
Grant Probability with Interview: 98%

Examiner Intelligence

Career Allow Rate: 75% (480 granted / 643 resolved) — above average, +19.7% vs Tech Center average
Interview Lift: +22.8% higher allowance among resolved cases with an interview
Typical Timeline: 3y 1m average prosecution
Career History: 685 total applications across all art units; 42 currently pending
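The headline figures above are simple derived statistics. As a minimal sketch (using the raw counts from the card: 480 granted, 643 resolved, 42 pending), the rounded allow rate and total caseload can be reproduced as:

```python
# Raw counts taken from the examiner card above.
granted, resolved, pending = 480, 643, 42

allow_rate = granted / resolved            # career allow rate (fraction)
total_applications = resolved + pending    # total career caseload

print(f"Career allow rate: {allow_rate:.1%}")       # 74.7%, shown rounded as 75%
print(f"Total applications: {total_applications}")  # 685
```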

Statute-Specific Performance

§101: 11.0% (-29.0% vs TC avg)
§103: 51.5% (+11.5% vs TC avg)
§102: 18.5% (-21.5% vs TC avg)
§112: 6.6% (-33.4% vs TC avg)
Tech Center averages are estimates • Based on career data from 643 resolved cases
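The "vs TC avg" deltas all back out the same Tech Center baseline, which suggests the average estimate is a flat 40% for every statute. A minimal check, assuming each delta is a simple difference between the examiner's rate and a single baseline (the page does not state the model explicitly):

```python
# (rate, delta_vs_tc_avg) pairs taken from the statute table above, in percent.
stats = {
    "101": (11.0, -29.0),
    "103": (51.5, +11.5),
    "102": (18.5, -21.5),
    "112": (6.6, -33.4),
}

# Recover the implied Tech Center baseline for each statute: baseline = rate - delta.
baselines = {s: round(rate - delta, 1) for s, (rate, delta) in stats.items()}
print(baselines)  # every statute recovers the same 40.0% estimate
```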

Office Action

§103
DETAILED ACTION

Remarks

This Office action is issued in response to the communication filed on 11/26/2025. Claims 1-8, 10-21, 23-34, and 36-39 are pending in this Office action.

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Response to Arguments

Applicant's arguments filed on 11/26/2025 with respect to the rejection of claims under 35 U.S.C. 102 and 103 have been considered and are moot in view of the new ground of rejection.

Claim Rejections - 35 U.S.C. § 103

The following is a quotation of 35 U.S.C. 103, which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1, 6-8, 10-11, 13-14, 19-21, 23-24, 26-27, 32-34, 36-37, and 39 are rejected under 35 U.S.C. 103 as being unpatentable over Holmstrom (US Patent Application Publication 2023/0346392 A1, hereinafter "Holmstrom") in view of Shelton, IV et al. (US Patent Application Publication 2019/0201125 A1, hereinafter "Shelton").

As to claim 1, Holmstrom teaches a method for controlling a fluid pump for use in surgical procedures, the method comprising: receiving video data captured from an imaging tool configured to image an internal portion of a patient (Holmstrom par [0042] teaches the image analysis engine receives an image of the view of the area within the body of the patient that is within the field of view of the endoscope); [applying one or more machine learning classifiers to the received video data to generate one or more classification metrics based on the received video data, wherein the one or more machine learning classifiers are created using a supervised training process that comprises using one or more annotated images to train the one or more machine learning classifiers, wherein the one or more machine learning classifiers comprise an image clarity classifier configured to generate one or more classification metrics associated with a presence of at least one of blood, turbidity, bubbles, smoke, or debris in the received video data]; determining the presence of one or more conditions in the received video data based on the generated one or more classification metrics; and adjusting a setting for the flow through or head pressure from the fluid pump based on the determined presence of the one or more conditions in the received video data (Holmstrom par [0043]-[0045] teaches the image analysis engine can determine a characteristic of the received image using a neural network; Holmstrom par [0053] teaches the control engine controls the medium management based on the characteristics of the image).

Holmstrom fails to expressly teach applying one or more machine learning classifiers to the received video data to generate one or more classification metrics based on the received video data, wherein the one or more machine learning classifiers are created using a supervised training process that comprises using one or more annotated images to train the one or more machine learning classifiers, wherein the one or more machine learning classifiers comprise an image clarity classifier configured to generate one or more classification metrics associated with a presence of at least one of blood, turbidity, bubbles, smoke, or debris in the received video data.

However, Shelton teaches applying one or more machine learning classifiers to the received video data to generate one or more classification metrics based on the received video data (Shelton par [0841] teaches image recognition algorithms can be implemented to identify features or objects in still frames of a surgical site that are captured by the frame grabber 3200); wherein the one or more machine learning classifiers are created using a supervised training process that comprises using one or more annotated images to train the one or more machine learning classifiers (Shelton par [0843] teaches an example of an image recognition algorithm); and wherein the one or more machine learning classifiers comprise an image clarity classifier configured to generate one or more classification metrics associated with a presence of at least one of blood, turbidity, bubbles, smoke, or debris in the received video data (Shelton par [0863] teaches one or more still frames can be taken during the testing, which can be later analyzed by the imaging module; the testing mechanisms include bubble detection, bleeding detection, dye detection, and/or burst stretch detection).

Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of Holmstrom and Shelton to achieve the claimed invention. One would have been motivated to make such a combination to improve outcomes of medical procedures (Shelton par [04999]).

As to claim 6, Holmstrom and Shelton teach the method of claim 1, wherein the one or more machine learning classifiers comprises an instrument identification machine classifier configured to generate one or more classification metrics associated with identifying one or more instruments in the received video data (Shelton par [0860] teaches still frames of an end effector of a surgical instrument at a surgical site can be used to identify the surgical instrument).

As to claim 7, Holmstrom and Shelton teach the method of claim 6, wherein the instrument identification machine classifier is configured to identify instruments selected from the group consisting of a shaver tool, a radio frequency (RF) probe, and a dedicated suction device (Shelton par [1235] teaches shavers).

As to claim 8, Holmstrom and Shelton teach the method of claim 6, wherein the fluid pump is configured to activate a suction functionality of the one or more instruments based on the one or more classification metrics generated by the instrument identification machine classifier (Shelton par [0527] teaches the suction/irrigation module is coupled to a surgical tool, and one or more drive systems can be configured to cause irrigation and aspiration of fluids to and from the surgical site).

As to claim 10, Holmstrom and Shelton teach the method of claim 1, wherein the image clarity machine classifier is configured to generate one or more classification metrics associated with an amount of blood visible in the received video data (Shelton par [0863] teaches one or more still frames can be taken during the testing, which can be later analyzed by the imaging module; the testing mechanisms include bubble detection, bleeding detection, dye detection, and/or burst stretch detection).

As to claim 11, Holmstrom and Shelton teach the method of claim 1, wherein the image clarity machine classifier is configured to generate one or more classification metrics associated with an amount of bubbles visible in the received video data (Shelton par [0863] teaches one or more still frames can be taken during the testing, which can be later analyzed by the imaging module; the testing mechanisms include bubble detection, bleeding detection, dye detection, and/or burst stretch detection).

As to claim 13, Holmstrom and Shelton teach the method of claim 1, wherein determining the presence of one or more conditions in the received video data based on the generated one or more classification metrics comprises determining if a clarity of the video is above a pre-determined threshold, and wherein the determination is based on the one or more classification metrics generated by the image clarity machine classifier (Holmstrom par [0070] teaches low quality and high quality).

Claims 14, 19-21, 23-24, and 26 merely recite a system to perform the method of claims 1, 6-8, 10-11, and 13, respectively. Accordingly, Holmstrom and Shelton teach every limitation of claims 14, 19-21, 23-24, and 26 as indicated in the above rejection of claims 1, 6-8, 10-11, and 13, respectively.
Claims 27, 32-34, 36-37, and 39 merely recite a non-transitory computer readable storage medium storing one or more programs that, when executed by a processor, perform the method of claims 1, 6-8, 10-11, and 13, respectively. Accordingly, Holmstrom and Shelton teach every limitation of claims 27, 32-34, 36-37, and 39 as indicated in the above rejection of claims 1, 6-8, 10-11, and 13, respectively.

Claims 2-5, 15-18, and 28-31 are rejected under 35 U.S.C. 103 as being unpatentable over Holmstrom and Shelton, and further in view of Sreenivasan et al. (US Patent Application Publication 2019/0362835 A1, hereinafter "Sreenivasan").

As to claim 2, Holmstrom and Shelton teach the method of claim 1 but fail to teach wherein the one or more machine learning classifiers comprises a joint type machine learning classifier configured to generate one or more classification metrics associated with identifying a type of joint pictured in the received video data. However, Sreenivasan teaches wherein the one or more machine learning classifiers comprises a joint type machine learning classifier configured to generate one or more classification metrics associated with identifying a type of joint pictured in the received video data (Sreenivasan [0032] teaches the system comprises an image modality classifier trained to utilize one or more parameters, features, or other aspects of an image to determine the imaging modality utilized to obtain the image; Sreenivasan [0054] teaches the image comprises the elbow region of a right arm). Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of Holmstrom, Shelton, and Sreenivasan to achieve the claimed invention. One would have been motivated to make such a combination to improve patient care (Sreenivasan par [0004]).

As to claim 3, Holmstrom, Shelton, and Sreenivasan teach the method of claim 2, wherein the joint type machine learning classifier is configured to identify one or more joints selected from the group consisting of a hip, a shoulder, a knee, an ankle, a wrist, and an elbow (Sreenivasan [0054] teaches the image comprises the elbow region of a right arm).

As to claim 4, Holmstrom, Shelton, and Sreenivasan teach the method of claim 3, wherein the joint type machine learning classifier is configured to generate one or more classification metrics associated with identifying whether the imaging tool is not within a joint (Shelton par [0185] teaches the robotic system can automatically activate one or more features of one or more surgical tools based on data and images; for example, a suctioning element on a surgical tool can be activated when the suction port is moved into contact with a fluid. Sreenivasan [0054] teaches the image comprises the elbow region of a right arm).

As to claim 5, Holmstrom, Shelton, and Sreenivasan teach the method of claim 4, wherein the one or more machine learning classifiers include a procedure stage machine learning classifier configured to generate one or more classification metrics associated with identifying a procedure stage being performed in the received video data (Shelton par [0857] teaches the imaging module of the surgical hub 106 is capable of differentiating between surgical steps of a surgical procedure based on the captured frames).

As to claims 15-18 and 28-31, see the above rejection of claims 2-5, respectively.

Claims 12, 25, and 38 are rejected under 35 U.S.C. 103 as being unpatentable over Holmstrom and Shelton, and further in view of Vasilakakis et al.
"Weakly supervised multilabel classification for semantic interpretation of endoscopy video frames," Evolving Systems (2020), hereinafter "Vasilakakis".

As to claim 12, Holmstrom and Shelton teach the method of claim 1 but fail to teach wherein the image clarity machine classifier is configured to generate one or more classification metrics associated with an amount of debris visible in the received video data. However, Vasilakakis teaches wherein the image clarity machine classifier is configured to generate one or more classification metrics associated with an amount of debris visible in the received video data (Vasilakakis's abstract teaches that, in the context of gastrointestinal video-endoscopy addressed in this study, the semantics of the normal contents of the endoscopic video frames include normal mucosal tissues, bubbles, debris, and the hole of the lumen, whereas the abnormal video frames may include additional semantics corresponding to lesions or blood). Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of Holmstrom, Shelton, and Vasilakakis to achieve the claimed invention. One would have been motivated to make such a combination to provide enhanced discrimination of the gastrointestinal abnormalities (Vasilakakis's abstract).

As to claims 25 and 38, see the above rejection of claim 12.

Conclusion

Any inquiry concerning this communication or earlier communications from the examiner should be directed to HIEN DUONG, whose telephone number is (571) 270-7335. The examiner can normally be reached Monday-Friday, 8:00 AM-5:00 PM.

Examiner interviews are available via telephone, in person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Viker Lamardo, can be reached at 571-270-5871. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (in USA or Canada) or 571-272-1000.

/HIEN L DUONG/
Primary Examiner, Art Unit 2147

Prosecution Timeline

Feb 16, 2022 — Application Filed
Mar 18, 2025 — Non-Final Rejection (§103)
Jun 16, 2025 — Applicant Interview (Telephonic)
Jun 16, 2025 — Examiner Interview Summary
Jun 23, 2025 — Response Filed
Sep 05, 2025 — Final Rejection (§103)
Oct 21, 2025 — Interview Requested
Nov 26, 2025 — Request for Continued Examination
Dec 07, 2025 — Response after Non-Final Action
Dec 12, 2025 — Non-Final Rejection (§103)
Feb 09, 2026 — Applicant Interview (Telephonic)
Feb 09, 2026 — Examiner Interview Summary
Mar 20, 2026 — Interview Requested
Apr 08, 2026 — Applicant Interview (Telephonic)
Apr 08, 2026 — Examiner Interview Summary

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12597925 — SUPERCONDUCTING CURRENT CONTROL SYSTEM — granted Apr 07, 2026 (2y 5m to grant)
Patent 12566940 — METHOD AND APPARATUS FOR QUANTIZING PARAMETERS OF NEURAL NETWORK — granted Mar 03, 2026 (2y 5m to grant)
Patent 12566815 — METHOD, ELECTRONIC DEVICE, AND COMPUTER-READABLE STORAGE MEDIUM FOR PERFORMING IDENTIFICATION BASED ON MULTI-MODAL DATA — granted Mar 03, 2026 (2y 5m to grant)
Patent 12554798 — FINDING OUTLIERS IN SIMILAR TIME SERIES SAMPLES — granted Feb 17, 2026 (2y 5m to grant)
Patent 12547430 — MODEL-BASED ELEMENT CONFIGURATION IN A USER INTERFACE — granted Feb 10, 2026 (2y 5m to grant)
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 3-4
Grant Probability: 75%
With Interview: 98% (+22.8%)
Median Time to Grant: 3y 1m
PTA Risk: High
Based on 643 resolved cases by this examiner. Grant probability derived from career allow rate.
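The with-interview figure is consistent with simply adding the interview lift to the base grant probability and rounding. A sketch, assuming that additive model (the page does not state how the two figures are combined):

```python
base_probability = 75.0   # grant probability, percent
interview_lift = 22.8     # interview lift, percentage points

with_interview = base_probability + interview_lift
print(f"With interview: {round(with_interview)}%")  # 97.8 rounds to 98%
```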
