Prosecution Insights
Last updated: April 19, 2026
Application No. 17/452,813

SURGERY ASSISTANCE SYSTEM AND METHOD FOR GENERATING CONTROL SIGNALS FOR VOICE CONTROL OF MOTOR-CONTROLLED MOVABLE ROBOT KINEMATICS OF SUCH A SURGERY ASSISTANCE SYSTEM

Final Rejection — §103, §112
Filed: Oct 29, 2021
Examiner: CHOU, WILLIAM B
Art Unit: 3795
Tech Center: 3700 — Mechanical Engineering & Manufacturing
Assignee: Aktormed GmbH
OA Round: 2 (Final)
Grant Probability: 73% (Favorable)
Expected OA Rounds: 3-4
Time to Grant: 3y 9m
With Interview: 94%

Examiner Intelligence

Career Allow Rate: 73% (389 granted / 534 resolved; +2.8% vs TC avg) — above average
Interview Lift: +21.4% across resolved cases with interview — strong
Avg Prosecution: 3y 9m (27 applications currently pending)
Total Applications: 561 across all art units
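The headline examiner figures above follow directly from the raw docket counts. A minimal sketch of that arithmetic (function and variable names are illustrative, not any vendor's API):

```python
# Derive the dashboard's headline examiner stats from raw docket counts.

def allow_rate(granted: int, resolved: int) -> float:
    """Career allow rate as a percentage of resolved cases."""
    return 100.0 * granted / resolved

granted, resolved, total = 389, 534, 561

career_rate = allow_rate(granted, resolved)  # 389 / 534 of resolved cases
pending = total - resolved                   # applications still open

print(f"Career allow rate: {career_rate:.0f}%")  # 73%
print(f"Currently pending: {pending}")           # 27
```

Note that the displayed 73% is the rounded career rate (389/534 = 72.8%), which is why it matches the grant-probability figure elsewhere on the page.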

Statute-Specific Performance

§101: 0.6% (-39.4% vs TC avg)
§103: 42.0% (+2.0% vs TC avg)
§102: 23.6% (-16.4% vs TC avg)
§112: 24.9% (-15.1% vs TC avg)
Tech Center average estimated from career data across 534 resolved cases.
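The per-statute figures pair this examiner's rejection rate with a delta versus the Tech Center average, so the implied TC baseline is simply rate minus delta. A quick sketch using the panel's numbers (the dict layout is illustrative):

```python
# Back out the implied Tech Center average from each statute's
# examiner rate and its "vs TC avg" delta (figures from the panel above).

stats = {  # statute: (examiner rate %, delta vs TC avg %)
    "101": (0.6, -39.4),
    "103": (42.0, +2.0),
    "102": (23.6, -16.4),
    "112": (24.9, -15.1),
}

for statute, (rate, delta) in stats.items():
    tc_avg = rate - delta  # implied Tech Center average
    print(f"§{statute}: examiner {rate}% vs TC avg {tc_avg:.1f}%")
```

Every implied baseline comes out to 40.0%, consistent with the note that the Tech Center average is a single estimate rather than a per-statute measurement.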

Office Action

§103, §112
DETAILED ACTION

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA. In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

Response to Arguments

Examiner acknowledges the receipt of the Applicant’s Amendment dated July 8, 2025. Applicant amended claims 1 and 9. Applicant canceled claim 4. Claims 1-3 and 5-26 are pending. Applicant's arguments have been considered and are persuasive. Upon further search and consideration, the claims are rejected under 35 U.S.C. 103 as discussed below in view of the new grounds of rejection over Juergens (U.S. Publication 2019/0343588) as necessitated by the amendment.

Claim Rejections - 35 USC § 112(b)

The text of those sections of Title 35, U.S. Code not included in this action can be found in a prior Office action. Applicant has amended Claim 1 and the rejection under 35 U.S.C. 112(b) is withdrawn.

Claim Rejections - 35 USC § 103

The text of those sections of Title 35, U.S. Code not included in this action can be found in a prior Office action. Applicant has amended claim 1 and the previous rejections are withdrawn.

Claims 1-3 and 5-26 are rejected under 35 U.S.C. 103 as being unpatentable over Wright et al. (U.S. Publication 2023/0024942, hereinafter “Wright”) in further view of Mai et al. (U.S. Publication 2022/0087768, hereinafter “Mai”; now U.S. Issued Patent 12,186,135), Black et al. (U.S. Publication 2003/0009086, hereinafter “Black”), and Juergens (U.S. Publication 2019/0343588).

As to Claim 1, Wright discloses a surgery assistance system in Fig. 1 and (1126) in [0114] as shown in Fig. 7 for guiding an endoscope camera (109) in [0037] and [0039] and “endoscope” in [0114], at least a section of the endoscope camera being introducible through a first surgical opening and movable in a controlled manner in an operating space of a patient body, “patient’s body cavity” in [0147], the system comprising: the endoscope camera (109) in [0039] and “endoscope” in [0114] for capturing images of the operating space in a form of image data, and a robot kinematics, a free end of which accommodates the endoscope camera by an auxiliary instrument carrier, “movable camera arm” in [0037] and (112) in [0039], wherein the robot kinematics is movable in a motor-controlled manner, “comprising one or more electric motors” in [0037], for guiding the endoscope camera in the operating space, on a basis of control signals generated by a control unit (110) in [0039] and (1105) in [0115], wherein at least one voice control routine via “voice control system or the like” in [0039] is executed in the control unit, by which voice commands or voice command combinations in a form of voice data, “voice input” in [0115], are captured via “natural user interface (NUI) input/output” in [0115] and evaluated, and on the basis thereof the control signals are generated by the control unit as described in [0134] and [0138], and at least one image capture routine being executed in the control unit for continuous acquisition of the image data relating to the operating space that are provided by the endoscope camera as described in [0126], wherein at least one image analysis routine is provided in the control unit, by which the image data, previously captured, are continuously evaluated and classified based upon statistical and/or artificial intelligence self-learning methods, “via neural network image classification” in [0045]-[0046], [0052], and [0099]-[0100], such that object and/or scene related information relating to a surgical scene currently being captured by the endoscope camera in an image is determined by the continuous evaluation and classification of the image data during normal operation.

However, Wright does not specifically disclose captured voice data evaluated on the basis of the captured object and/or scene related information. Mai teaches in the related field of endoscopy wherein voice commands and voice cues in [0091], [0093], and [0109] are utilized to perform measurements on the basis of scene related information. Black teaches in the related field of endoscopy wherein endoscope commands are carried out via voice commands using computer science and artificial intelligence in [0030]. It would have been obvious to one of ordinary skill in the art to provide the voice commands of Wright to be evaluated on the basis of captured object and/or scene related information as taught by Mai and Black in order to fulfill additional control functions with predictable results.

Although Wright discloses accessing a database in [0138], Wright does not specifically disclose that the voice control routine evaluates the voice data based on statistical and/or artificial intelligence self-learning methods. Juergens teaches in the analogous field of endoscopy wherein the voice control routine evaluates the voice data based on statistical and/or artificial intelligence self-learning methods as described in [0004]. It would have been obvious to one of ordinary skill in the art at the time of invention to provide the system of Wright capable of accessing a database with further statistical and machine learning as taught by Juergens in order to fulfill the same function of accessing additional information with predictable results.

As to Claim 2, Wright discloses the surgery assistance system according to Claim 1, wherein the image analysis routine comprises a neural network with pattern and/or color detection algorithms for evaluating the captured image data, “via neural network image classification” in [0045]-[0046], [0052], and [0099]-[0100].

As to Claim 3, Wright discloses the surgery assistance system according to Claim 2, wherein the pattern and/or color detection algorithms, “via neural network image classification” in [0045]-[0046], [0052], and [0099]-[0100], are configured and trained to capture or detect objects or parts thereof which are present in the image, in surgical instruments, in medical tools, or in organs.

As to Claim 5, Wright in view of Mai and Black discloses the surgery assistance system according to Claim 4, wherein the voice control routine comprises a neural network with sound and/or syllable recognition algorithms for evaluating the voice data as described in [0138].

As to Claim 6, Wright in view of Mai and Black discloses the surgery assistance system according to Claim 5, wherein the sound and/or syllable recognition algorithms are configured to capture sounds, syllables, words, gaps in speech and/or combinations thereof contained in the voice data as described in [0138].

As to Claim 7, Wright in view of Mai and Black discloses the surgery assistance system according to Claim 5, wherein the voice control routine is configured for an evaluation of the voice data on the basis of the object and/or scene related information as described in [0138].

As to Claim 8, Wright in view of Mai and Black discloses the surgery assistance system according to Claim 7, wherein the voice control routine captures object and/or scene related voice commands contained in the voice data as described in [0135], wherein at least one control signal is generated by the voice control routine on the basis of the object and/or scene related voice commands, previously captured, via which at least movement of the endoscope camera is controlled in terms of direction, speed and/or magnitude.

As to Claim 9, Wright in view of Mai and Black discloses the surgery assistance system according to Claim 4, wherein the voice control routine captures and evaluates directional and/or speed information and/or associated magnitude information in the voice data as described in [0135].

As to Claim 10, Wright discloses the surgery assistance system according to Claim 1, wherein the endoscope camera is designed to capture a two-dimensional image as described in [0135].

As to Claim 11, Wright discloses the surgery assistance system according to Claim 1, wherein a two-dimensional image coordinate system is assigned to the image via the image analysis routine, “via neural network image classification” in [0045]-[0046], [0052], and [0099]-[0100].

As to Claim 12, Wright in view of Mai and Black discloses the surgery assistance system according to Claim 11, wherein, in order to determine an orientation and/or position of an object in the image, coordinates (X, Y) of the object or at least of a marker or marker point of the object are determined in a screen coordinate system, “via neural network image classification” in [0045]-[0046], [0052], and [0099]-[0100].

As to Claim 13, Wright in view of Mai and Black discloses the surgery assistance system according to Claim 1, wherein surgical instruments and/or organs and/or other medical tools displayed in the image are detected as objects or parts of objects by the image analysis routine, “via neural network image classification” in [0045]-[0046], [0052], and [0099]-[0100].

As to Claim 14, Wright in view of Mai and Black discloses the surgery assistance system according to Claim 13, wherein, in order to detect objects or parts of objects, one or more markers or marker points of an object is/are detected by the image analysis routine, wherein an instrument tip, special color or material properties of the object and/or an articulation point between a manipulator and an instrument shaft of a surgical instrument are used as markers or marker points, “via neural network image classification” in [0045]-[0046], [0052], and [0099]-[0100].

As to Claim 15, Wright discloses the surgery assistance system according to Claim 14, wherein the markers or marker points, previously detected, are evaluated by the image analysis routine for classifying the surgical scene and/or the objects located therein, and the object related and/or scene related information is determined on the basis thereof, “via neural network image classification” in [0045]-[0046], [0052], and [0099]-[0100].

As to Claim 16, Wright in view of Mai and Black discloses the surgery assistance system according to Claim 15, wherein the object related and/or scene related information determined by the image analysis routine is transferred to the voice control routine as described in [0135].

Claims 17-26 substantially recite the limitations of claims 1-3, 5-7, 11, 12, and 14-16, respectively, in method form and are similarly rejected.

Conclusion

The prior art made of record and not relied upon is considered pertinent to applicant's disclosure. See the enclosed 892 form. U.S. Publications 2018/0067309 ([0039]) and 2017/0337683 ([0118]) are cited to show usage of artificial intelligence.

Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).

A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any extension fee pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the date of this final action.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to WILLIAM B CHOU, whose telephone number is (571) 270-3367. The examiner can normally be reached M-F, 9 am - 6 pm. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Michael Carey, can be reached at (571) 270-7235. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of an application may be obtained from the Patent Application Information Retrieval (PAIR) system. Status information for published applications may be obtained from either Private PAIR or Public PAIR. Status information for unpublished applications is available through Private PAIR only. For more information about the PAIR system, see http://pair-direct.uspto.gov. Should you have questions on access to the Private PAIR system, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative or access to the automated information system, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/WILLIAM CHOU/
Examiner, Art Unit 3795

/MICHAEL J CAREY/
Supervisory Patent Examiner, Art Unit 3795

Prosecution Timeline

Oct 29, 2021
Application Filed
Oct 25, 2024
Response after Non-Final Action
Feb 08, 2025
Non-Final Rejection — §103, §112
Jul 08, 2025
Response Filed
Oct 25, 2025
Final Rejection — §103, §112 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12569119
Optical Bulb for Surgical Instrument Port
2y 5m to grant • Granted Mar 10, 2026
Patent 12508091
SYSTEMS AND METHODS FOR SWITCHING CONTROL BETWEEN MULTIPLE INSTRUMENT ARMS
2y 5m to grant • Granted Dec 30, 2025
Patent 12510746
INCREASED RESOLUTION AND DYNAMIC RANGE CAPTURE UNIT IN A SURGICAL INSTRUMENT AND METHOD
2y 5m to grant • Granted Dec 30, 2025
Patent 12507881
SYSTEM FOR OBTAINING CLEAR ENDOSCOPE IMAGES
2y 5m to grant • Granted Dec 30, 2025
Patent 12507906
INTEGRATED MULTI-FUNCTIONAL ENDOSCOPIC TOOL
2y 5m to grant • Granted Dec 30, 2025
Study what changed to get past this examiner, based on the 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 3-4
Grant Probability: 73%
With Interview: 94% (+21.4%)
Median Time to Grant: 3y 9m
PTA Risk: Moderate
Based on 534 resolved cases by this examiner. Grant probability derived from career allow rate.
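The "with interview" projection appears to add the interview lift to the baseline grant probability in percentage points, capped at 100%. This is inferred from the panel's own figures (73% baseline, +21.4% lift, 94% with interview), not a documented formula; the sketch below makes that assumption explicit:

```python
# Inferred projection: interview lift added to the baseline grant
# probability in percentage points, capped at 100%. This mirrors the
# displayed figures but is an assumption, not a documented formula.

def with_interview(base_pct: float, lift_pct: float) -> float:
    return min(base_pct + lift_pct, 100.0)

base = round(100 * 389 / 534)          # career allow rate -> 73
projected = with_interview(base, 21.4)

print(f"Grant probability with interview: {projected:.0f}%")  # 94%
```

An additive lift in percentage points is consistent with 73 + 21.4 = 94.4, which rounds to the displayed 94%; a multiplicative lift (73% × 1.214 ≈ 89%) would not match.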
