Prosecution Insights
Last updated: April 19, 2026
Application No. 18/622,069

VOICE-NUDGE SYSTEM AND METHOD FOR HANDS-FREE MOVEMENT OF A ROBOTIC SURGICAL MICROSCOPE

Status: Non-Final OA (§102)
Filed: Mar 29, 2024
Examiner: NEWAY, SAMUEL G
Art Unit: 2657
Tech Center: 2600 — Communications
Assignee: Synaptive Medical, Inc.
OA Round: 1 (Non-Final)
Grant Probability: 75% (Favorable)
Expected OA Rounds: 1-2
Time to Grant: 3y 0m
Grant Probability with Interview: 83%

Examiner Intelligence

Career Allow Rate: 75% (517 granted / 686 resolved; +13.4% vs TC avg; above average)
Interview Lift: +7.6% (moderate lift, about +8%, on resolved cases with interview)
Typical Timeline: 3y 0m avg prosecution; 29 applications currently pending
Career History: 715 total applications across all art units

Statute-Specific Performance

§101: 16.6% (-23.4% vs TC avg)
§103: 34.5% (-5.5% vs TC avg)
§102: 17.1% (-22.9% vs TC avg)
§112: 20.1% (-19.9% vs TC avg)
Tech Center averages are estimates. Based on career data from 686 resolved cases.

Office Action

§102
DETAILED ACTION

This is responsive to the application filed 29 March 2024. Claims 1-20 are pending and considered below.

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Claim Objections

Applicant is advised that should claims 9-12 be found allowable, claims 16-19 will be objected to under 37 CFR 1.75 as being a substantial duplicate thereof. When two claims in an application are duplicates or else are so close in content that they both cover the same thing, despite a slight difference in wording, it is proper after allowing one claim to object to the other as being a substantial duplicate of the allowed claim. See MPEP § 608.01(m).

Claim Rejections - 35 USC § 102

The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:

A person shall be entitled to a patent unless –
(a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.
(a)(2) the claimed invention was described in a patent issued under section 151, or in an application for patent published or deemed published under section 122(b), in which the patent or application, as the case may be, names another inventor and was effectively filed before the effective filing date of the claimed invention.

Claims 1-20 are rejected under 35 U.S.C. 102(a)(1) as being anticipated by Richmond et al. (US 2017/0143429).
Claim 1: Richmond discloses a voice-nudge system (Abstract, see also [0006]), the system comprising: a voice-control feature configured to receive at least one voice command via at least one user interface and to apply the at least one voice command for controlling incremental movement of an end effector, operable with a scope, in at least one direction (“The medical navigation system further has an automated arm assembly (e.g., automated arm 102) electrically coupled to the computing device and controlled by a signal provided by the computing device. The automated arm assembly includes a multi-joint arm having a distal end connectable to an effector (e.g., the end effector 104) that supports a surgical camera (e.g., which may be attached to or part of the scope 266) electrically coupled to the computing device. The medical navigation system further has a medical device having a tracking marker (e.g., the tracking markers 206 and/or 246) attachable to the medical device. The computing device may be configured to position the automated arm assembly, based on an input command, in response to a position in space of the medical device such that a surgical site of interest remains within a field of view of the surgical camera”, [0097], see also “the input command may be provided by any one of the foot pedal 155, a joystick, a microphone receiving a voice instruction”, [0098]), and the at least one direction comprising at least one of: a left direction, a right direction, an up direction, a down direction, a roll direction, a pitch direction, and a yaw direction (“Automated arm 102 may have multiple joints to enable 5, 6 or 7 degrees of freedom”, [0041], note that 7 degrees of freedom allows movement in the claimed directions, see also “the subscript “e” denotes the coordinates of the end effector and the variables α, β, and γ represent roll, pitch, and yaw respectively”, [0128]), whereby a field of view and a magnification level of the scope is maintained (“such that a surgical site of interest remains within a field of view of the surgical camera”, [0097], note that field of view and magnification are inversely related, therefore maintaining a field of view implies maintaining a magnification level, see “provide a reduced field of view 2280 and therefore higher magnification”, [0203]).

Claim 2: Richmond discloses the system of claim 1, wherein the voice-control feature is configured to operate with the end effector coupled with a robotic arm ([0097]).

Claim 3: Richmond discloses the system of claim 1, wherein the voice-control feature is configured to operate with the end effector coupled with the scope comprising at least one of: a videoscope, a microscope, and exoscope ([0040], see also [0042] and [0058]).

Claim 4: Richmond discloses the system of claim 1, wherein the voice-control feature is configured to perform a plurality of operations after entering a nudge-mode, the plurality of operations comprising a first operation and a second operation (“If the pose error is greater than the threshold the flow chart continues to step (850) where the end effector error 720 is determined by the intelligent positioning system as a desired movement. The final step (860) requires the intelligent positioning system to calculate the required motion of each joint of the automated arm 102 and command these movements. The system then repeats the loop and continuously takes new pose estimations from the intelligent positioning system 250 to update the error estimation of the end effector spatial position and pose”, [0087]).

Claim 5: Richmond discloses the system of claim 4, wherein the voice-control feature is configured to perform the first operation after entering the nudge-mode by receiving a command from at least one user interface (“The computing device may further have a foot pedal, such as the foot pedal 155, coupled to the computing device and the automated arm assembly may move only when input is received from the foot pedal. In other words, as a safety feature, the automated arm assembly may remain stationary except when the surgeon 201 presses a button on the foot pedal 155, at which time the automated arm assembly may move into proper position based on the current position in space of the medical device being tracked.”, [0104]).

Claim 6: Richmond discloses the system of claim 5, wherein the voice-control feature is configured to perform the first operation after entering the nudge-mode by receiving the command from the at least one user interface comprising at least one of: a voice control input, a touchscreen, and a quick menu overlay on an external display device (“While the example of a foot pedal 155 is used, any suitable input device may be used to meet the design criteria of a particular application, including any input device mentioned herein”, [0104], see also [0098], [0178] and [0137]).

Claim 7: Richmond discloses the system of claim 4, wherein the voice-control feature is configured to perform the second operation after entering the nudge-mode by receiving a command from at least one user interface comprising a button of a foot pedal (“The computing device may further have a foot pedal, such as the foot pedal 155, coupled to the computing device and the automated arm assembly may move only when input is received from the foot pedal. In other words, as a safety feature, the automated arm assembly may remain stationary except when the surgeon 201 presses a button on the foot pedal 155, at which time the automated arm assembly may move into proper position based on the current position in space of the medical device being tracked.”, [0104]).

Claims 8-14: Richmond discloses a method of providing a voice-nudge system, the method comprising: providing a voice-control feature configured to perform the steps performed by the voice-nudge system of claims 1-7 as shown above.
Claim 15: Richmond discloses a method of maintaining a field of view and a magnification level of a scope by way of a voice-nudge system (Abstract, see also [0006]), the method comprising: providing the voice-control feature configured to receive at least one voice command via at least one user interface and to apply the at least one voice command for controlling incremental movement of an end effector, operable with a scope, in at least one direction (“The medical navigation system further has an automated arm assembly (e.g., automated arm 102) electrically coupled to the computing device and controlled by a signal provided by the computing device. The automated arm assembly includes a multi-joint arm having a distal end connectable to an effector (e.g., the end effector 104) that supports a surgical camera (e.g., which may be attached to or part of the scope 266) electrically coupled to the computing device. The medical navigation system further has a medical device having a tracking marker (e.g., the tracking markers 206 and/or 246) attachable to the medical device. The computing device may be configured to position the automated arm assembly, based on an input command, in response to a position in space of the medical device such that a surgical site of interest remains within a field of view of the surgical camera”, [0097], see also “the input command may be provided by any one of the foot pedal 155, a joystick, a microphone receiving a voice instruction”, [0098]), and the at least one direction comprising at least one of: a left direction, a right direction, an up direction, a down direction, a roll direction, a pitch direction, and a yaw direction (“Automated arm 102 may have multiple joints to enable 5, 6 or 7 degrees of freedom”, [0041], note that 7 degrees of freedom allows movement in the claimed directions, see also “the subscript “e” denotes the coordinates of the end effector and the variables α, β, and γ represent roll, pitch, and yaw respectively”, [0128]), whereby a field of view and a magnification level of the scope is maintained (“such that a surgical site of interest remains within a field of view of the surgical camera”, [0097], note that field of view and magnification are inversely related, therefore maintaining a field of view implies maintaining a magnification level, see “provide a reduced field of view 2280 and therefore higher magnification”, [0203]).

Claims 16-19: Richmond discloses a method of providing a voice-nudge system, the method comprising: providing a voice-control feature configured to perform the steps performed by the voice-nudge system of claims 2-5 as shown above.
Claim 20: Richmond discloses the method of claim 12, wherein providing the voice-control feature comprises configuring the voice-control feature to perform the first operation after entering the nudge-mode by receiving the command from the at least one user interface comprising at least one of: a voice control input, a touchscreen, and a quick menu overlay on an external display device (“While the example of a foot pedal 155 is used, any suitable input device may be used to meet the design criteria of a particular application, including any input device mentioned herein”, [0104], see also [0098], [0178] and [0137]), and wherein providing the voice-control feature comprises configuring the voice-control feature to perform the second operation after entering the nudge-mode by receiving a command from at least one user interface comprising a button of a foot pedal (“The computing device may further have a foot pedal, such as the foot pedal 155, coupled to the computing device and the automated arm assembly may move only when input is received from the foot pedal. In other words, as a safety feature, the automated arm assembly may remain stationary except when the surgeon 201 presses a button on the foot pedal 155, at which time the automated arm assembly may move into proper position based on the current position in space of the medical device being tracked.”, [0104]).

Conclusion

The prior art made of record and not relied upon is considered pertinent to applicant's disclosure.

Nathan et al. ("The voice-controlled robotic assist scope holder AESOP for the endoscopic approach to the sella." Skull base 16.03 (2006): 123-131) discloses a method to evaluate the feasibility of using a voice-controlled robot Automated Endoscopic System for Optimal Positioning (AESOP) for holding and maneuvering the endoscope in the trans-sphenoidal approach to the pituitary. This is performed by comparing the manual approach to the voice-activated robotic scope holder in maneuvering the endoscope and resecting pituitary lesions using a two-handed technique.

Bailey et al. discloses a medical navigation system comprising a computing device having a processor coupled to a memory, a tracking camera for tracking medical devices, and a display for displaying an image; an automated arm assembly electrically coupled to the computing device and controlled by a signal provided by the computing device, the automated arm assembly including a multi-joint arm having a distal end connectable to an effector that supports a surgical camera electrically coupled to the computing device; and a medical device having a tracking marker attachable to the medical device. The computing device is configured to position the automated arm assembly, based on an input command, in response to a position in space of the medical device such that a surgical site of interest remains within a field of view of the surgical camera, the position in space of the medical device determined by the computing device based on a signal provided to the computing device by the tracking camera; and display on the display an image provided by an image signal generated by the surgical camera.

Dell et al. (US 2017/0202628) discloses a medical navigation system including a surgical positioning system for positioning a payload during a medical procedure. The medical navigation system has a robotic arm having a plurality of joints, the robotic arm forming part of the surgical positioning system and having an end effector for holding the payload, an input device for providing input, and a controller electrically coupled to the robotic arm and the input device. The controller has a processor coupled to a memory and the controller is configured to perform the following during the medical procedure: position the robotic arm in a first position by providing a first positioning signal to the robotic arm; save the first position in the memory as a first saved position in response to a signal received from the input device; position the robotic arm in a second position by providing a second positioning signal to the robotic arm; and return the robotic arm to the first position by loading the first saved position from the memory and providing the first positioning signal to the robotic arm when an input is received from the input device corresponding to a command to return to the first saved position.

Mak et al. (US 2022/0113526) discloses a surgical microscope system and methods involving: an optical assembly operable in response to a selected operation mode, the optical assembly having adjustable optics; a controller coupled with the optical assembly, the controller configured to: select an operation mode corresponding to at least one phase of the medical procedure and defining a setting for adjusting an adjustable optics; control the optical assembly to adjust an adjustable optics according to a setting; select another operation mode if at least one of an abnormal phase and an emergency phase of the medical procedure is determined to exist, the other mode of operation corresponding to another setting for adjusting the adjustable optics, the other setting having at least one of a magnification setting and a focus setting; and control the optical assembly to adjust the adjustable optics according to the other setting.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to SAMUEL G NEWAY whose telephone number is (571)270-1058. The examiner can normally be reached Monday-Friday 9:00am-5:00pm EST.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Daniel Washburn, can be reached at 571-272-5551. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/SAMUEL G NEWAY/
Primary Examiner, Art Unit 2657

Prosecution Timeline

Mar 29, 2024
Application Filed
Jan 21, 2026
Non-Final Rejection — §102 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12602538: METHOD AND SYSTEM FOR EXEMPLAR LEARNING FOR TEMPLATIZING DOCUMENTS ACROSS DATA SOURCES
Granted Apr 14, 2026 (2y 5m to grant)

Patent 12603177: INTERACTIVE CONVERSATIONAL SYMPTOM CHECKER
Granted Apr 14, 2026 (2y 5m to grant)

Patent 12603092: AUTOMATED ASSISTANT CONTROL OF NON-ASSISTANT APPLICATIONS VIA IDENTIFICATION OF SYNONYMOUS TERM AND/OR SPEECH PROCESSING BIASING
Granted Apr 14, 2026 (2y 5m to grant)

Patent 12596734: PARSE ARBITRATOR FOR ARBITRATING BETWEEN CANDIDATE DESCRIPTIVE PARSES GENERATED FROM DESCRIPTIVE QUERIES
Granted Apr 07, 2026 (2y 5m to grant)

Patent 12596892: MACHINE TRANSLATION SYSTEM FOR ENTERTAINMENT AND MEDIA
Granted Apr 07, 2026 (2y 5m to grant)
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 1-2
Grant Probability: 75%
With Interview: 83% (+7.6%)
Median Time to Grant: 3y 0m
PTA Risk: Low

Based on 686 resolved cases by this examiner. Grant probability derived from career allow rate.
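The headline probabilities above follow directly from the career figures shown earlier. A minimal sketch of that arithmetic, assuming the dashboard simply uses the career allow rate (517 granted of 686 resolved) plus the observed interview lift, with standard rounding:

```python
# Derive the dashboard's headline numbers from the examiner's career data.
# Rounding conventions are an assumption; the inputs come from the page above.

granted = 517          # career grants ("517 granted / 686 resolved")
resolved = 686         # career resolved cases
interview_lift = 7.6   # percentage-point lift when an interview is held

allow_rate = 100 * granted / resolved          # career allow rate, in percent
with_interview = allow_rate + interview_lift   # interview-adjusted probability

print(f"Career allow rate: {allow_rate:.0f}%")      # 75%
print(f"With interview:    {with_interview:.0f}%")  # 83%
```

Note the two displayed numbers are consistent: 517/686 is about 75.4%, and adding the 7.6-point interview lift gives about 83.0%, matching the "With Interview" figure.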
