Prosecution Insights
Last updated: April 19, 2026
Application No. 18/901,645

SYSTEMS AND METHODS TO ESTIMATE FORCE OF AN INSTRUMENT

Non-Final OA (§102, §103)
Filed: Sep 30, 2024
Examiner: KATZ, DYLAN MICHAEL
Art Unit: 3657
Tech Center: 3600 (Transportation & Electronic Commerce)
Assignee: Case Western Reserve University
OA Round: 1 (Non-Final)
Grant Probability: 87% (Favorable)
OA Rounds: 1-2
To Grant: 2y 7m
With Interview: 99%

Examiner Intelligence

Career Allow Rate: 87% (242 granted / 279 resolved; +34.7% vs TC avg, above average)
Interview Lift: +20.8% (strong; resolved cases with vs. without an interview)
Typical Timeline: 2y 7m avg prosecution; 45 currently pending
Career History: 324 total applications across all art units
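The headline figures above follow directly from the counts in the panel; a quick sanity check of the arithmetic (all numbers taken from the panel itself):

```python
# Career allow rate from the examiner panel: granted / resolved cases.
granted = 242
resolved = 279

allow_rate = granted / resolved * 100
print(f"Career allow rate: {allow_rate:.1f}%")  # 86.7%, displayed rounded as 87%

# The panel's "+34.7% vs TC avg" delta implies this Tech Center average:
tc_avg = allow_rate - 34.7
print(f"Implied TC average allow rate: {tc_avg:.1f}%")  # 52.0%
```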

Statute-Specific Performance

§101: 7.7% (-32.3% vs TC avg)
§103: 50.0% (+10.0% vs TC avg)
§102: 20.3% (-19.7% vs TC avg)
§112: 16.5% (-23.5% vs TC avg)
Tech Center average shown as an estimate. Based on career data from 279 resolved cases.
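Each per-statute rate above pairs with a delta against the Tech Center average, so the average can be recovered from each pair; doing so shows the same 40.0% estimate for every statute, consistent with a single TC-average reference line:

```python
# Per-statute rates and deltas vs the Tech Center average, from the panel above.
stats = {
    "§101": (7.7, -32.3),
    "§103": (50.0, +10.0),
    "§102": (20.3, -19.7),
    "§112": (16.5, -23.5),
}

for statute, (rate, delta) in stats.items():
    implied_tc_avg = rate - delta  # delta is the examiner's rate minus the TC average
    print(f"{statute}: implied TC average = {implied_tc_avg:.1f}%")  # 40.0% in each case
```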

Office Action

§102 §103
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Claim Rejections - 35 USC § 102

The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:

A person shall be entitled to a patent unless – (a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention. (a)(2) the claimed invention was described in a patent issued under section 151, or in an application for patent published or deemed published under section 122(b), in which the patent or application, as the case may be, names another inventor and was effectively filed before the effective filing date of the claimed invention.

Claim(s) 1-5, 9-15, 18-19 is/are rejected under 35 U.S.C. 102(a)(1) as being anticipated by Wolf et al. (US 20200273575, hereinafter Wolf).

Regarding Claim 1, Wolf teaches: a method (see at least "Embodiments of this disclosure include systems, methods, and computer readable media for estimating contact force on an anatomical structure during a surgical procedure disclosed." in par. 0022) comprising:

estimating, by a processor, a spatial position of a portion of an instrument that is adapted to interact with an environmental structure to provide an estimated spatial position for the instrument, in which the spatial position is estimated based on image data that includes at least one image frame of the portion of the instrument and the environmental structure (see at least "For example, video/image data obtained from different view angles may be used to determine the position of the surgical instrument relative to a surface of the anatomical structure, to determine a condition of an anatomical structure, to determine pressure applied to an anatomical structure, or to determine any other information where multiple viewing angles may be beneficial." in par. 0088); and

responsive to detecting contact between the instrument and the environmental structure, estimating, by the processor, a measure of force between the instrument and the environmental structure based on the estimated spatial position for the instrument (see at least "For example, visual action recognition algorithms may be used to analyze the video and detect the interactions between the medical instrument and the anatomical structure." in par. 0115 and "Consistent with the present embodiments, estimating contact force may include analyzing images and/or surgical video to generate an estimate of a magnitude of an actual contact force according to a scale. Force estimation through image analysis may involve an examination of a tissue/modality interface to observe an effect on the tissue. For example, if the modality is a medical instrument such as forceps pressing against an organ such as a gallbladder, machine vision techniques applied to the location of force application may reveal movement and/or changes of the organ that is reflective of the force applied." in par. 0584).

Regarding Claim 2, Wolf teaches:
2.
The method of claim 1, further comprising: classifying, by the processor, an interaction between the instrument and the environmental structure as one of a contact condition or a non-contact condition based on an analysis of the at least one image frame (see at least "In another example, the distance between the medical instrument and the anatomical structure may be compared with a selected threshold, and distinguishing the first group of frames from the second group of frames may further be based on a result of the comparison. For example, the threshold may be selected based on the type of the medical instrument, the type of the anatomical structure, the type of the surgical procedure, and so forth. In other embodiments, distinguishing the first group of frames from the second group of frames may further be based on a detected interaction between the medical instrument and the anatomical structure. An interaction may include any action by the medical instrument that may influence the anatomical structure, or vice versa. For example, the interaction may include a contact between the medical instrument and the anatomical structure… An example of such training example may include a video clip of a surgical procedure, together with a label indicating the presence of particular interactions between medical instruments and anatomical structures in the video clip, or together with a label indicating the absence of particular interactions between medical instruments and anatomical structures in the video clip." in par. 0205), wherein estimating the measure of force is performed responsive to the interaction between the instrument and the environmental structure being classified as the contact condition (see at least "Consistent with the present embodiments, estimating contact force may include analyzing images and/or surgical video to generate an estimate of a magnitude of an actual contact force according to a scale. Force estimation through image analysis may involve an examination of a tissue/modality interface to observe an effect on the tissue. For example, if the modality is a medical instrument such as forceps pressing against an organ such as a gallbladder, machine vision techniques applied to the location of force application may reveal movement and/or changes of the organ that is reflective of the force applied." in par. 0584).

Regarding Claim 3, Wolf teaches:
3. The method of claim 2, further comprising: tracking the estimated spatial position for the instrument over a time interval from when the contact condition is detected and while the contact condition is maintained (see at least "In various cases, when a camera (e.g., camera 115) tracks a moving or deforming object (e.g., when camera 115 tracks a moving surgical instrument" in par. 0308), and wherein estimating the measure of force comprises estimating the measure of force over the time interval based on the tracked estimated spatial position for the instrument (see at least "Moreover, the force may be estimated at discrete points in time or may be estimated continuously. In some embodiments, an estimate of a contact force may include an estimate of a contact location, a contact angle, and/or an estimate of any other feature of contact force." in par. 0584).

Regarding Claim 4, Wolf teaches:
4. The method of claim 2, wherein, responsive to the interaction between the instrument and the environmental structure being classified as the non-contact condition (see at least "At step 940, process 900 may include distinguishing the first group of frames from the second group of frames based on the relative movement, wherein the first group of frames includes surgical activity frames and the second group of frames includes non surgical activity frames." in par. 0204 and "For example, the interaction may include a contact between the medical instrument and the anatomical structure, an action by the medical instrument on the anatomical structure (such as cutting, clamping, applying pressure, scraping, etc.), a reaction by the anatomical structure (such as a reflex action), or any other form of interaction." in par. 0205), (i) estimating the measure of force is not performed, and/or (ii) the estimated measure of force is deleted or discarded (see at least "Accordingly, presenting an aggregate of the first group of frames may thereby enable a surgeon preparing for surgery to omit the non-surgical activity frames during a video review of the abridged presentation. In some embodiments, omitting the non-surgical activity frames may include omitting a majority of frames that capture non-surgical activity." in par. 0204).

Regarding Claim 5, Wolf teaches:
5. The method of claim 1, wherein estimating the spatial position further comprises: identifying geometric features of the portion of the instrument based on the at least one image frame (see at least "At step 930, process 900 may include analyzing the video to detect a relative movement between the detected medical instrument and the detected anatomical structure. Relative movement may be detected using a motion detection algorithm, for example, based on changes in pixels between frames, optical flow, or other forms of motion detection algorithms" in par. 0204); and determining locations of pixels or voxels for each of the identified geometric features in the at least one image frame, wherein the estimated spatial position is determined based on the locations of the pixels or voxels for the identified geometric features (see at least "For example, motion detection algorithms may be used to estimate the motion of the medical instrument in the video and to estimate the motion of the anatomical structure in the video, and the estimated motion of the medical instrument may be compared with the estimated motion of the anatomical structure to determine the relative movement" in par. 0204).

Regarding Claim 9, Wolf teaches:
9. The method of claim 1, wherein the instrument is a robotically controlled instrument and the environmental structure comprises biological tissue and/or other structures on or within a region of interest (see at least "A surgeon may include any person performing a surgical procedure, including a doctor or other medical professional, any person assisting a surgical procedure, and/or a surgical robot." in par. 0535 and "Video footage may depict any aspect of a surgical procedure and may depict a patient (internally or externally), a medical professional, a robot, a medical tool, an action, and/or any other aspect of a surgical procedure." in par. 0537).

Regarding Claim 10, Wolf teaches:
10. The method of claim 1, further comprising controlling sensory perceptible feedback for a user of the instrument based on the estimated measure of force (see at least "In some embodiments, a method for estimating contact force on an anatomical structure may include outputting a notification based on a determination that an indication of actual contact force exceeds a selected contact force threshold. Outputting a notification may include transmitting a recommendation to a device, displaying a notification at an interface, playing a sound, providing haptic feedback, and/or any other method of notifying an individual of excessive force applied." in par. 0600).

Regarding Claim 11, Wolf teaches:
11.
The method of claim 10, further comprising: scaling the estimated measure of force (see at least "Consistent with the present embodiments, estimating contact force may include analyzing images and/or surgical video to generate an estimate of a magnitude of an actual contact force according to a scale." in par. 0584); and generating a feedback signal based on the scaled and estimated measure of force (see at least "a notification may include information specifying that a contact force has exceeded or failed to exceed a selected contact force threshold. In some embodiments, a notification may include information relating to a selected contact force and/or an estimate of an actual contact force, including an indication of a contact angle, a magnitude of a contact force, a contact location, and/or other information relating to a contact force." in par. 0601); and providing the sensory perceptible feedback based on the feedback signal (see at least "In some examples, notifications of different intensity (i.e., severity or magnitude) may be provided according to an indication of actual force. For example, outputting a notification may be based on a difference between an indication of actual force and a selected force threshold or a comparison of an indication of actual force with a plurality of thresholds. A notification may be based on a level of intensity of an actual force or an intensity of a difference between an actual force and a selected force threshold. In some embodiments, a notification may include information specifying a level of intensity." in par. 0602).

Regarding Claim 12, Wolf teaches:
12. The method of claim 10, wherein the sensory perceptible feedback comprises at least one of audible feedback, visual feedback, and/or physical feedback (see at least "Such a recommendation may include any guidance, regardless of the form of the guidance (e.g., audio, video, text-based, control commands to a surgical robot, or other data transmission that provides advice and/or direction)." in par. 0553 and "In some embodiments, outputting a recommendation may include playing a sound, altering a light (e.g., turning a light on or off, pulsing a light), providing a haptic feedback signal, and/or any other method of alerting a person or providing information to a person or surgical robot." in par. 0554).

Regarding Claim 13, Wolf also teaches:
A system, comprising: one or more processors; one or more non-transitory machine-readable media storing data and executable instructions (see at least "The disclosed systems and methods may be implemented using a combination of conventional hardware and software as well as specialized hardware and software, such as a machine constructed and/or programmed specifically for performing functions associated with the disclosed method steps. Consistent with other disclosed embodiments, non-transitory computer-readable storage media may store program instructions, which are executable by at least one processing device and perform any of the steps and/or methods described herein." in par. 0005), wherein the data comprises image data that includes at least one image frame of a portion of an instrument and an environmental structure with which the instrument is adapted to interact (see at least "For example, video/image data obtained from different view angles may be used to determine the position of the surgical instrument relative to a surface of the anatomical structure, to determine a condition of an anatomical structure, to determine pressure applied to an anatomical structure, or to determine any other information where multiple viewing angles may be beneficial." in par. 0088), and wherein the instructions, when executed by the processor, cause the processor to perform a method comprising: classifying a contact condition between the portion of the instrument and the environmental structure based on the at least one image frame (see at least "In another example, the distance between the medical instrument and the anatomical structure may be compared with a selected threshold, and distinguishing the first group of frames from the second group of frames may further be based on a result of the comparison. For example, the threshold may be selected based on the type of the medical instrument, the type of the anatomical structure, the type of the surgical procedure, and so forth. In other embodiments, distinguishing the first group of frames from the second group of frames may further be based on a detected interaction between the medical instrument and the anatomical structure. An interaction may include any action by the medical instrument that may influence the anatomical structure, or vice versa. For example, the interaction may include a contact between the medical instrument and the anatomical structure… An example of such training example may include a video clip of a surgical procedure, together with a label indicating the presence of particular interactions between medical instruments and anatomical structures in the video clip, or together with a label indicating the absence of particular interactions between medical instruments and anatomical structures in the video clip." in par. 0205); estimating a spatial position or displacement of the portion of the instrument to provide an estimated spatial position or displacement for the instrument, in which the estimated spatial position or displacement for the instrument is based on the image data (see at least "For example, video/image data obtained from different view angles may be used to determine the position of the surgical instrument relative to a surface of the anatomical structure, to determine a condition of an anatomical structure, to determine pressure applied to an anatomical structure, or to determine any other information where multiple viewing angles may be beneficial." in par. 0088); and responsive to the classified contact condition between the instrument and the environmental structure, estimating a measure of force between the instrument and the environmental structure based on the estimated spatial position or displacement for the instrument (see at least "For example, visual action recognition algorithms may be used to analyze the video and detect the interactions between the medical instrument and the anatomical structure." in par. 0115 and "Consistent with the present embodiments, estimating contact force may include analyzing images and/or surgical video to generate an estimate of a magnitude of an actual contact force according to a scale. Force estimation through image analysis may involve an examination of a tissue/modality interface to observe an effect on the tissue. For example, if the modality is a medical instrument such as forceps pressing against an organ such as a gallbladder, machine vision techniques applied to the location of force application may reveal movement and/or changes of the organ that is reflective of the force applied." in par. 0584).

Regarding Claim 14, Wolf also teaches: A system for implementing the method of Claim 5 (see Claim 5 analysis for rejection of the method).

Regarding Claim 15, Wolf also teaches: A system for implementing the method of Claim 3 (see Claim 3 analysis for rejection of the method).

Regarding Claim 18, Wolf also teaches: A system for implementing the method of Claim 12 (see Claim 12 analysis for rejection of the method).

Regarding Claim 19, Wolf also teaches:
The system of claim 18, further comprising: the instrument, in which the instrument comprises a robotically controlled instrument and the environmental structure comprises biological tissue and/or other structures on or within a region of interest (see at least "A surgeon may include any person performing a surgical procedure, including a doctor or other medical professional, any person assisting a surgical procedure, and/or a surgical robot." in par. 0535 and "Video footage may depict any aspect of a surgical procedure and may depict a patient (internally or externally), a medical professional, a robot, a medical tool, an action, and/or any other aspect of a surgical procedure." in par. 0537); and a user interface device in communication with the robotically controlled instrument, the user interface device configured to provide the sensory perceptible feedback (see at least "In some embodiments, a method for estimating contact force on an anatomical structure may include outputting a notification based on a determination that an indication of actual contact force exceeds a selected contact force threshold. Outputting a notification may include transmitting a recommendation to a device, displaying a notification at an interface, playing a sound, providing haptic feedback, and/or any other method of notifying an individual of excessive force applied." in par. 0600).

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C.
103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claim(s) 6-7, 16-17, 20 is/are rejected under 35 U.S.C. 103 as being unpatentable over Wolf et al. (US 20200273575, hereinafter Wolf) in view of Kingston et al. (US 20220339790, hereinafter Kingston).

Regarding Claim 6, Wolf teaches:
6. The method of claim 1, wherein estimating the spatial position further comprises: applying a keypoints model to the at least one image frame to extract a set of geometric points of the portion of the instrument, in which the set of geometric points represents a geometric relationship of the geometric points of the instrument (see at least "In some examples, the image data may be preprocessed to extract edges, and the preprocessed image data may comprise information based on and/or related to the extracted edges. In some examples, the image data may be preprocessed to extract image features from the image data. Some non-limiting examples of such image features may comprise information based on and/or related to edges; corners; blobs; ridges; Scale Invariant Feature Transform (SIFT) features; temporal features; and so forth." in par. 0082); wherein estimating the measure of force comprises estimating the measure of force based on the three-dimensional normalized position for the portion of the instrument (see at least "For example, motion detection algorithms may be used to estimate the motion of the medical instrument in the video and to estimate the motion of the anatomical structure in the video, and the estimated motion of the medical instrument may be compared with the estimated motion of the anatomical structure to determine the relative movement." in par. 0204 and "In some exemplary embodiments of the present disclosure, distinguishing the first group of frames from the second group of frames may further be based on a detected relative position between the medical instrument and the anatomical structure." in par. 0205 and "Consistent with the present embodiments, estimating contact force may include analyzing images and/or surgical video to generate an estimate of a magnitude of an actual contact force according to a scale. Force estimation through image analysis may involve an examination of a tissue/modality interface to observe an effect on the tissue. For example, if the modality is a medical instrument such as forceps pressing against an organ such as a gallbladder, machine vision techniques applied to the location of force application may reveal movement and/or changes of the organ that is reflective of the force applied." in par. 0584).

Wolf does not appear to explicitly teach all of the following, but Kingston does teach: determining pixel coordinates in the at least one image frame based on the extracted set of geometric points (see at least "Determining the coordinate position of the effector feature in the first coordinate system may further include sampling the first image and the second image every M lines of pixels, where M is an integer greater than or equal to 2, and detecting at least one edge on each sampled line of the first image and the second image." in par. 0119); and applying a position estimation model to provide a three-dimensional normalized position for the portion of the instrument based on the pixel coordinates in the at least one image frame (see at least "By performing triangulation, computing system 102 may be configured to determine the three-dimensional position of the effector feature/nozzle 300 position, or specific portion of effector feature/nozzle 300 position, e.g., the position of tip 400, etc., in three degrees of freedom (DOF) based on the geometric relationship between a first camera 126 and a second camera 128 within image system 121." in par. 0106).

It would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to have modified the method taught by Wolf to incorporate the teachings of Kingston, wherein the 3D pose of the instrument attached to the robot is determined based on a plurality of pixel coordinates identified as geometric points on the instrument. The motivation to incorporate the teachings of Kingston would be to improve accuracy and reduce error in instrument positioning (see par. 0033).

Regarding Claim 7, Wolf as modified by Kingston teaches:
7. The method of claim 6, Wolf further teaches: wherein the three-dimensional normalized position for the portion of the instrument includes a plurality of discrete normalized position values determined by the position estimation model over a time interval (see at least "In various cases, when a camera (e.g., camera 115) tracks a moving or deforming object (e.g., when camera 115 tracks a moving surgical instrument" in par. 0308), and estimating the measure of force comprises: applying a force estimation model to the plurality of discrete normalized position values to predict the measure of force (see at least "Moreover, the force may be estimated at discrete points in time or may be estimated continuously. In some embodiments, an estimate of a contact force may include an estimate of a contact location, a contact angle, and/or an estimate of any other feature of contact force." in par. 0584), in which the force estimation model comprises a neural network trained on state information derived from the instrument and/or position measurements of the instrument with corresponding labeled images representative of an instrument interacting with an object (see at least "Based on historical video footage from prior procedures where force application was previously observed, an estimate of the magnitude of force applied can be made for the current video. The force magnitude estimate may include a unit of measurement (e.g., pounds, pounds per square inch, Newtons, kilograms, or other physical units) or may be based on a relative scale. A relative scale may include a categorical scale, a numeric scale, and/or any other measure. A categorical scale may reflect a level of force (e.g., a scale including multiple levels such as a high force, a medium force, a low force, or any other number of levels). A contact force may be estimated according to a numerical scale such as a scale of 1-10. Moreover, the force may be estimated at discrete points in time or may be estimated continuously. In some embodiments, an estimate of a contact force may include an estimate of a contact location, a contact angle, and/or an estimate of any other feature of contact force." in par. 0584 and "some embodiments, a method for estimating contact force on an anatomical structure may include analyzing received image data to determine an identity of an anatomical structure reflected in image data. Analyzing received image data may include any method of image analysis, consistent with the present embodiments. Some non-limiting examples of algorithms for identifying anatomical structures in images and/or videos are described above. Analyzing received image data may include, for example, methods of object recognition, image classification, homography, pose estimation, motion detection, and/or other image analysis methods. Analyzing received image data may include artificial intelligence methods including implementing a machine learning model trained using training examples, consistent with disclosed embodiments…For example, received image data may be analyzed using an artificial neural network configured to detect and/or identify an anatomical structure from images and/or videos." in par. 0586).

Regarding Claim 16, Wolf as modified by Kingston also teaches: A system for implementing the method of Claim 6 (see Claim 6 analysis for rejection of the method).

Regarding Claim 17, Wolf as modified by Kingston also teaches: A system for implementing the method of Claim 7 (see Claim 7 analysis for rejection of the method).

Regarding Claim 20, Wolf teaches:
20. A system (see at least "Embodiments of this disclosure include systems, methods, and computer readable media for estimating contact force on an anatomical structure during a surgical procedure disclosed." in par. 0022), comprising: an imaging device configured to provide image data including a plurality of image frames, in which the image frames include a remotely control instrument and a deformable structure (see at least "For example, video/image data obtained from different view angles may be used to determine the position of the surgical instrument relative to a surface of the anatomical structure, to determine a condition of an anatomical structure, to determine pressure applied to an anatomical structure, or to determine any other information where multiple viewing angles may be beneficial." in par. 0088 and "A specific action may include any action performed by a surgeon (e.g., a human or robotic surgeon) during a surgical procedure, or by a person or robot assisting a surgical procedure." in par.
0550); a computing apparatus including instructions stored in non-transitory memory, which are executable by a processor (see at least "The disclosed systems and methods may be implemented using a combination of conventional hardware and software as well as specialized hardware and software, such as a machine constructed and/or programmed specifically for performing functions associated with the disclosed method steps. Consistent with other disclosed embodiments, non-transitory computer-readable storage media may store program instructions, which are executable by at least one processing device and perform any of the steps and/or methods described herein." in par. 0005), wherein the instructions comprise: a contact detection model that classifies a contact condition between a portion of the instrument and the deformable structure based on at least one image frame (see at least "At step 940, process 900 may include distinguishing the first group of frames from the second group of frames based on the relative movement, wherein the first group of frames includes surgical activity frames and the second group of frames includes non surgical activity frames." in par. 0204 and "For example, the interaction may include a contact between the medical instrument and the anatomical structure, an action by the medical instrument on the anatomical structure (such as cutting, clamping, applying pressure, scraping, etc.), a reaction by the anatomical structure (such as a reflex action), or any other form of interaction." in par. 0205); a keypoint identification model that generates keypoints data based on the at least one image frame, in which the keypoints data defines a geometric network of keypoints of the instrument and includes coordinates of pixels or voxels in the at least one image frame (see at least "In some examples, the image data may be preprocessed to extract edges, and the preprocessed image data may comprise information based on and/or related to the extracted edges. In some examples, the image data may be preprocessed to extract image features from the image data. Some non-limiting examples of such image features may comprise information based on and/or related to edges; corners; blobs; ridges; Scale Invariant Feature Transform (SIFT) features; temporal features; and so forth." in par. 0082); a force estimation model that generates a predicted force estimate responsive to the classified contact condition between the instrument and the deformable structure, the predicted measure of force between the instrument and the environmental structure being determined based on the predicted position estimate (see at least "For example, motion detection algorithms may be used to estimate the motion of the medical instrument in the video and to estimate the motion of the anatomical structure in the video, and the estimated motion of the medical instrument may be compared with the estimated motion of the anatomical structure to determine the relative movement." in par. 0204 and "In some exemplary embodiments of the present disclosure, distinguishing the first group of frames from the second group of frames may further be based on a detected relative position between the medical instrument and the anatomical structure." in par. 0205 and "Consistent with the present embodiments, estimating contact force may include analyzing images and/or surgical video to generate an estimate of a magnitude of an actual contact force according to a scale. Force estimation through image analysis may involve an examination of a tissue/modality interface to observe an effect on the tissue. For example, if the modality is a medical instrument such as forceps pressing against an organ such as a gallbladder, machine vision techniques applied to the location of force application may reveal movement and/or changes of the organ that is reflective of the force applied." in par. 0584).

Wolf does not appear to explicitly teach all of the following, but Kingston does teach: a position estimation model that generates a predicted position estimate representative of a spatial position or displacement of the portion of the instrument to, in which the estimated spatial position or displacement for the instrument is based on the keypoints data (see at least "By performing triangulation, computing system 102 may be configured to determine the three-dimensional position of the effector feature/nozzle 300 position, or specific portion of effector feature/nozzle 300 position, e.g., the position of tip 400, etc., in three degrees of freedom (DOF) based on the geometric relationship between a first camera 126 and a second camera 128 within image system 121." in par. 0106 and "Determining the coordinate position of the effector feature in the first coordinate system may further include sampling the first image and the second image every M lines of pixels, where M is an integer greater than or equal to 2, and detecting at least one edge on each sampled line of the first image and the second image." in par. 0119).

It would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to have modified the system taught by Wolf to incorporate the teachings of Kingston, wherein the 3D pose of the instrument attached to the robot is determined based on a plurality of pixel coordinates identified as geometric points on the instrument.
The motivation to incorporate the teachings of Kingston would be to improve accuracy and reduce error in instrument positioning (see par. 0033) Claim(s) 8 is/are rejected under 35 U.S.C. 103 as being unpatentable over Wolf et al (US 20200273575, hereinafter Wolf) in view of Kingston et al (US 20220339790, hereinafter Kingston) and Eldridge et al (US 20090118864, hereinafter Eldridge) Regarding Claim 8, Wolf as modified by Kingston teaches: 8. The method of claim 6, Wolf and Kingston does not appear to explicitly teach all of the following, but Eldridge does teach: wherein the position estimation model further is programmed to provide the three-dimensional normalized position for the portion of the instrument based on additional position data, in which the additional position data is representative of a sensed position for the portion of the instrument and/or measured parameters associated with a joint space for the instrument. (see at least "The forward kinematics of the robot 102 may define the transformation from joint space to task space. Usually, however, the task is specified in task space, and a computer decides how to move the robot in order to accomplish the transformation from joint space to task space. The transformation is typically done via the inverse kinematics of the robot 102, which maps task space to joint space. Both the forward and inverse transformations depend on the kinematic model of the robot 102, which will typically differ from the physical system to some degree." in par. 0033 and “The camera 206 is used to visually observe the tool 214 to enforce, and/or calculate a deviation from, the specified geometric constraint for the TCP of the tool for the plurality of wrist poses Wr.sub.i 218.” In par. 
0044) It would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to have modified the method taught by Wolf as modified by Kingston to incorporate the teachings of Eldridge wherein both joint angle data and data from the camera detecting the robot tool is used to determine the position of the tool. The motivation to incorporate the teachings of Eldridge would be to improve accuracy of end effector/tool position determination (see par. 0070). Conclusion Any inquiry concerning this communication or earlier communications from the examiner should be directed to DYLAN M KATZ whose telephone number is (571)272-2776. The examiner can normally be reached Mon-Thurs. 8:00-6:00. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Abby Lin can be reached on (571) 270-3976. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300. Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000. 
/DYLAN M KATZ/Examiner, Art Unit 3657
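The Kingston passage the examiner cites for the position-estimation limitation (par. 0106) describes recovering a 3D tool-tip position from the geometric relationship between two cameras. The standard technique behind that kind of localization is linear (DLT) triangulation; a minimal sketch is below. The projection matrices, coordinates, and the `triangulate` helper are illustrative assumptions for discussion, not Kingston's actual implementation.

```python
# Minimal sketch of two-camera linear (DLT) triangulation, the standard
# technique behind the kind of 3D tool-tip localization Kingston describes.
# All matrices and coordinates here are illustrative, not from the reference.
import numpy as np

def triangulate(P1: np.ndarray, P2: np.ndarray, x1, x2) -> np.ndarray:
    """Recover a 3D point from its pixel projections in two cameras.

    P1, P2 : 3x4 camera projection matrices.
    x1, x2 : (u, v) normalized image coordinates of the same keypoint.
    """
    # Each view contributes two linear constraints on the homogeneous point X.
    A = np.array([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    # The solution is the right singular vector for the smallest singular value.
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]
    return X[:3] / X[3]  # de-homogenize

# Two toy cameras: identical intrinsics, second translated 1 unit along x.
P1 = np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = np.hstack([np.eye(3), np.array([[-1.0], [0.0], [0.0]])])
tip = np.array([0.5, 0.2, 4.0])                    # "instrument tip"
x1 = tip[:2] / tip[2]                              # projection in camera 1
x2 = (tip - [1.0, 0.0, 0.0])[:2] / tip[2]          # projection in camera 2
recovered = triangulate(P1, P2, x1, x2)
print(recovered)
```

A claim-mapping argument could probe whether identifying "geometric points on the instrument" in each image, as claimed, is actually disclosed by Kingston's edge sampling, or only by this generic triangulation machinery.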

Prosecution Timeline

Sep 30, 2024
Application Filed
Dec 11, 2025
Non-Final Rejection — §102, §103
Apr 09, 2026
Applicant Interview (Telephonic)
Apr 09, 2026
Examiner Interview Summary

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12596378
Autonomous Control and Navigation of Unmanned Vehicles
2y 5m to grant Granted Apr 07, 2026
Patent 12594663
ROBOT SYSTEM AND CART
2y 5m to grant Granted Apr 07, 2026
Patent 12589499
Mobile Construction Robot
2y 5m to grant Granted Mar 31, 2026
Patent 12589491
METHODS, SYSTEMS, AND DEVICES FOR MOTION CONTROL OF AT LEAST ONE WORKING HEAD
2y 5m to grant Granted Mar 31, 2026
Patent 12582491
CONTROL OF A SURGICAL INSTRUMENT HAVING BACKLASH, FRICTION, AND COMPLIANCE UNDER EXTERNAL LOAD IN A SURGICAL ROBOTIC SYSTEM
2y 5m to grant Granted Mar 24, 2026
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

1-2
Expected OA Rounds
87%
Grant Probability
99%
With Interview (+20.8%)
2y 7m
Median Time to Grant
Low
PTA Risk
Based on 279 resolved cases by this examiner. Grant probability derived from career allow rate.
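The projection figures above reduce to simple ratios over the examiner's resolved cases. A minimal sketch of the arithmetic, using the career totals reported on this page (242 granted of 279 resolved); the with/without-interview split is a hypothetical example, since the page reports only the aggregate +20.8% lift:

```python
# How the headline figures above can be derived from resolved-case counts.
# The 242/279 career totals come from this page; the interview split below
# is hypothetical, since the page reports only the aggregate lift.
granted, resolved = 242, 279

allow_rate = granted / resolved
print(f"Career allow rate: {allow_rate:.0%}")  # 87%

# Interview lift = allow rate among resolved cases with an interview minus
# the allow rate among resolved cases without one (hypothetical counts).
with_int_granted, with_int_total = 99, 100
without_granted, without_total = granted - with_int_granted, resolved - with_int_total
lift = with_int_granted / with_int_total - without_granted / without_total
print(f"Interview lift: {lift:+.1%}")  # +19.1% under this assumed split
```

The actual (unreported) interview split would be needed to reproduce the page's +20.8% figure exactly.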
