Prosecution Insights
Last updated: April 19, 2026
Application No. 18/030,919

SURGICAL ROBOT, AND GRAPHICAL CONTROL DEVICE AND GRAPHIC DISPLAY METHOD THEREFOR

Non-Final OA §101 §102 §103 §112
Filed
Oct 31, 2023
Examiner
JUNG, JAEWOOK
Art Unit
3656
Tech Center
3600 — Transportation & Electronic Commerce
Assignee
Shenzhen Edge Medical Co. Ltd.
OA Round
1 (Non-Final)
Grant Probability: 33% (At Risk)
OA Rounds: 1-2
To Grant: 2y 8m
With Interview: 99%

Examiner Intelligence

Grants only 33% of cases
Career Allow Rate: 33% (1 granted / 3 resolved; -18.7% vs TC avg)
Strong +100% interview lift
Interview Lift: +100.0% (with vs. without, across resolved cases with interview)
Typical timeline
Avg Prosecution: 2y 8m (27 currently pending)
Career history
Total Applications: 30 (across all art units)

Statute-Specific Performance

§101: 7.9% (-32.1% vs TC avg)
§103: 53.7% (+13.7% vs TC avg)
§102: 14.1% (-25.9% vs TC avg)
§112: 23.2% (-16.8% vs TC avg)
Black line = Tech Center average estimate • Based on career data from 3 resolved cases

Office Action

§101 §102 §103 §112
Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Drawings

The drawings are objected to as failing to comply with 37 CFR 1.84(p)(4) because reference character “100” has been used to designate both the robot arm of Fig. 39 (the specification contains no mention of “100” for the robot arm in [0268] when discussing Fig. 39) and the virtual camera of Fig. 6 ([0104]). Corrected drawing sheets in compliance with 37 CFR 1.121(d) are required in reply to the Office action to avoid abandonment of the application. Any amended replacement drawing sheet should include all of the figures appearing on the immediate prior version of the sheet, even if only one figure is being amended. Each drawing sheet submitted after the filing date of an application must be labeled in the top margin as either “Replacement Sheet” or “New Sheet” pursuant to 37 CFR 1.121(d). If the changes are not accepted by the examiner, the applicant will be notified and informed of any required corrective action in the next Office action. The objection to the drawings will not be held in abeyance.

Specification

Applicant is reminded of the proper language and format for an abstract of the disclosure. The abstract should be in narrative form and generally limited to a single paragraph on a separate sheet within the range of 50 to 150 words in length. The abstract should describe the disclosure sufficiently to assist readers in deciding whether there is a need for consulting the full patent text for details. The language should be clear and concise and should not repeat information given in the title. It should avoid using phrases which can be implied, such as “The disclosure concerns,” “The disclosure defined by this invention,” “The disclosure describes,” etc. In addition, the form and legal phraseology often used in patent claims, such as “means” and “said,” should be avoided.
The abstract of the disclosure is objected to because the description exceeds the 150-word limit. A corrected abstract of the disclosure is required and must be presented on a separate sheet, apart from any other text. See MPEP § 608.01(b).

Claim Objections

Claims 14 and 18 are objected to because of the following informalities:
a. Claim 14 recites “a fix position” where it appears to mean “a fixed position”.
b. Claim 18 recites “an warning threshold” where it appears to mean “a warning threshold”.
Appropriate correction is required.

Claim Rejections - 35 USC § 112

The following is a quotation of 35 U.S.C. 112(b):

(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.

The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:

The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.

Claims 3, 10, and 13 are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or for applications subject to pre-AIA 35 U.S.C. 112, the applicant), regards as the invention.

Claim 3 recites the limitation “acquire a projection point of the second position of each of the plurality of feature points on the projection plane according to the virtual focal length and the contour information”. However, neither claim 3 nor its parent claim, claim 1, introduces “a virtual focal length”; the first mention of a virtual focal length appears in claim 2, in a different claim tree. The examiner recommends either amending the claim to introduce “a virtual focal length” or amending claim 3 to depend from claim 2, as the claims disclose similar matter.
For prior art examination, the examiner will read “the virtual focal length” as “a virtual focal length”.

Claim 10 recites “an unmatched second feature point”. However, the examiner does not see any mention within the claim tree of “an unmatched first feature point” prior to the introduction of “an unmatched second feature point”. The examiner recommends rewording “an unmatched second feature point” as “an unmatched feature point”, as the claim tree does not appear to recite a corresponding unmatched first feature point. For prior art examination, the examiner will read “an unmatched second feature point” as “an unmatched feature point”.

Claim 13 recites “a union space of the reachable workspace of the operating arm”. However, the examiner does not understand what is meant by a union space: a union typically refers to the joining of two objects, whereas a union space here appears to be the reachable workspace of a plurality of arms. The examiner recommends clarifying what union is being performed. For prior art examination, the examiner will interpret “a union space of the reachable workspace of the operating arm” as “a space of the reachable workspace of the operating arm”.

Claim Rejections - 35 USC § 101

35 U.S.C. 101 reads as follows:

Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.

Claim 20 is rejected under 35 U.S.C. § 101 because the claimed invention is directed to an abstract idea without significantly more.

Independent claim 20 is directed toward an apparatus. Therefore, independent claim 20 is directed to a statutory category of invention under Step 1.

Under Step 2A, Prong 1, the claims are analyzed to determine whether one or more of the claims recites subject matter that falls within one of the following groups of abstract ideas: (1) mental processes, (2) certain methods of organizing human activity, and/or (3) mathematical concepts.
In this case, independent claim 20 is directed to an abstract idea without significantly more. Specifically, the claim, under its broadest reasonable interpretation, covers certain mental processes/organizing human activity/mathematical concepts. The language of independent claim 20 is used for illustration:

a memory for storing computer programs; and a processor for loading and executing the computer programs; wherein the computer programs are configured to be loaded and executed by the processor to perform the graphic display method as claim 19

As seen in claim 19, the steps above recite mental processes described by the steps performable by the configured controller: obtain, determine, fit, and connect. Each of these steps can be directed toward a performable mental task such as observing, thinking, and reasoning. As explained above, independent claim 20 therefore recites an abstract idea under Step 2A, Prong 1.

Under Step 2A, Prong 2, the claims are analyzed to determine whether the claim, as a whole, integrates the abstract idea into a practical application. As noted in the 2019 PEG, it must be determined whether any additional elements in the claim beyond the abstract idea integrate the exception into a practical application in a manner that imposes a meaningful limit on the judicial exception. The courts have indicated that additional elements such as merely using a computer to implement an abstract idea, adding insignificant extra-solution activity, or generally linking use of a judicial exception to a particular technological environment or field of use do not integrate a judicial exception into a “practical application”; see at least MPEP 2106.04(d).

This judicial exception is not integrated into a practical application. In particular, the claim only recites one additional element: using a processor to perform the listed steps. The processor in all steps is recited at a high level of generality (i.e., as a generic processor performing generic computer functions) such that it amounts to no more than mere instructions to apply the exception using a generic computer component. Accordingly, this additional element does not integrate the abstract idea into a practical application because it does not impose meaningful limits on practicing the abstract idea. Additionally, the limitation of “display the projected image on the display” does not provide a sufficient improvement to technology (see MPEP 2106.05(a)(II), “Gathering and analyzing information using conventional techniques and displaying the result”). The claim is directed to an abstract idea. Therefore, taken alone, the additional elements do not integrate the abstract idea into a practical application. Furthermore, looking at the additional limitations as an ordered combination or as a whole, the limitations add nothing significant that is not already present when looking at the elements taken individually. Because the additional elements do not integrate the abstract idea into a practical application by imposing meaningful limits on practicing the abstract idea, independent claims 1, 19, and 20 are directed to an abstract idea.

Under Step 2B, the claims do not include any additional elements that are sufficient to amount to significantly more than the judicial exception. As discussed above with respect to integration of the abstract idea into a practical application in Step 2A, Prong Two, the additional element of limiting the use of the idea to one particular environment employs generic computer functions to execute an abstract idea and, therefore, does not add significantly more. Limiting the use of the abstract idea to a particular environment or field of use cannot provide an inventive concept. The examiner encourages Applicant to set an interview to discuss potential amendments for overcoming the above rejections under 35 U.S.C. § 101.

Claim Rejections - 35 USC § 102

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:

A person shall be entitled to a patent unless –
(a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.
(a)(2) the claimed invention was described in a patent issued under section 151, or in an application for patent published or deemed published under section 122(b), in which the patent or application, as the case may be, names another inventor and was effectively filed before the effective filing date of the claimed invention.

Claims 1-3, 5-7, 11, and 18-20 are rejected under 35 U.S.C. 102(a)(1) as being anticipated by US20150065793A1 (Diolaiti).

Regarding claims 1 and 19, Diolaiti discloses a surgical robot and a graphic display method of a surgical robot, the robot comprising:

an input portion;
Diolaiti discloses an input portion ([0038], console 10).

a display;
Diolaiti discloses a display ([0038], 3-D monitor 104).

an operating arm comprising a plurality of joints and a plurality of sensors,
Diolaiti discloses an operating arm ([0041], articulatable surgical tools 241) comprising a plurality of joints (Fig. 3, joints 343, 345, 347) and a plurality of sensors ([0067], “the sensors may be included in the instruments, 211, 231, 241 …”).
the plurality of sensors being configured to sense joint variables of the plurality of joints,
See [0067] of Diolaiti: “such as rotation sensors that sense rotational movement of rotary joints and linear sensors that sense linear movement of prismatic joints in the instruments 211, 231, 241”.

the operating arm further comprising a feature point sequence composed of a plurality of feature points, the plurality of feature points being arranged orderly, and each of the plurality of joints being associated with at least one of the plurality of feature points; and
See Fig. 3 of Diolaiti, where the operating arm comprises a feature point sequence composed of a plurality of feature points (joints 343, 345, and 347).

a controller, coupled to each of the input portion, the display, and the plurality of sensors, the controller being configured to:
Diolaiti discloses a controller ([0038], “a processor (also referred to herein as a “controller”) 102.”) coupled to each of the input portion (console 10), the display (3-D monitor 104), and the plurality of sensors ([0098], “sensors associated with the joints of the input device 108 sense such movement at sampling intervals (appropriate for the processing speed of the controller 102 and camera control purposes)”).

obtain the feature point sequence of the operating arm and a kinematic model corresponding to the feature point sequence;
See Fig. 10 of Diolaiti. The feature point sequence is obtained by the use of sensors associated with the joints ([0067]) to further obtain a kinematic model corresponding to the feature point sequence ([0070], instrument link positions and orientations 1005).

obtain the joint variables sensed by the plurality of sensors, and
See Fig. 10 of Diolaiti, where instrument joint positions 1001 are the joint variables obtained through the plurality of sensors ([0067]).

obtain a virtual camera selected by the input portion;
See [0092] of Diolaiti: “As another possible reference frame that may be used, FIG. 21 illustrates an "isometric auxiliary view" reference frame 2102 which corresponds to a viewing point of the auxiliary view being displayed on the auxiliary display 140 (such as shown in FIG. 12) and/or monitor 104 (such as shown in window 1502 of FIG. 15).”, where the view contains a virtual camera 2103.

determine a projection point of each of the plurality of feature points in the feature point sequence on a projection plane of the virtual camera according to the kinematic model and the joint variables;
See Figs. 12 and 21 of Diolaiti. Fig. 12 represents a two-dimensional exemplary view of a virtual camera (see the camera in Fig. 21) that comprises a projection point of each of the plurality of feature points in the feature point sequence on a projection plane of the virtual camera, where each point’s position is a result of the kinematic model and the joint variables.

fit and connect the projection point of each of the plurality of feature points orderly to generate a projected image of the operating arm; and
See Fig. 12 of Diolaiti, where each projection point of each of the plurality of feature points is orderly fit and connected by links to generate a projected image of the operating arm.

display the projected image on the display.
Diolaiti discloses displaying the projected image on the display ([0072], “auxiliary display screen 140”).
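As mapped above for claims 1 and 19, the claimed pipeline is: compute each feature point's position from the kinematic model and the sensed joint variables, project those points onto the virtual camera's projection plane, and connect the projection points in order to form the projected image. A minimal sketch of that pipeline, assuming a hypothetical planar serial arm and a simple pinhole model (the function names, link lengths, and fixed depth are illustrative assumptions, not the Applicant's or Diolaiti's implementation):

```python
import math

def forward_kinematics(joint_variables, link_lengths):
    # Hypothetical planar kinematic model: each sensed joint variable is a
    # rotation; each feature point is the position of one joint, expressed in
    # the virtual camera's coordinate system (fixed depth z = 0.5 for brevity).
    points, x, y, theta = [(0.0, 0.0, 0.5)], 0.0, 0.0, 0.0
    for angle, length in zip(joint_variables, link_lengths):
        theta += angle
        x += length * math.cos(theta)
        y += length * math.sin(theta)
        points.append((x, y, 0.5))
    return points

def project(points, focal_length):
    # Pinhole projection of each feature point onto the projection plane
    # placed at the virtual focal length of the virtual camera.
    return [(focal_length * x / z, focal_length * y / z) for x, y, z in points]

def projected_image(projections):
    # "Fit and connect ... orderly": join consecutive projection points into
    # line segments forming the projected image of the operating arm.
    return list(zip(projections, projections[1:]))

joints = [0.3, -0.2, 0.4]                       # sensed joint variables
points = forward_kinematics(joints, [0.10, 0.10, 0.08])
segments = projected_image(project(points, focal_length=0.02))
```

The ordered `zip` over consecutive projections is what preserves the "arranged orderly" property of the feature point sequence in the rendered polyline.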
Regarding claim 2, with all of the limitations of claim 1, the robot further comprises:

when determining the projection point of each of the plurality of feature points in the feature point sequence on the projection plane of the virtual camera according to the kinematic model and the joint variables, the controller is further configured to:

acquire a first position of each of the plurality of feature points of the feature point sequence in a reference coordinate system based on the kinematic model and the joint variables;
Diolaiti discloses acquiring a first position of each of the plurality of feature points of the feature point sequence ([0067]) in a reference coordinate system based on the kinematic model ([0069], kinematic models 1003) and the joint variables ([0069], joint positions 1001).

convert the first position of each of the plurality of feature points to a second position in a coordinate system of the virtual camera;
See Fig. 21 and [0092] of Diolaiti. Diolaiti discloses the use of virtual camera 2103 to view reference frame 2102, where Fig. 12 of Diolaiti shows an exemplary view. In Fig. 12, Diolaiti discloses that the plurality of feature points are converted from the first position to a second position in a coordinate system of the virtual camera.

acquire a virtual focal length of the virtual camera and determine the projection plane of the virtual camera based on the virtual focal length; and
See Fig. 21 of Diolaiti, where the virtual focal length of the virtual camera can be seen as the line between the focal point 2104 ([0092]) and camera 2103. Furthermore, Diolaiti discloses that the position and orientation of virtual camera 2103 are computed to obtain a selected azimuth angle α, where the projection plane of the virtual camera is based on both the virtual focal length and angle α.

acquire a projection point of the second position of each of the plurality of feature points on the projection plane based on the virtual focal length.
See [0092] of Diolaiti, where virtual camera 2103 is configurable to have different positions and orientations, causing a different focal length, and provides auxiliary views such as the one in Fig. 12 that would be affected by a change in perspective of the camera.

Regarding claim 3, with all of the limitations of claim 1, the robot further comprises:

wherein when determining the projection point of each of the plurality of feature points in the feature point sequence on the projection plane of the virtual camera according to the kinematic model and the joint variables, the controller is further configured to:

acquire a first position of each of the plurality of feature points in a reference coordinate system based on the kinematic model and the joint variables;
Diolaiti discloses acquiring a first position of each of the plurality of feature points of the feature point sequence ([0067]) in a reference coordinate system based on the kinematic model ([0069], kinematic models 1003) and the joint variables ([0069], joint positions 1001).

convert the first position of each of the plurality of feature points to a second position in a coordinate system of the virtual camera;
See Fig. 21 and [0092] of Diolaiti. Diolaiti discloses the use of virtual camera 2103 to view reference frame 2102, where Fig. 12 of Diolaiti shows an exemplary view. In Fig. 12, Diolaiti discloses that the plurality of feature points are converted from the first position to a second position in a coordinate system of the virtual camera.

acquire a contour information of each of the plurality of joints corresponding to the corresponding feature points which is associated with the joint;
See Fig. 12 of Diolaiti, where the view of the virtual camera 2103 discloses the contour information of each of the plurality of joints corresponding to the corresponding feature points.

acquire a projection point of the second position of each of the plurality of feature points on the projection plane according to the virtual focal length and the contour information; and
See [0092] of Diolaiti, where virtual camera 2103 is configurable to have different positions, causing a different focal length, and orientations, causing different contour information, and provides auxiliary views such as the one in Fig. 12 that would be affected by a change in perspective of the camera.

when fitting and connecting the projection point of each of the plurality of feature points orderly to generate the projected image of the operating arm, the controller is further configured to: fit and connect the projection point of each of the plurality of feature points orderly to generate the projected image of the operating arm according to the contour information and an order of the plurality of feature points corresponding to the projection points of the plurality of feature points in the feature point sequence.
See Fig. 12 of Diolaiti, where the projection point of each of the plurality of feature points is fitted and connected to generate the projected image of the operating arm according to the contour information, with an order of the plurality of feature points corresponding to the projection points in the feature point sequence ([0058], “first joint 323”, “second joint 325”), where Diolaiti discloses an ordered connection ([0057], “first and second joint assemblies (also referred to herein simply as joints)”).

Regarding claim 5, with all of the limitations of claim 1, the robot further comprises:

wherein the virtual camera has a virtual focal length and/or
See Fig. 21 of Diolaiti, where the virtual focal length is the distance between virtual camera 2103 and focal point 2104.

which is selectable,
See [0092] of Diolaiti: “In particular, the location of the focal point 2104 along the longitudinal axis 2101 and the size of the azimuth angle α are selected”.

when determining the projection point of each of the plurality of feature points in the feature point sequence on the projection plane of the virtual camera according to the kinematic model and the joint variables, the controller is further configured to: acquire the virtual focal length and/or the virtual aperture selected by the input portion; and
See the citation for “which is selectable” within claim 5 above.

determine the projection point of each of the plurality of feature points in the feature point sequence on the projection plane of the virtual camera according to the virtual focal length and/or the kinematic model, and the joint variables.
Diolaiti discloses determining the projection point of each of the plurality of feature points in the feature point sequence on the projection plane of the virtual camera (see Fig. 12 of Diolaiti, where the projection points of each of the feature points are shown in the plane of the virtual camera 2103) according to the virtual focal length, the kinematic model ([0069], kinematic model 1003), and the joint variables ([0067], sensors included in 211, 231, and 241 to sense joint movement).

Regarding claim 11, with all of the limitations of claim 1, the robot further comprises:

acquire a maximum range of motion in a first orientation of the operating arm;
Diolaiti discloses providing a visual indicator to indicate that an undesirable event or condition, such as nearing a limit of its range of motion (maximum range of motion), is close to occurring ([0077]), where a warning could only be indicated if a maximum range of motion was acquired in a first orientation of the operating arm.

calculate an amount of the motion in the first orientation of the operating arm based on the joint variables and the kinematic model;
See Fig. 14 of Diolaiti, where an amount of motion was calculated and displayed onto the perspective from a first orientation; the figure particularly identifies a portion 1402 of the surgical tool highlighted to indicate reaching a maximum limit of motion ([0079]).

generate an icon based on the maximum range of motion and the amount of the motion in the first orientation; and
See the citation above regarding “calculate an amount …”, where Fig. 14 of the citation generated an icon (highlighted portion 1402) based on the maximum range of motion and the amount of motion in the first orientation.

display the icon on the display.
See Fig. 14 of Diolaiti, where the icon is displayed.

Regarding claim 18, with all of the limitations of claim 1, the robot further comprises:

wherein the controller is further configured to: mark at least partial of a first operating arm in the projected image and display the first operating arm on the display when the first operating arm of the operating arm reaches a threshold of an event;
See Fig. 14 of Diolaiti, where Diolaiti marks at least partial of a first operating arm in the projected image (see bubble 1401 ([0082])) and displays the first operating arm on the display when the first operating arm reaches a threshold of an event ([0082]: “As another example, FIG. 14 shows a semi-translucent sphere or bubble 1401 (preferably colored red) which is displayed by the method as part of the rendering process when a warning threshold is reached so as to indicate to the operator that the highlighted portions 1402, 1403 of the surgical tool 241 and camera 211 are dangerously close to colliding.”).

the threshold is an warning threshold, and the event is a situation to be avoided;
See the citation above to [0082] and Fig. 14 of Diolaiti.

the warning threshold is based on a range of motion of at least one of the plurality of joints in the first operating arm, and the situation to be avoided is a limitation of a range of motion of at least one of the plurality of joints; or the warning threshold is based on a distance between the first operating arm and a second operating arm of the operating arm, the situation to be avoided is a collision between the first operating arm and the second operating arm.
In view of the citation above within claim 18, Diolaiti discloses that the warning threshold is based on a range of motion of at least one of the plurality of joints (Fig. 14, portions 1402, 1403) in the first operating arm, and the situation to be avoided is a limitation of a range of motion of at least one of the plurality of joints ([0082]).

Regarding claim 20, Diolaiti discloses a graphical control device of a surgical robot, comprising:

a memory for storing computer programs; and
See [0075] of Diolaiti: “For example, the GUI 170 or voice recognition system 160 may be adapted to provide an interactive means for the Surgeon to select the viewing mode and/or change the viewing point of an auxiliary view of the articulatable camera 211 and/or articulatable surgical tools 231, 241 as they extend out of the distal end of the entry guide 200.”, where a selection indicates stored functions.

a processor for loading and executing the computer programs;
See [0045] of Diolaiti.

wherein the computer programs are configured to be loaded and executed by the processor to perform the graphic display method as claim 19.
See [0045] of Diolaiti.

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.

Claims 8-10 and 15-17 are rejected under 35 U.S.C. 103 as being unpatentable over US20150065793A1 (Diolaiti).
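The limit-of-motion warning mapped above for claims 11 and 18 amounts to computing an amount of motion from the sensed joint variables, comparing it against each joint's maximum range of motion, and flagging the arm once a warning threshold is crossed. A hedged sketch of that comparison (the threshold fraction, joint names, and ranges are invented for illustration, not taken from the application or Diolaiti):

```python
def motion_warnings(joint_variables, max_ranges, warning_fraction=0.9):
    # For each joint, normalize the sensed joint variable against its maximum
    # range of motion and flag it once it nears either limit (the "situation
    # to be avoided" being the hard range-of-motion limit itself).
    warnings = {}
    for name, value in joint_variables.items():
        lo, hi = max_ranges[name]
        fraction = (value - lo) / (hi - lo)   # 0.0 = lower limit, 1.0 = upper
        if fraction >= warning_fraction or fraction <= 1.0 - warning_fraction:
            warnings[name] = round(fraction, 3)
    return warnings

flags = motion_warnings(
    {"pitch": 0.95, "yaw": 0.10},              # sensed joint variables
    {"pitch": (0.0, 1.0), "yaw": (-1.0, 1.0)}  # maximum ranges of motion
)
```

In a display pipeline, the flagged joints would drive the warning icon or highlighted portion rather than a returned dictionary.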
Regarding claim 8, with all of the limitations of claim 1, the robot further comprises: wherein the operating arm comprises a camera arm with an image end instrument, and the controller is further configured to: acquire a camera parameter of the image end instrument of the camera arm, and calculate a visible area of the image end instrument according to the camera parameter, the camera parameter comprising a focal length and an aperture; Diolaiti discloses acquiring a camera parameter of the image end instrument of the camera arm, (see Fig. 1 of Diolaiti, where the robotic arm is entering entry aperture 150 with a focal point of choice ([0092]) and calculate a visible area of the image end instrument according to the camera parameter (see Fig. 15, where window 1501 captures anatomic structure 360). determine a pose and position of the image end instrument in a reference coordinate system based on the joint variables and the kinematic model of the camera arm; See the citation of claim 1 regarding “obtain the joint variables …” and monitor 1502 of Fig. 15, where in the monitor a pose and position of the image end instrument (camera 211) in a reference coordinate system is determined based on the sensed joint variables and kinematic model of the camera arm. convert the visible area of the image end instrument to a visible area of the virtual camera based on a conversion relationship between the pose and position of the image end instrument and the pose and position of the virtual camera in the reference coordinate system; and While Diolaiti does not explicitly disclose converting a visible area of the image end instrument to a visible area of the virtual camera based on a conversion relationship between the pose and position of the image end instrument and the pose and position of the virtual camera in the reference coordinate system, Fig. 
21 of Diolaiti does disclose a selected position of a virtual camera 2103 at a particular elevation, focal point, and azimuth angle ([0092]), where the figure also contains camera instrument 211. The perspective of the virtual camera can be seen in at least Fig. 15, where the virtual camera is at the particular location to view the other links and camera instrument 211 on the bottom window 1502 and camera instrument view is seen in 1501. One of ordinary skill in the art would find it obvious that Diolaiti has a conversion relationship between the pose and position of the image end instrument and the pose and position of the virtual camera as Fig. 15 discloses a perspective view of virtual camera 2103 at a particular elevation, focal point, and azimuth angle comprising a visible area of the image end instrument at a pose and position disclosed in the figure calculate a boundary line of the visible area of the virtual camera on the projection plane, and display the boundary line in the projected image on the display. While Diolaiti does not explicitly disclose calculating a boundary line of the visible area of the virtual camera on the projection plane and displaying the boundary line in the projected image on the display, Diolaiti discloses a boundary line of the visible area of the camera instrument 211 on the projection plane (Fig. 17, image 1700 contains a dotted line rectangular boundary line of the visible area), and displaying the boundary line in the projected image on the display. One of ordinary skill in the art would find it obvious that there must be a calculation of a boundary line of the visible area of the virtual camera on the projection plane if there is a boundary line displayed in the projected image. Regarding claim 9, with all of the limitations of claim 1, the robot further comprises: wherein the operating arm comprises a camera arm with an image end instrument and a surgical arm with an operating end instrument; See at least Fig. 
12 for an exemplary operating arm comprising a camera arm with an image end instrument and a surgical arm with an operating end instrument. when fitting and connecting the projection point of each of the plurality of feature points orderly to generate the projected image of the operating arm, the controller is further configured to: acquire an operating image of a surgical area captured by the image end instrument of the camera arm; See Fig. 17 of Diolaiti, where an operating image of a surgical area is captured by the image end instrument of the camera arm. identify a feature portion of the surgical arm from the operating image; See Fig. 17. Diolaiti discloses identifying feature portions (portions 1731, 1741) of the surgical arm from the operating image ([0080]). match out an associated first feature point from the feature point sequence according to the identified feature portion; and See Fig. 17 of Diolaiti, where an associated first feature point from the feature point sequence is matched out according to the identified feature portion. fit and connect the projection point of each of the plurality of feature points orderly, mark a first projection point from the projection points associated with the first feature point, and mark a line segment connected to the first projection point to generate the projected image of the operating arm. Diolaiti discloses fitting and connecting the projection point of each of the plurality of feature points orderly (see at least Fig. 12 of Diolaiti and [0057], first and second joint assemblies 323, 325), marking a first projection point from the projection points associated with the first feature point (see Fig. 17 of Diolaiti), and marking a line segment connected to the first projection point to generate the projected image of the operating arm (see Fig. 17 of Diolaiti, where line segments are connected to the first projection point). 
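The geometric operation underlying the claim 1/8/9 mappings above — computing the feature point sequence from the joint variables via a kinematic model, projecting each point onto the virtual camera's projection plane, and fitting and connecting the projection points orderly — can be sketched in a few lines. This is a minimal illustration under assumed simplifications (a planar two-link arm and a downward-looking pinhole virtual camera); the function names and parameter values are hypothetical and are not taken from the application or from Diolaiti.

```python
import math

def forward_kinematics(link_lengths, joint_angles):
    # Ordered feature points (joint positions) of a planar serial arm,
    # derived from the joint variables and a simple kinematic model.
    x = y = heading = 0.0
    points = [(0.0, 0.0, 0.0)]
    for length, q in zip(link_lengths, joint_angles):
        heading += q
        x += length * math.cos(heading)
        y += length * math.sin(heading)
        points.append((x, y, 0.0))
    return points

def project_to_virtual_camera(points, camera_height, focal_length):
    # Pinhole projection onto the virtual camera's projection plane;
    # the camera looks straight down the z-axis from camera_height.
    projected = []
    for px, py, pz in points:
        depth = camera_height - pz
        projected.append((focal_length * px / depth, focal_length * py / depth))
    return projected

# Feature point sequence for a hypothetical 2-link operating arm.
feature_points = forward_kinematics([0.30, 0.25], [math.pi / 4, -math.pi / 6])
projection_points = project_to_virtual_camera(
    feature_points, camera_height=1.0, focal_length=0.05)
# "Fit and connect ... orderly": consecutive projection points form the
# line segments of the projected image of the operating arm.
segments = list(zip(projection_points, projection_points[1:]))
```

Connecting consecutive projection points in sequence order is what produces the stick-figure style projected image of the arm that the claims then mark, overlay, or animate.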
Regarding claim 10, with all of the limitations of claim 9, the robot further comprises: wherein the feature point sequence comprises an unmatched second feature point, after matching out the associated first feature point from the feature point sequences based on the identified feature portion, the controller is further configured to: acquire the unmatched second feature point; In light of the rationale of claim 9, one of ordinary skill in the art would find it obvious that a system that obtains a first feature point, where each feature point has its own sensor ([0067], “the information may be provided by sensors coupled to joints …”), would also obtain a second feature point by use of a second joint sensor. generate an image model of the feature portion according to the contour information, the joint variables, and the kinematic model of the feature portion corresponding to the second feature point; See Fig. 17, where an image model of the feature portion is generated according to the contour information (Diolaiti, Fig. 10, instrument link positions and orientations 1005), the joint variables (Diolaiti, Fig. 10, instrument joint positions 1001) and the kinematic model of the feature portion (Diolaiti, Fig. 10, instrument kinematic model 1003) corresponding to the second feature point. convert the image model to a supplementary image in a coordinate system of the image end instrument; See Fig. 17 of Diolaiti, where a supplementary image in a coordinate system of the image end instrument is shown outside of 1700. 
splice the supplementary image with an image of the feature portion corresponding to the first feature point based on the sequence of the first and the second feature points in the feature point sequence to generate a complete sub-image of the operating arm in the operating image; and While Diolaiti does not explicitly disclose splicing the supplementary image with an image of the feature portion corresponding to the first feature point based on the sequence of the first and the second feature points in the feature point sequence to generate a complete sub-image of the operating arm in the operating image, Diolaiti discloses splicing the supplementary image (Fig. 17, outside of 1700) with an image of the feature portion (Fig. 17, inside of 1700) to generate a sub-image of the operating arm in the operating image. As Diolaiti tracks each of the feature points in the sequence ([0067]), one of ordinary skill in the art would find it obvious that the system of Diolaiti may also splice the images corresponding to the first feature point based on the sequence of the first and second feature points in the feature point sequence to generate a complete sub-image. display the operating image with the complete sub-image of the operating arm on the display. See the rationale within claim 10 above regarding “splice the supplementary image …”. Regarding claim 15, with all of the limitations of claim 1, the robot further comprises: wherein when obtaining the virtual camera selected by the input portion, the controller is further configured to: obtain the virtual camera selected by the input portion and at least two target positions of the virtual camera input by the input portion; See Fig. 14 of Diolaiti, where an exemplary selection of virtual cameras is placed to provide two target positions of the virtual camera for two perspectives of the surgical robot system. This is selected by the use of an input portion (input device 108) during a camera positioning mode ([0086]). 
determine a target projection point of each of the plurality of feature points in the feature point sequence on a projection plane of the virtual camera at each of the target positions, according to a preset speed of the virtual camera, the kinematic model, and the joint variables; See Fig. 14 of Diolaiti. Diolaiti discloses determining a target projection point of each of the plurality of feature points in the feature point sequence (joints such as 1402 and 1403 of the camera instrument 211 and surgical instruments 241) on a projection plane of the virtual camera (window 1421) at each of the target positions according to the kinematic model and joint variables. However, Diolaiti does not explicitly disclose a preset speed of the virtual camera. While Diolaiti may not explicitly disclose a preset speed of the virtual camera, the use of a virtual camera is to provide a perspective view of the entire system and its configuration (such as window 1421, where the joints are close to colliding with each other ([0079])). One of ordinary skill in the art would find it obvious that the virtual camera possesses a preset speed that updates its view continuously, as the image is disclosed to be displayed in real time ([0043]). fit and connect the target projection point of each of the target positions orderly to generate a target projected image of the operating arm; See Fig. 12 of Diolaiti, where the figure shows that the target projection points of each of the target positions are fitted and connected orderly to generate a target projected image of the operating arm. generate an animation based on each target projected image; and In view of the rationale above, one of ordinary skill in the art would find it obvious that the updating real-time image would be an animation as the image becomes the target projected image. play the animation on the display based on a preset frequency. 
In view of the rationale above, one of ordinary skill in the art would find it obvious that the animation generated from the real-time image on the display would be based on a preset frequency, as monitors are known to possess technological refresh rates (measured in Hz). Regarding claim 16, with all of the limitations of claim 1, the robot further comprises: wherein when obtaining the virtual camera selected by the input portion, the controller is further configured to: obtain a motion trace of the virtual camera input by the input portion; While Diolaiti does not explicitly disclose obtaining a motion trace of the virtual camera input by the input portion, Diolaiti discloses obtaining a motion trace of the surgical camera input by the input portion ([0044]) and selecting the position and orientation of the virtual camera to have a desired configuration ([0092]). One of ordinary skill in the art would find it obvious that a motion trace of the virtual camera may also be inputted by the input portion, as both virtual and physical cameras are controllable during the camera positioning mode ([0092]). discrete the motion trace to acquire discrete positions of the virtual camera, the discrete positions being target positions of the virtual camera; In view of the rationale above in claim 16, one of ordinary skill in the art would find it obvious that the virtual camera would be updated to discrete target positions, as it is controlled by the motion trace input by the input portion and would follow the trace inputted by an operator. determine a target projection point of each of the plurality of feature points in the feature point sequence on a projection plane of the virtual camera at each of the target positions, according to a preset speed of the virtual camera, the kinematic model, and the joint variables; See equivalent limitation of claim 15. 
fit and connect the target projection point of each of the target positions orderly to generate a target projected image of the operating arm; See equivalent limitation of claim 15. generate an animation based on each of the target projected images; and See equivalent limitation of claim 15. play the animation on the display based on a preset frequency. See equivalent limitation of claim 15. Regarding claim 17, with all of the limitations of claim 1, the robot further comprises: wherein the operating arm comprises a camera arm with an image end instrument, and the controller is configured to: acquire an operating image of a surgical area captured by the image end instrument; See Fig. 17 of Diolaiti, where the figure discloses acquiring an operating image of a surgical area captured by the image end instrument (camera 211). display the operating image on the display; See Fig. 17 of Diolaiti, where the image is displayed on auxiliary display screen 140. display the projected image suspended on the operating image; See Fig. 17 of Diolaiti, where the projected image (arms of end effectors 341 and 331) is displayed suspended on the operating image. when displaying the projected image suspended on the operating image, the controller is further configured to: acquire an overlapping region between the operating image and the projected image, and obtain a first image property of the operating image in the overlapping region; While Diolaiti does not explicitly disclose obtaining a first image property of the operating image in the overlapping region, Diolaiti does disclose acquiring an overlapping region between the operating image and the projected image (see Fig. 17, where image 1700 shows an overlapping region of the projected image onto the operating image) and obtaining a first image property ([0079], portions 1402 and 1403 are highlighted with color to indicate collision) of the region. 
One of ordinary skill in the art would find it obvious that the system of Diolaiti is capable of also obtaining a first image property of the operating image in the overlapping region. adjust a second image property of the projected image in the overlapping region based on the first image property. In view of the rationale above, Diolaiti further discloses altering more properties of the image, such as color, intensity, or frequency of blinking on and off ([0077]). Claims 4 and 14 are rejected under 35 U.S.C. 103 as being unpatentable over US20150065793A1 (Diolaiti) and further in view of US2014055489A1 (Itkowitz) from the IDS. Regarding claim 4, with all of the limitations of claim 1, the robot further comprises: wherein when fitting and connecting the projection point of each of the plurality of feature points orderly to generate the projected image of the operating arm, the controller is further configured to: acquire a type of the operating arm, and match out an icon of an end instrument of the operating arm according to the type; While Diolaiti discloses acquiring a type of the operating arm (see Fig. 12 of Diolaiti, where articulatable camera 211 and articulatable surgical tools 231, 241 are shown differently), Diolaiti does not disclose matching out an icon of an end instrument of the operating arm according to the type. However, from a similar field of endeavor, Itkowitz discloses matching out an icon of an end instrument (see Fig. 7 of Itkowitz) of the operating arm according to the type ([0029] of Itkowitz, “tool information may include information of which robotic arm each of the operative tool is operatively coupled to at the time”). One of ordinary skill in the art would find it obvious, prior to the applicant’s effective filing date, to combine the system of Itkowitz and Diolaiti, as Itkowitz solves the problem of displaying information relevant to the operations of the system while minimizing visual distractions ([0005] of Itkowitz). 
determine a pose and position of the end instrument on the projection plane of the virtual camera based on the joint variables and the kinematic model; See Fig. 10 of Diolaiti. Fig. 10 shows the acquisition of the pose and position of the end instruments in step 1005 ([0069]) before rendering the image in 1006. process the icon by rotating and/or zooming based on the pose and position of the end instrument on the projection plane of the virtual camera; and While Diolaiti does not disclose processing the icon by rotating and/or zooming based on the pose and position of the end instrument on the projection plane of the virtual camera, in light of the rationale of claim 4 regarding the limitation “acquire a type …”, Itkowitz discloses processing the icon by rotating ([0035] of Itkowitz, “In particular, the icon 334 has a numeral “2” on it to indicate the tool 33 is operatively coupled to the robotic arm 34, which is designated as robotic arm “2” by the numeral “2” being printed on it as shown in FIG. 2. Likewise, the icon 354 has a numeral “3” on it to indicate the tool 35 is operatively coupled to the robotic arm 36, which is designated as robotic arm “3” by the numeral “3” being printed on it as shown in FIG. 2.”) based on the pose and position of the end instrument (see Fig. 6 of Itkowitz, where the numerals are rotated onto the wrist of the arms) on the projection plane of the virtual camera. One of ordinary skill in the art would find it obvious, prior to the applicant’s effective filing date, to combine the system of Itkowitz with the system of Diolaiti, as the icons processed in the way disclosed by Itkowitz allow for a quick reference relevant to the current operation and procedure of the robot. splice the icon with one projection point located at a distal end of the operating arm to generate the projected image. 
See the rationale of claim 4 regarding the limitation “process the icon …” Regarding claim 14, with all of the limitations of claim 12, the robot further comprises: wherein the controller is configured to display the projected image on a first display window of the display, and to generate a plurality of icons of the plurality of virtual cameras; and While Diolaiti does explicitly disclose wherein the controller is configured to display the projected image on a first display window of the display (see at least Fig. 14), Diolaiti does not explicitly disclose generating a plurality of icons of the plurality of virtual cameras. However, Diolaiti does provide an exemplary configuration of the surgical system, where virtual camera 2103 is placed in a position with a specific chosen azimuth and focal point ([0092]). One of ordinary skill in the art would find it obvious to try generating a plurality of icons of the plurality of virtual cameras to better track which perspectives of the surgical robot are being viewed and are available on different windows. the plurality of icons has a fix position relative to the projected image, and is configured to move with a change of a viewpoint of the projected image; or While Diolaiti does not explicitly disclose the plurality of icons has a fix position relative to the projected image, and is configured to move with a change of a viewpoint of the projected image, from a similar field of endeavor, Itkowitz discloses rendering a plurality of icons with a fixed position (Itkowitz, Fig. 6, “2” on it to indicate tool 33 is operatively coupled to the robotic arm 34 ([0034])). One of ordinary skill in the art would find it obvious, prior to the applicant’s effective filing date, to combine these two systems as the use of icons fixed relative to the projected image helps to provide clarity while operating the surgical robotic system. Claims 12 and 13 are rejected under 35 U.S.C. 
103 as being unpatentable over US20150065793A1 (Diolaiti) and further in view of US20170165841 (Kamoi). Regarding claim 12, with all of the limitations of claim 1, the robot further comprises: wherein a plurality of virtual cameras selectable by the input portion have difference poses and positions in a reference coordinate system; and See Fig. 14 of Diolaiti, where a top and side perspective are provided at the same time in windows 1421 and 1422 respectively ([0083]), where each virtual camera providing the perspective is selectable ([0092]). the poses and positions of the plurality of virtual cameras in the reference coordinate system are determined by a reachable workspace of the operating arm in the reference coordinate system. Diolaiti discloses the poses and position of the plurality of virtual cameras in the reference coordinate system are determined ([0092], “virtual camera 2103 whose position and orientation is preferably fixed in space during the camera positioning mode.”) but does not disclose the poses and positions determined by a reachable workspace of the operating arm in the reference coordinate system. However, from a similar field of endeavor, Kamoi discloses a robot system for controlling a robot with a video display apparatus for real-time tracking of a robot by camera (Abstract). Specifically, Kamoi discloses the use of a camera position/orientation estimating unit to estimate the position of the camera relative to the robot, where Kamoi discloses the camera is used for shooting a real space containing the robot ([0032]) and to track its reached position and orientation changes to match the virtual object to the real image ([0038]). 
One of ordinary skill in the art would find it obvious, prior to the applicant’s effective filing date, to combine the system of Kamoi with the system of Diolaiti, as the camera position/orientation estimating unit 21 would provide a functional upgrade to the selection of camera positions and orientations. Regarding claim 13, with all of the limitations of claim 12, the robot further comprises: wherein the poses and positions of the plurality of virtual cameras in the reference coordinate system are determined by a union space of the reachable workspace of the operating arm in the reference coordinate system; See the rationale of claim 12. the positions of the plurality of virtual cameras in the reference coordinate system remain outside of the union space, and the poses of the plurality of virtual cameras remain viewing towards the union space; and In view of the rationale of claim 12, see also Fig. 21 of Diolaiti, where virtual camera 2103 appears to be outside of the space of the reachable workspace. Furthermore, one of ordinary skill in the art would find it obvious that the virtual cameras remain outside and oriented toward the space of the reachable workspace, as a virtual camera within the reachable workspace would reduce the number of poses and orientations viewable by said cameras, resulting in unviewable configurations for surgery. each of the plurality of virtual cameras has a selectable virtual focal length, See Fig. 21 of Diolaiti and [0092]. the positions of the plurality of virtual cameras are located outside of a first area, the first area is a shortest area determined by the union space visible from the virtual focal length perfectly; or See the rationale of claim 13 regarding “the positions of the plurality of virtual cameras …”, where the space reachable by surgical tool 231 does not encompass the space of virtual camera 2103. 
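The geometry discussed for claims 12 and 13 — virtual camera positions kept outside the union of the operating arm's reachable workspace, with poses viewing toward it — admits a simple sketch if the union space is modeled as a bounding sphere. Everything below (the function name, the standoff factor, the sphere model itself) is an assumption made for illustration only, not something taken from the application or the cited art.

```python
import math

def place_virtual_cameras(workspace_center, workspace_radius, count, standoff=1.5):
    # Place `count` virtual cameras on a circle strictly outside the
    # reachable workspace (modeled as a bounding sphere), each posed
    # viewing back toward the workspace center.
    cx, cy, cz = workspace_center
    radius = workspace_radius * standoff  # strictly outside the union space
    cameras = []
    for k in range(count):
        azimuth = 2.0 * math.pi * k / count
        position = (cx + radius * math.cos(azimuth),
                    cy + radius * math.sin(azimuth),
                    cz)
        # Unit viewing direction from the camera toward the center.
        view = (-math.cos(azimuth), -math.sin(azimuth), 0.0)
        cameras.append({"position": position, "view": view})
    return cameras

# Four candidate virtual cameras around a unit-radius workspace at the origin.
cameras = place_virtual_cameras((0.0, 0.0, 0.0), workspace_radius=1.0, count=4)
```

Keeping the standoff factor above 1.0 guarantees every candidate position lies outside the modeled union space, which is the property claim 13 recites for the camera positions.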
Allowable Subject Matter Claims 6-7 are objected to as being dependent upon a rejected base claim, but would be allowable if rewritten in independent form including all of the limitations of the base claim and any intervening claims. The following is a statement of reasons for the indication of allowable subject matter: The prior art does not teach or suggest, in the context of the claims, increasing the virtual focal length of the virtual camera, and repeat a process of determining the projection point of each of the plurality of feature points in the feature point sequence on the projection plane of the virtual camera according to the virtual focal length and/or the virtual aperture, the kinematic model, and the joint variables before display in response to detecting whether the projected image is distorted. The closest prior art of record US20150065793A1 (Diolaiti) teaches manually selecting a virtual focal length of the virtual camera. Conclusion Any inquiry concerning this communication or earlier communications from the examiner should be directed to JAEWOOK JUNG whose telephone number is (571)272-5470. The examiner can normally be reached Monday - Friday, 9:00 AM - 5:00 PM. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Wade Miles, can be reached on (571) 270-7777. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300. Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. 
To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000. /J.J./Examiner, Art Unit 3656 /WADE MILES/Supervisory Patent Examiner, Art Unit 3656

Prosecution Timeline

Oct 31, 2023
Application Filed
Feb 06, 2026
Non-Final Rejection — §101, §102, §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12514149
SYSTEMS AND METHODS FOR SPRAYING SEEDS DISPENSED FROM A HIGH-SPEED PLANTER
2y 5m to grant Granted Jan 06, 2026
Patent 12480561
VEHICLE AND CONTROL METHOD THEREOF
2y 5m to grant Granted Nov 25, 2025
Study what changed to get past this examiner. Based on 2 most recent grants.


Prosecution Projections

1-2
Expected OA Rounds
33%
Grant Probability
99%
With Interview (+100.0%)
2y 8m
Median Time to Grant
Low
PTA Risk
Based on 3 resolved cases by this examiner. Grant probability derived from career allow rate.
