Prosecution Insights
Last updated: April 19, 2026
Application No. 18/255,336

SYSTEMS AND METHODS FOR GENERATING VIRTUAL REALITY GUIDANCE

Final Rejection (§103, §112)

Filed: May 31, 2023
Examiner: LE, SARAH
Art Unit: 2614
Tech Center: 2600 — Communications
Assignee: Intuitive Surgical Operations, Inc.
OA Round: 2 (Final)

Grant Probability: 67% (Favorable)
OA Rounds: 3-4
To Grant: 3y 1m
With Interview: 99%

Examiner Intelligence

Career Allow Rate: 67%, above average (172 granted / 258 resolved; +4.7% vs TC avg)
Interview Lift: +33.4% (strong), based on resolved cases with an interview
Typical Timeline: 3y 1m average prosecution; 22 applications currently pending
Career History: 280 total applications across all art units

Statute-Specific Performance

§101: 11.8% (-28.2% vs TC avg)
§103: 59.2% (+19.2% vs TC avg)
§102: 9.4% (-30.6% vs TC avg)
§112: 14.3% (-25.7% vs TC avg)
Tech Center averages are estimates. Based on career data from 258 resolved cases.
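The headline figures above are simple ratios over the examiner's resolved cases. As a quick illustration only (the report's exact methodology is not disclosed here), the Python sketch below recomputes the career allowance rate from the stated counts and shows one common definition of interview lift; the with/without-interview split in the example is hypothetical, chosen only to be consistent with the 172/258 totals.

```python
# Minimal sketch of how the examiner metrics above might be computed.
# The interview split and the "interview lift" definition are assumptions
# for illustration; the report does not state its exact methodology.

def allow_rate(granted: int, resolved: int) -> float:
    """Career allowance rate as a percentage of resolved cases."""
    return 100.0 * granted / resolved

def interview_lift(granted_with: int, resolved_with: int,
                   granted_without: int, resolved_without: int) -> float:
    """Difference in allowance rate between cases with and without an interview."""
    return (allow_rate(granted_with, resolved_with)
            - allow_rate(granted_without, resolved_without))

if __name__ == "__main__":
    # 172 granted / 258 resolved matches the 67% career allow rate above.
    print(f"Career allow rate: {allow_rate(172, 258):.1f}%")          # ~66.7%

    # Hypothetical split consistent with the totals above (72+100 granted,
    # 80+178 resolved); yields roughly the reported +33% lift.
    print(f"Interview lift: {interview_lift(72, 80, 100, 178):+.1f}%")  # ~+33.8%
```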

Office Action

§103, §112
Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

DETAILED ACTION

Response to Amendment

Applicant's amendments and remarks filed 10/13/2025 have been entered and considered but are not found convincing. Claims 1, 3-8, 10-17, and 36-38 have been amended. Claims 2 and 18-35 were cancelled. Claim 39 has been added. In summary, claims 1, 3-17, and 36-39 are pending in this application. Applicant's amendments have necessitated the new grounds of rejection set forth herein; accordingly, this action is made final.

Response to Arguments

Objection: Applicant has amended claims 2-8, 10-17, and 36-38 to overcome the objection. The objection to claims 2-8, 10-17, and 36-38 has been withdrawn.

Claim Rejections - 35 USC § 112: Applicant has amended claims 15-16 to overcome the rejection under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph. The rejection of claims 15-16 under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph has been withdrawn.

Claim Rejections - 35 USC § 103: Applicant's arguments with respect to independent claim 1 have been considered but are moot because the rejection has been modified to address the newly added limitations. The Examiner now relies on the new references Fuerst and Garcia Kilroy.

Regarding claim 4, Applicant has rewritten claim 4 in independent form. Applicant submits that the cited portions of Azizian and Ryan, alone or in combination, fail to disclose or suggest at least "generate virtual guidance based on the kinematic information, the virtual guidance including a virtual image of the medical component disposed in a second configuration, wherein the medical component is a robot-assisted manipulator assembly," as recited by claim 4. The Office Action on page 6 appears to analogize the instruments 156, 157, 158 of image 152 of Azizian to the "medical component" of claim 1. Then, when rejecting claim 4, the Office Action on page 15 appears to turn to paragraph [0003] of Ryan as allegedly teaching "wherein the medical component is a robot-assisted manipulator assembly." Cited paragraph [0003] of Ryan states (italics in Office Action, bold added): …. Applicant respectfully submits that paragraph [0006] of Ryan teaches away from "generate virtual guidance based on the kinematic information, the virtual guidance including a virtual image of the medical component disposed in a second configuration, wherein the medical component is a robot-assisted manipulator assembly," as recited by claim 4. For example, Ryan provides augmented reality display device 104 as a superior alternative to "large console type navigation or robotic systems ... in which the display and cameras are located outside the sterile field away from the surgeon." (Ryan, para. [0003].) Nowhere does Ryan disclose displaying virtual guidance "including a virtual image of the medical component disposed in a second configuration, wherein the medical component is a robot-assisted manipulator assembly." Rather, cited Fig. 7 of Ryan shows a virtual target 700 and a virtual tool 702. The virtual tool 702 is not analogous to "a robot-assisted manipulator assembly." For at least these reasons, the cited portions of Azizian and Ryan, alone or in combination, fail to disclose or suggest the features of independent claim 4. Accordingly, Applicant respectfully submits that claim 4 is patentable over Azizian and Ryan.

Examiner respectfully disagrees.
First, the Examiner cited portions of both references, AZIZIAN and Ryan, for the claim 4 limitation "wherein the medical component is a robot-assisted manipulator assembly"; more than the paragraphs identified by Applicant were cited for this limitation. Second, AZIZIAN is the primary reference, and Ryan is relied upon only to suggest the robot-assisted aspect. AZIZIAN teaches wherein the medical component is a robot-assisted manipulator assembly at least in Fig. 1A and [0020], [0022]: "The teleoperational assembly 12 supports and manipulates the medical instrument system 14 while the surgeon S views the surgical site through the console 16. An image of the surgical site can be obtained by the endoscopic imaging system 15, such as a stereoscopic endoscope, which can be manipulated by the teleoperational assembly 12 to orient the endoscope 15. The number of medical instrument systems 14 used at one time will generally depend on the diagnostic or surgical procedure and the space constraints within the operating room among other factors. The teleoperational assembly 12 may include a kinematic structure of one or more non-servo controlled links (e.g., one or more links that may be manually positioned and locked in place, generally referred to as a set-up structure) and a teleoperational manipulator. The teleoperational assembly 12 includes a plurality of motors that drive inputs on the medical instrument system 14." Further, AZIZIAN also teaches at least at [0027]: "FIG. 1B is a perspective view of one embodiment of a teleoperational assembly 12 which may be referred to as a patient side cart. The patient side cart 12 shown provides for the manipulation of three surgical tools 30a, 30b, 30c (e.g., instrument systems 14) and an imaging device 28 (e.g., endoscopic imaging system 15), such as a stereoscopic endoscope used for the capture of images of the site of the procedure. The imaging device may transmit signals over a cable 56 to the control system 20"; also in paragraph [0038]: "Various tool tracking techniques have been described, for example in U.S. Patent Application No. 11/865,014, filed September 30, 2007, disclosing "METHODS AND SYSTEMS FOR ROBOTIC INSTRUMENT TOOL TRACKING WITH ADAPTIVE FUSION OF KINEMATICS INFORMATION AND IMAGE INFORMATION" and U.S. Patent Application No. 11/130,471, filed May 16, 2005, disclosing "Methods and system for performing 3-D tool tracking by fusion of sensor and/or camera derived data during minimally invasive robotic surgery," which are incorporated by reference herein in their entirety. The display 150 thus provides the surgeon with additional environmental context beyond that provided by the endoscopic image 152 alone. For example, the anatomic model image 154 illustrates additional organs, bones, and tumors not visible on the endoscopic image 152, and the instrument illustrations 156', 157', 158' provide information about instrument trajectories not visible on the endoscopic image." Here, the teleoperational assembly supports and manipulates the medical instrument system, which is considered a robot-assisted manipulator assembly. Additionally, Ryan teaches the robot-assisted aspect at least at [0003]: "Current medical procedures are typically performed by a surgeon or medical professional with little or no assistance outside of the required tools to affect changes on the patient. For example, an orthopedic surgeon may have some measurement tools (e.g. rulers or similar) and cutting tools (e.g. saws or drills), but visual, audible and tactile inputs to the surgeon are not assisted.
In other words, the surgeon sees nothing but what he or she is operating on, hears nothing but the normal communications from other participants in the operating room, and feels nothing outside of the normal feedback from grasping tools or other items of interest in the procedure. Alternatively, large console type navigation or robotic systems are utilized in which the display and cameras are located outside the sterile field away from the surgeon." Here, a robotic system can be used as an option; Ryan merely presents robotic systems as an alternative. Therefore, the combination of AZIZIAN and Ryan teaches "wherein the medical component is a robot-assisted manipulator assembly."

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status. The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.

1. Claims 1, 5-6, 10, 14, 17, 36-37, and 39 are rejected under 35 U.S.C. 103 as being unpatentable over AZIZIAN et al., WO 2019/006028 ("AZIZIAN") in view of Ryan et al., U.S. Patent Application Publication No. 2018/0049622 ("Ryan"), further in view of Fuerst et al., U.S. Patent Application Publication No. 2021/0121233 ("Fuerst"), further in view of Garcia Kilroy et al., U.S. Patent Application Publication No. 2019/0005848 ("Garcia Kilroy").

Regarding independent claim 1, AZIZIAN teaches a system ([0020] Referring to FIG. 1A of the drawings, a teleoperational medical system for use in, for example, medical procedures including diagnostic, therapeutic, or surgical procedures, is generally indicated by the reference numeral 10. As will be described, the teleoperational medical systems of this disclosure are under the teleoperational control of a surgeon. In alternative embodiments, a teleoperational medical system may be under the partial control of a computer programmed to perform the procedure or sub-procedure.
In still other alternative embodiments, a fully automated medical system, under the full control of a computer”) comprising: a processor; and a memory having computer readable instructions stored thereon, the computer readable instructions, when executed by the processor, ([0023] The teleoperational medical system 10 also includes a control system 20. The control system 20 includes at least one memory 24 and at least one processor 22, and typically a plurality of processors, for effecting control between the medical instrument system 14, the operator input system 16, and other auxiliary systems 26 which may include, for example, imaging systems, audio systems, fluid delivery systems, display systems, illumination systems, steering control systems, irrigation systems, and/or suction systems. The control system 20 can be used to process the images of the surgical environment from the imaging system 15 for subsequent display to the surgeon S through the surgeon's console 16. The control system 20 also includes programmed instructions (e.g., a computer-readable medium storing the instructions) to implement some or all of the methods described in accordance with aspects disclosed herein”) cause the system to: receive an image of a medical environment ([0022],[0038] “At a process 108, the three-dimensionally mapped endoscopic image dataset is used to generate and display an image from a vantage point that may assist the surgeon. For example, an initial vantage point may be the viewpoint from the distal end of the endoscope within the patient anatomy (i.e., the conventional endoscopic view from the endoscope). The initial vantage point image may be constructed based on the three dimensional point cloud. FIG. 3 illustrates a display 150 visible by a surgeon at the operator console or on another display screen. The display 150 includes an endoscopic image 152 of a surgical environment including instruments 156, 157, 158. The image 152 is generated from the mapped endoscopic image dataset. Optionally, the display 150 includes an anatomic model image 154 generated from the anatomic model dataset. In this example, the endoscopic image 152 is overlaid, superimposed, blended or otherwise mapped onto the anatomic model image 154. The display 150 also includes instrument illustrations 156', 157', 158' overlaid or superimposed on the anatomic model image 154. The instrument illustrations 156', 157', 158' may be generated based on instrument pose and relative scale, known or determined from kinematic chain, position or shape sensor information, vision-based tracking or a combination of those”); identify a medical component in the image of the medical environment, the medical component disposed in a first configuration ([0038] At a process 108, the three-dimensionally mapped endoscopic image dataset is used to generate and display an image from a vantage point that may assist the surgeon. For example, an initial vantage point may be the viewpoint from the distal end of the endoscope within the patient anatomy (i.e., the conventional endoscopic view from the endoscope). The initial vantage point image may be constructed based on the three dimensional point cloud. FIG. 3 illustrates a display 150 visible by a surgeon at the operator console or on another display screen. The display 150 includes an endoscopic image 152 of a surgical environment including instruments 156, 157, 158. The image 152 is generated from the mapped endoscopic image dataset. 
Optionally, the display 150 includes an anatomic model image 154 generated from the anatomic model dataset. In this example, the endoscopic image 152 is overlaid, superimposed, blended or otherwise mapped onto the anatomic model image 154. The display 150 also includes instrument illustrations 156', 157', 158' overlaid or superimposed on the anatomic model image 154. The instrument illustrations 156', 157', 158' may be generated based on instrument pose and relative scale, known or determined from kinematic chain, position or shape sensor information, vision-based tracking or a combination of those. Various tool tracking techniques have been described, for example in U.S. Patent Application No. 11/865,014, filed September 30, 2007, disclosing "METHODS AND SYSTEMS FOR ROBOTIC INSTRUMENT TOOL TRACKING WITH ADAPTIVE FUSION OF KINEMATICS INFORMATION AND F AGE INFORMATION" and U.S. Patent Application No. 11/130,471, filed May 16, 2005. disclosing "Methods and system for performing 3-D tool tracking by fusion of sensor and/or camera derived data during minimally invasive robotic surgery," which are incorporated by reference herein their entirety. The display 150 thus provides the surgeon with additional environmental context beyond that provided by the endoscopic image 152 alone. For example, the anatomic model image 154 illustrates additional organs, bones, and tumors not visible on the endoscopic image 152, and the instrument illustrations 156', 157', 158' provide information about instrument trajectories not visible on the endoscopic image. In some embodiments, the endoscopic image 152 may be distinguishable from the anatomic model image 154 by a border illustration, color differences (e.g., the endoscopic image in color, and the anatomic model image in grayscale), in brightness, or other distinguishing characteristic”); receive kinematic information about the medical component (see at least [0022]; [0038] At a process 108, the three-dimensionally mapped endoscopic image dataset is used to generate and display an image from a vantage point that may assist the surgeon. For example, an initial vantage point may be the viewpoint from the distal end of the endoscope within the patient anatomy (i.e., the conventional endoscopic view from the endoscope). The initial vantage point image may be constructed based on the three dimensional point cloud. FIG. 3 illustrates a display 150 visible by a surgeon at the operator console or on another display screen. The display 150 includes an endoscopic image 152 of a surgical environment including instruments 156, 157, 158. The image 152 is generated from the mapped endoscopic image dataset. Optionally, the display 150 includes an anatomic model image 154 generated from the anatomic model dataset. In this example, the endoscopic image 152 is overlaid, superimposed, blended or otherwise mapped onto the anatomic model image 154. The display 150 also includes instrument illustrations 156', 157', 158' overlaid or superimposed on the anatomic model image 154. The instrument illustrations 156', 157', 158' may be generated based on instrument pose and relative scale, known or determined from kinematic chain, position or shape sensor information, vision-based tracking or a combination of those. Various tool tracking techniques have been described, for example in U.S. Patent Application No. 
11/865,014, filed September 30, 2007, disclosing "METHODS AND SYSTEMS FOR ROBOTIC INSTRUMENT TOOL TRACKING WITH ADAPTIVE FUSION OF KINEMATICS INFORMATION AND F AGE INFORMATION" and U.S. Patent Application No. 11/130,471, filed May 16, 2005. disclosing "Methods and system for performing 3-D tool tracking by fusion of sensor and/or camera derived data during minimally invasive robotic surgery," which are incorporated by reference herein their entirety. The display 150 thus provides the surgeon with additional environmental context beyond that provided by the endoscopic image 152 alone. For example, the anatomic model image 154 illustrates additional organs, bones, and tumors not visible on the endoscopic image 152, and the instrument illustrations 156', 157', 158' provide information about instrument trajectories not visible on the endoscopic image. In some embodiments, the endoscopic image 152 may be distinguishable from the anatomic model image 154 by a border illustration, color differences (e.g., the endoscopic image in color, and the anatomic model image in grayscale), in brightness, or other distinguishing characteristics”); and generate virtual guidance based on the kinematic information, the virtual guidance including a virtual image of the medical component disposed in a second configuration [see at least [0038] At a process 108, the three-dimensionally mapped endoscopic image dataset is used to generate and display an image from a vantage point that may assist the surgeon. For example, an initial vantage point may be the viewpoint from the distal end of the endoscope within the patient anatomy (i.e., the conventional endoscopic view from the endoscope). The initial vantage point image may be constructed based on the three dimensional point cloud. FIG. 3 illustrates a display 150 visible by a surgeon at the operator console or on another display screen. The display 150 includes an endoscopic image 152 of a surgical environment including instruments 156, 157, 158. The image 152 is generated from the mapped endoscopic image dataset. Optionally, the display 150 includes an anatomic model image 154 generated from the anatomic model dataset. In this example, the endoscopic image 152 is overlaid, superimposed, blended or otherwise mapped onto the anatomic model image 154. The display 150 also includes instrument illustrations 156', 157', 158' overlaid or superimposed on the anatomic model image 154. The instrument illustrations 156', 157', 158' may be generated based on instrument pose and relative scale, known or determined from kinematic chain, position or shape sensor information, vision-based tracking or a combination of those. Various tool tracking techniques have been described, for example in U.S. Patent Application No. 11/865,014, filed September 30, 2007, disclosing "METHODS AND SYSTEMS FOR ROBOTIC INSTRUMENT TOOL TRACKING WITH ADAPTIVE FUSION OF KINEMATICS INFORMATION AND F AGE INFORMATION" and U.S. Patent Application No. 11/130,471, filed May 16, 2005. disclosing "Methods and system for performing 3-D tool tracking by fusion of sensor and/or camera derived data during minimally invasive robotic surgery," which are incorporated by reference herein their entirety. The display 150 thus provides the surgeon with additional environmental context beyond that provided by the endoscopic image 152 alone. 
For example, the anatomic model image 154 illustrates additional organs, bones, and tumors not visible on the endoscopic image 152, and the instrument illustrations 156', 157', 158' provide information about instrument trajectories not visible on the endoscopic image. In some embodiments, the endoscopic image 152 may be distinguishable from the anatomic model image 154 by a border illustration, color differences (e.g., the endoscopic image in color, and the anatomic model image in grayscale), in brightness, or other distinguishing characteristics”; [0041] Optionally, the display 160 includes an anatomic model image 164 generated from the anatomic model dataset. In this example, the endoscopic image 162 is overlaid, superimposed, blended or otherwise mapped onto the anatomic model image 164. The display 160 also includes an instrument illustration 166 overlaid or superimposed on the anatomic model image 164. The instrument illustration 166 may be generated based on instrument pose and scaling known or determined from kinematic chain, position or shape sensor information, vision-based tracking or a combination of those. The display 160 thus provides the surgeon with additional environmental context beyond that provided by the endoscopic image 162 alone. For example, the anatomic model image 164 illustrates additional organs and bones not visible on the endoscopic image 162, and the instrument illustration 166 provides information about instrument trajectories not visible on the endoscopic image. From the second vantage point, the surgeon is able to observe a more expansive anatomical area while still maintaining awareness of the direct view from the endoscope. The display of the anatomic model image 164 is optional because in some examples, the endoscopic image 162 alone, presenting the three-dimensionally mapped endoscopic image dataset from the second vantage point, may provide the surgeon with sufficient spatial perspective.) AZIZIAN is understood to be silent on the remaining limitations of claim 1. In the same field of endeavor, Ryan teaches receive kinematic information about the medical component (see at least [0079] Referring to FIG. 5, the one or more cameras 500, 506 of the sensor suites (400, 422, 210, and 306) and the one or more visual markers 502, 504 are used to visually track a distinct object (e.g., a surgical tool, a desired location within an anatomical object, etc.) and determine attitude and position relative to the user 106. In one embodiment, each of the one or more markers is distinct and different from each other visually. Standalone object recognition and machine vision technology can be used for marker recognition. Alternatively, the present invention also provides for assisted tracking using IMUs 408 on one or more objects of interest, including but not limited to the markers 502, 504. Please note that the one or more cameras 500, 506 can be remotely located from the user 106 and provide additional data for tracking and localization.”); generate virtual guidance based on the kinematic information, the virtual guidance including a virtual image of the medical component disposed in a second configuration (see at least [0096] FIG. 7 depicts an alternate view of the MXUI previously shown in FIG. 6, wherein a virtual target 700 and a virtual tool 702 are presented to the user 106 for easy use in achieving the desired version and inclination. 
In this embodiment, further combinations of virtual reality are used to optimize the natural feeling experience for the user by having a virtual target 700 with actual tool 702 fully visible or a virtual tool (not shown) with virtual target fully visible. Other combinations of real and virtual imagery can optionally be provided. Presentation of data can be in readable form 704 or in the form of imagery including but not limited to 3D representations of tools or other guidance forms."; [0107] FIG. 23 depicts an exemplary embodiment of a MXUI shown to the user 106 via the display device 104 during resection of the femoral neck of a hip replacement procedure with a virtual resection guide 2300. A sagittal saw 2302 is shown having a plurality of fiducials 2304 defining a marker, allows the pose of the sagittal saw 2302 to be tracked. Resection of the femoral neck can be guided either by lining up the actual saw blade 2306 with the virtual resection guide 2300 in the case where the drill is not tracked or by lining up a virtual saw blade (not shown) with the virtual resection guide 2300 in the case where the saw 2302 is tracked. As with the tracked drill shown in FIG. 20, the angles of the saw 2302 may be displayed numerically if the saw 2302 is tracked. These angles could be displayed relative to the pelvic reference frame or the femoral reference frame. [0108] FIG. 24 depicts an exemplary embodiment of a MXUI shown to the user 106 via the display device 104 during positioning of the acetabular shell of a hip replacement procedure wherein a virtual target 2400 for the acetabular impactor assembly 1100 and a virtual shell 2402 are shown. Placement of the acetabular impactor assembly 1100 is guided by manipulating it to align with the virtual target 2400. The posterior/lateral quadrant of the shell portion of the virtual target may be displayed in a different color or otherwise visually differentiated from the rest of the shell 2402 to demarcate to the user 106 a target for safe placement of screws into the acetabulum. The numerical angle of the acetabular impactor and the depth of insertion relative to the reamed or un-reamed acetabulum are displayed numerically as virtual text 2404. A magnified stereoscopic image (not shown) similar to 2202 centered on the tip of the impactor may be displayed showing how the virtual shell interfaces with the acetabulum of the virtual pelvis 2102." where Ryan provides virtual guidance based on the posture and position of the medical component (tool), including the presentation of data in human-readable form (704) and a 3D representation of the tool. In addition, the virtual target (700) is a virtual image of the medical component, arranged at a position different from that of the virtual tool (702), and corresponds to the claimed second configuration.) Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the method of using a mixed reality surgical navigation system for medical procedures of AZIZIAN by generating virtual guidance based on the posture and position of the medical component as seen in Ryan, because this modification would assist in the performance of a medical procedure ([0005] of Ryan). Both AZIZIAN and Ryan are understood to be silent on the remaining limitations of claim 1.
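Ryan's cited passages describe guidance as aligning a tracked tool with a virtual target and displaying the resulting angles numerically. Purely as an illustration of that idea (none of this code comes from Ryan or from the application), a minimal sketch of such an angular-alignment computation might look like the following; the vectors and function name are invented for the example.

```python
# Hypothetical sketch of the kind of alignment guidance described in Ryan's FIG. 7
# discussion: compare an actual tool axis against a virtual target axis and report
# the error numerically. Illustrative only; not taken from any cited reference.
import math

def angle_between(u, v) -> float:
    """Angle in degrees between two 3D direction vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return math.degrees(math.acos(max(-1.0, min(1.0, dot / (nu * nv)))))

# Actual tool axis (e.g., from marker tracking) vs. virtual target axis (made-up values).
tool_axis = (0.10, 0.95, 0.30)
target_axis = (0.00, 1.00, 0.25)
print(f"Alignment error: {angle_between(tool_axis, target_axis):.1f} deg")
```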
In the same field of endeavor, Fuerst teaches receive kinematic information about the medical component ([0032] A surgical robotic kinematics processor 106 can generate kinematics parameters that define movement restraints and movement abilities of a physical surgical robotic arm. For example, each robotic arm can have various interconnected members, each of which can move rotate or travel in one or more defined directions, planes, or axis. Movements can be bound, for example, by angles or distances of travel. Thus, the kinetic parameters can define how a surgical robotic arm and surgical robotic tool can or cannot move, which in turn, determines a reach and movement capabilities of a tool in a workspace.”); receive an indicator of guidance type ([0048] In one embodiment, the VR simulation 100 can be communicated or output through a display, for example, on a user console 120. A view generator can generate one or more views 118 of the VR simulation, for example, a stadium view or first person view. Such view can be used by a user to further generate inputs to the optimizer 20. For example, a user can use the user console 120 to modify or replace the virtual patient, a workspace, virtual surgical equipment, and/or a virtual operating room. The VR simulation can be optimized at block 110 based on the adjustments made by the user. The view of the simulation can then be re-generated for the user, thus providing adjustments and optimization of the workflow based on user generated inputs. In this manner, the user can tailor the workflow and layout optimization to suit a particular patient and procedure type. This can streamline setup of robotic arms for different patients, different procedures, and help selection of operating rooms to facilitate such procedures.”); and generate virtual guidance based on the kinematic information and the indicator of guidance type, the virtual guidance including a virtual image of the medical component disposed in a second configuration ([0040] In one embodiment, the kinematics parameters can be generated that define movement constraints of a surgical robotic arm to be used in the procedure type. For example, the kinematics parameters can define a direction that a member of the robot moves, a maximum or minimum distance or angle that each member can travel, the mass of each member, the stiffness of each member, and/or speed and force in which members move. Thus, the arranging of the virtual surgical robotic arm or the virtual tool can further be based on the kinematics parameters of the virtual robotic arm so that the virtual surgical robotic arm can perform the surgical procedures type within the surgical workspace, with reference to the kinematics parameters. [0050] In one embodiment, as shown in FIG. 4, an optimization process 110 can optimize a room layout 152 and surgical robotic arm configuration 154, based on one or more inputs 138. The inputs can include changes to one or more of the following factors: the procedure type, the surgical workspace, the virtual patient, the tool position, and kinematics parameters.” [0052] Changes in a patient model, surgical tool position, robot kinematics, or surgical workspace can initiate an adjustment to the setup of the robotic arm. For example, if a size of a virtual patient changes, this can cause a change in a location of the workspace. The trocar position is adjusted to maximize the reach of a tool within the changed location of the workspace. 
The port location and position of the surgical robotic arm can be adjusted accordingly. It should be understood that 'setup', 'configuration', and 'arrangement' as discussed with relation to the robotic arm and tool can describe the manner in which members of the robotic arm and/or tool is positioned, either absolutely or relative to one another. Thus, the 'setup' or 'configuration' of the robotic arm can describe a particular pose of the robotic arm and a position of the tool which is attached to the robotic arm. [0053] In one embodiment, changes in the location or orientation of virtual surgical equipment (e.g., user console, scanning equipment, control tower) or staff personnel models can initiate a rearrangement of the virtual surgical equipment. Similarly, changes to a procedure type can initiate changes in the surgical workspace in a virtual patient which, in turn, can cause a change in default tool positions (e.g., a default trocar position), port locations, and surgical robotic arm positions. Similarly, changes in a virtual patient (for example, taller or heavier build) can initiate a change in the surgical workspace of the virtual patient, which can cause a ripple effect in tool position, port location, and surgical robotic arm configuration. Similarly, a change in platform position (e.g., height, angle or slope of table) can similarly initiate changes in the tool position, port location, and robotic arm configuration, given that the virtual patient is laid upon the platform during the virtual procedure. [0054] Referring to FIG. 4, the optimization process 110 can generate an optimized room layout 152 and surgical robotic arm configuration 154 that supports tool positions to have adequate reach within a defined workspace. The optimization process can have a hierarchy of operations based on a ranked order of importance. Blocks 140, 142, 144, and 146 can be performed sequentially in the order shown."; [0061] Referring to FIG. 5, one example is shown of an optimized room layout in plan view. The location of any of the virtual objects, such as but not limited to: virtual patient 226, surgical robotic platform 222, surgical robotic arms and attached tools 224, a virtual control tower 228, a user console 232, and other virtual surgical equipment 230 can be arranged in a virtual surgical environment 220. The arrangement of the virtual objects can be described with coordinates such as, but not limited to, an x, y, and/or z axis. The orientation can describe a direction of any of the objects.) Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the method of using a mixed reality surgical navigation system for medical procedures of AZIZIAN and Ryan by generating a virtual operating room layout based on procedure type as seen in Fuerst, because this modification would plan a surgical robotic workflow that describes an arrangement of surgical robotic arms in advance of a procedure ([0005] of Fuerst). AZIZIAN, Ryan, and Fuerst are understood to be silent on the remaining limitations of claim 1. In the same field of endeavor, Garcia Kilroy teaches receive an indicator of guidance type ([0021] In another variation, the virtual reality system may be used to test a control mode for a robotic surgical component.
For example, a method for testing a control mode for a robotic surgical component may include generating a virtual robotic surgical environment, the virtual robotic surgical environment comprising at least one virtual robotic component corresponding to the robotic surgical component, emulating a control mode for the robotic surgical component in the virtual robotic surgical environment, and, in response to a user input to move the at least one virtual robotic component, moving the at least one virtual robotic component in accordance with the emulated control mode. In some variations, moving the virtual robotic component may include passing status information regarding the at least one virtual robotic component from a first application (e.g., virtual operating environment application) to a second application (e.g., kinematics application), generating an actuation command based on the status information and the emulated control mode, passing the actuation command from the second application to the first application, and moving the at least one virtual robotic component in the virtual robotic surgical environment based on the actuation command. “;[0051] Generally, a user U may don the head-mounted display 220 and carry (or wear) at least one handheld controller 230 while he or she moves around a physical workspace, such as a training room. While wearing the head-mounted display 220, the user may view an immersive first-person perspective view of the virtual robotic surgical environment generated by the virtual reality processor 210 and displayed onto the immersive display 222. As shown in FIG. 2B, the view displayed onto the immersive display 222 may include one or more graphical representations 230′ of the handheld controllers (e.g., virtual models of the handheld controllers, virtual models of human hands in place of handheld controllers or holding handheld controllers, etc.). A similar first-person perspective view may be displayed onto an external display 240 (e.g., for assistants, mentors, or other suitable persons to view). As the user moves and navigates within the workspace, the virtual reality processor 210 may change the view of the virtual robotic surgical environment displayed on the immersive display 222 based at least in part on the location and orientation of the head-mounted display (and hence the user's location and orientation), thereby allowing the user to feel as if he or she is exploring and moving within the virtual robotic surgical environment.”; [0060] For example, the kinematics application 420 may allow for a description or definition of one or more virtual control modes, such as for the virtual robotic arms or other suitable virtual components in the virtual environment. Generally, for example, a control mode for a virtual robotic arm may correspond to a function block that enables the virtual robotic arm to perform or carry out a particular task. For example, as shown in FIG. 4, a control system 430 may include multiple virtual control modes 432, 434, 436, etc. governing actuation of at least one joint in the virtual robotic arm. The virtual control modes 432, 434, 436, etc. may include at least one primitive mode (which governs the underlying behavior for actuation of at least one joint) and/or at least one user mode (which governs higher level, task-specific behavior and may utilize one or more primitive modes). 
In some variations, a user may activate a virtual touchpoint surface of a virtual robotic arm or other virtual object, thereby triggering a particular control mode (e.g., via a state machine or other controller). In some variations, a user may directly select a particular control mode through, for example, a menu displayed in the first-person perspective view of the virtual environment. [0061] Examples of primitive virtual control modes include, but are not limited to, a joint command mode (which allows a user to directly actuate a single virtual joint individually, and/or multiple virtual joints collectively), a gravity compensation mode (in which the virtual robotic arm holds itself in a particular pose, with particular position and orientation of the links and joints, without drifting downward due to simulated gravity), and trajectory following mode (in which the virtual robotic arm may move to follow a sequence of one or more Cartesian or other trajectory commands). Examples of user modes that incorporate one or more primitive control modes include, but are not limited to, an idling mode (in which the virtual robotic arm may rest in a current or default pose awaiting further commands), a setup mode (in which the virtual robotic arm may transition to a default setup pose or a predetermined template pose for a particular type of surgical procedure), and a docking mode (in which the robotic arm facilitates the process in which the user attaches the robotic arm to a part, such as with gravity compensation, etc.”; [0085] In some variations, one or more sensors 750 may be configured to detect status of at least one robotic component (e.g., a component of a robotic surgical system, such as a robotic arm, a tool driver coupled to a robotic arm, a patient operating table to which a robotic arm is attached, a control tower, etc.) or other component of a robotic surgical operating room. Such status may indicate, for example, position, orientation, speed, velocity, operative state (e.g., on or off, power level, mode), or any other suitable status of the component.);and generate virtual guidance based on the kinematic information and the indicator of guidance type, the virtual guidance including a virtual image of the medical component disposed in a second configuration( 0060] For example, the kinematics application 420 may allow for a description or definition of one or more virtual control modes, such as for the virtual robotic arms or other suitable virtual components in the virtual environment. Generally, for example, a control mode for a virtual robotic arm may correspond to a function block that enables the virtual robotic arm to perform or carry out a particular task. For example, as shown in FIG. 4, a control system 430 may include multiple virtual control modes 432, 434, 436, etc. governing actuation of at least one joint in the virtual robotic arm. The virtual control modes 432, 434, 436, etc. may include at least one primitive mode (which governs the underlying behavior for actuation of at least one joint) and/or at least one user mode (which governs higher level, task-specific behavior and may utilize one or more primitive modes). In some variations, a user may activate a virtual touchpoint surface of a virtual robotic arm or other virtual object, thereby triggering a particular control mode (e.g., via a state machine or other controller). 
In some variations, a user may directly select a particular control mode through, for example, a menu displayed in the first-person perspective view of the virtual environment. [0065] Another example for controlling a virtual robotic arm is trajectory following for a robotic arm. In trajectory following, movement of the robotic arm may be programmed then emulated using the virtual reality system. Accordingly, when the system is used to emulate a trajectory planning control mode, the actuation command generated by a kinematics application may include generating an actuated command for each of a plurality of virtual joints in the virtual robotic arm. This set of actuated commands may be implemented by a virtual operating environment application to move the virtual robotic arm in the virtual environment, thereby allowing testing for collision, volume or workspace of movement, etc."; [0147] Generally, the virtual reality system may be used in any suitable scenario in which it is useful to simulate or replicate a robotic surgical environment. In some variations, the virtual reality system may be used for training purposes, such as allowing a surgeon to practice controlling a robotic surgical system and/or to practice performing a particular kind of minimally-invasive surgical procedure using a robotic surgical system. The virtual reality system may enable a user to better understand the movements of the robotic surgical system in response to user commands, both inside and outside the patient. For example, a user may don a head-mounted display under supervision of a mentor or trainer who may view the virtual reality environment alongside the user (e.g., through a second head-mounted display, through an external display, etc.) and guide the user through operations of a virtual robotic surgical system within the virtual reality environment. As another example, a user may don a head-mounted display and may view, as displayed on the immersive display (e.g., in a content window, the HUD, etc.) a training-related video such as a recording of a previously performed surgical procedure.") Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the method of using a mixed reality surgical navigation system for medical procedures of AZIZIAN, Ryan, and Fuerst by selecting an indicator of guidance type as taught by Garcia Kilroy, because this modification would allow a surgeon to practice controlling a robotic surgical system and/or to practice performing a particular kind of minimally-invasive surgical procedure using a robotic surgical system ([0147] of Garcia Kilroy). Thus, the combination of AZIZIAN, Ryan, Fuerst and Garcia Kilroy teaches a system comprising: a processor; and a memory having computer readable instructions stored thereon, the computer readable instructions, when executed by the processor, cause the system to: receive an image of a medical environment; identify a medical component in the image of the medical environment, the medical component disposed in a first configuration; receive kinematic information about the medical component; receive an indicator of guidance type; and generate virtual guidance based on the kinematic information and the indicator of guidance type, the virtual guidance including a virtual image of the medical component disposed in a second configuration.
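The combination summary above strings together the claim 1 elements drawn from the four references. As a reading aid only, the following Python sketch lays out that data flow in one place; every type, function, and value here is hypothetical and is not taken from the application or from AZIZIAN, Ryan, Fuerst, or Garcia Kilroy.

```python
# Schematic illustration (not from the application or any cited reference)
# of the claim 1 data flow; all names and structures are hypothetical.
from dataclasses import dataclass
from typing import List

@dataclass
class Pose:
    joint_angles: List[float]          # kinematic state of the manipulator

@dataclass
class VirtualGuidance:
    guidance_type: str                 # e.g. "setup" or "docking" (cf. Garcia Kilroy's user modes)
    current_pose: Pose                 # first configuration, as identified in the image
    target_pose: Pose                  # second configuration, shown as a virtual image

def identify_medical_component(image: bytes) -> Pose:
    """Stand-in for detecting the medical component and its first configuration in the image."""
    return Pose(joint_angles=[0.0, 0.0, 0.0])

def generate_virtual_guidance(image: bytes,
                              kinematics: Pose,
                              guidance_type: str) -> VirtualGuidance:
    """Combine the detected configuration, kinematic information, and the
    requested guidance type into a virtual image of a second configuration."""
    current = identify_medical_component(image)
    # Toy "second configuration": offset each joint slightly (assumed limits ignored).
    target = Pose(joint_angles=[a + 0.1 for a in kinematics.joint_angles])
    return VirtualGuidance(guidance_type, current, target)

# Example use with made-up inputs.
guidance = generate_virtual_guidance(b"", Pose([0.2, 1.1, -0.4]), "setup")
print(guidance.target_pose.joint_angles)
```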
Regarding claim 5, AZIZIAN, Ryan, Fuerst and Garcia Kilroy teach the system of claim 1, wherein receiving the image includes receiving the image from a mobile device(see at least: [0022],[0038] of AZIZIAN “At a process 108, the three-dimensionally mapped endoscopic image dataset is used to generate and display an image from a vantage point that may assist the surgeon. For example, an initial vantage point may be the viewpoint from the distal end of the endoscope within the patient anatomy (i.e., the conventional endoscopic view from the endoscope). The initial vantage point image may be constructed based on the three dimensional point cloud. FIG. 3 illustrates a display 150 visible by a surgeon at the operator console or on another display screen. The display 150 includes an endoscopic image 152 of a surgical environment including instruments 156, 157, 158.; [0102] of Ryan “FIG. 18 depicts an exemplary embodiment of a MXUI shown to the user 106 via the display device 104 (e.g., the AR headset 3600) showing the calibration assembly 1500 being used for various calibration steps. First, the hip impactor assembly 1100 can be screwed into the appropriate hole of the plate 1502 so that the shoulder 1206 is seated squarely without play against the surface of the plate 1502. The cameras 3904 of the AR headset 3600 can then capture images which processed by an algorithm to determine the relationship between the shoulder of the impactor on which the acetabular shell will seat and the marker 1104 of the hip impactor assembly 1100.”; [0113] FIG. 28 depicts a flowchart showing how the system 10 and its display device 104 (e.g., the AR headset 3600) can be used in conjunction with the C-arm 2700 in a surgical procedure. The camera 3904 (e.g., a high definition camera or the like) incorporated in the AR headset 3600 can be used to capture the image displayed on the C-arm monitor (2800)….”; [0041] of Fuerst “ In one embodiment, the VR simulation can include virtual equipment 115 such as a surgical robotic platform, a control tower, a user console, scanning equipment (e.g., mobile X-ray machine (C-arm) or ultrasound imaging machine), stands, stools, trays, and other equipment that would be arranged in a surgical operating room. The virtual equipment can be generated based on computer models that define parameters such as shape and size of the various equipment.) In addition, the same motivation is used as the rejection for claim 1. Regarding claim 6, AZIZIAN, Ryan, Fuerst and Garcia Kilroy teach the system of claim 1, wherein receiving the image includes receiving the image from a camera system mounted in the medical environment (see at least: [0025]; [0027],[0030],[0038] of AZIZIAN “Endoscopic imaging systems (e.g., systems 15, 28) may be provided in a variety of configurations including rigid or flexible endoscopes. Rigid endoscopes may include a rigid tube, housing a relay lens system, for transmitting an image from a distal end to a proximal end of the endoscope. Flexible endoscopes may transmit images using one or more flexible optical fibers. Digital image based endoscopes have a "chip on the tip" design in which a distal digital sensor such as a one or more charge-coupled device (CCD) or a complementary metal oxide semiconductor (CMOS) device acquire image data and acquired image data can be transferred over a wired or wireless interface. Endoscopic imaging systems may provide two- or three- dimensional images to the viewer. Two-dimensional images may provide limited depth perception. 
Three-dimensional stereo endoscopic images may provide the viewer with more accurate depth perception. Stereo endoscopic instruments employ stereo cameras to capture stereo images of the patient anatomy. An endoscopic instrument may be a fully sterilizable assembly with the endoscope cable, handle and shaft all rigidly coupled and hermetically sealed.”; [0070] of Ryan “Referring to FIGS. 1, 2A-B, and 3, a sensory augmentation system 10 of the present invention is provided for use in medical procedures. The system 10 includes one or more visual markers (100, 108, 110), a processing unit 102, a sensor suite 210 having one or more tracking camera(s) 206, and a display device 104 having a display generator 204 that generates a visual display on the display device 104 for viewing by the user 106. The display device 104 is attached to a user 106 such that the display device 104 can augment his visual input. In one preferred embodiment, the display device 104 is attached to the user's 106 head. Alternatively, the display device 104 is located separately from the user 106, while still augmenting the visual scene. In one embodiment, each of the markers (100, 108, and 110) is distinct and different from each other visually so they can be individually tracked by the camera(s) 206) In addition, the same motivation is used as the rejection for claim 1. Regarding claim 10, AZIZIAN, Ryan, Fuerst and Garcia Kilroy teach the system of claim 1, wherein receiving kinematic information about the medical component includes receiving sensor information from the medical component (see at least: [0038] of AZIZIAN; [0079] of Ryan “Referring to FIG. 5, the one or more cameras 500, 506 of the sensor suites (400, 422, 210, and 306) and the one or more visual markers 502, 504 are used to visually track a distinct object (e.g., a surgical tool, a desired location within an anatomical object, etc.) and determine attitude and position relative to the user 106. In one embodiment, each of the one or more markers is distinct and different from each other visually. Standalone object recognition and machine vision technology can be used for marker recognition. Alternatively, the present invention also provides for assisted tracking using IMUs 408 on one or more objects of interest, including but not limited to the markers 502, 504. Please note that the one or more cameras 500, 506 can be remotely located from the user 106 and provide additional data for tracking and localization”) In addition, the same motivation is used as the rejection for claim 1. Regarding claim 14, AZIZIAN, Ryan, Fuerst and Garcia Kilroy teach the system of claim 1, wherein the virtual image includes a virtual image of an auxiliary component (see at least: [0038] of AZIZIAN; [0094]; [0096] of Ryan “FIG. 7 depicts an alternate view of the MXUI previously shown in FIG. 6, wherein a virtual target 700 and a virtual tool 702 are presented to the user 106 for easy use in achieving the desired version and inclination. In this embodiment, further combinations of virtual reality are used to optimize the natural feeling experience for the user by having a virtual target 700 with actual tool 702 fully visible or a virtual tool (not shown) with virtual target fully visible. Other combinations of real and virtual imagery can optionally be provided. Presentation of data can be in readable form 704 or in the form of imagery including but not limited to 3D representations of tools or other guidance forms.”; [0061] of Fuerst Referring to FIG. 
5, one example is shown of an optimized room layout in plan view. The location of any of the virtual objects, such as but not limited to: virtual patient 226, surgical robotic platform 222, surgical robotic arms and attached tools 224, a virtual control tower 228, a user console 232, and other virtual surgical equipment 230 can be arranged in a virtual surgical environment 220. The arrangement of the virtual objects can be described with coordinates such as, but not limited to, an x, y, and/or z axis. The orientation can describe a direction of any of the objects.) In addition ,the same motivation is used as the rejection for claim 1. Regarding claim 17, AZIZIAN, Ryan, Fuerst and Garcia Kilroy teach the system of claim 1, wherein displaying the virtual image includes overlaying the virtual image on an image of the medical component in the first configuration (see at least: [0038] of AZIZIAN; [0094]-[0096] of Ryan “FIG. 7 depicts an alternate view of the MXUI previously shown in FIG. 6, wherein a virtual target 700 and a virtual tool 702 are presented to the user 106 for easy use in achieving the desired version and inclination. In this embodiment, further combinations of virtual reality are used to optimize the natural feeling experience for the user by having a virtual target 700 with actual tool 702 fully visible or a virtual tool (not shown) with virtual target fully visible. Other combinations of real and virtual imagery can optionally be provided. Presentation of data can be in readable form 704 or in the form of imagery including but not limited to 3D representations of tools or other guidance forms.” where real medical component (tool 702) in a first setting (current position) and a virtual image (virtual target 700)). In addition, the same motivation is used as the rejection for claim 1. Regarding claim 36, AZIZIAN, Ryan, Fuerst and Garcia Kilroy teach the system of claim 1, further comprising a display system configured to display the virtual image (see at least: [0038] of AZIZIAN “At a process 108, the three-dimensionally mapped endoscopic image dataset is used to generate and display an image from a vantage point that may assist the surgeon. For example, an initial vantage point may be the viewpoint from the distal end of the endoscope within the patient anatomy (i.e., the conventional endoscopic view from the endoscope). The initial vantage point image may be constructed based on the three dimensional point cloud. FIG. 3 illustrates a display 150 visible by a surgeon at the operator console or on another display screen. The display 150 includes an endoscopic image 152 of a surgical environment including instruments 156, 157, 158. The image 152 is generated from the mapped endoscopic image dataset.”; [0070] of Ryan “Referring to FIGS. 1, 2A-B, and 3, a sensory augmentation system 10 of the present invention is provided for use in medical procedures. The system 10 includes one or more visual markers (100, 108, 110), a processing unit 102, a sensor suite 210 having one or more tracking camera(s) 206, and a display device 104 having a display generator 204 that generates a visual display on the display device 104 for viewing by the user 106. The display device 104 is attached to a user 106 such that the display device 104 can augment his visual input. In one preferred embodiment, the display device 104 is attached to the user's 106 head. Alternatively, the display device 104 is located separately from the user 106, while still augmenting the visual scene. 
In one embodiment, each of the markers (100, 108, and 110) is distinct and different from each other visually so they can be individually tracked by the camera(s) 206.”;Fig. 5 of Fuerst; 5B of Garcia Kilroy) In addition, the same motivation is used as the rejection for claim 1. Regarding claim 37, AZIZIAN, Ryan, Fuerst and Garcia Kilroy teach the system of claim 1, further comprising a robot-assisted manipulator assembly configured for operating a medical instrument in the medical environment (see at least: [0022], [0027],[0038] of AZIZIAN; [0003]; [0102] of Ryan ; [0037] of Fuerst “Thus, the kinematics modeling can benefit from physical feedback as well as computer-generated restraints to generate a kinematics model that resembles real-life kinematics of the robotic arm. Beneficially, by using real data from a physical robotic arm, the kinematics parameters can provide accuracy in determining the arrangement of the virtual robotic arms relative to the workspace and virtual patient. Accordingly, based on the movement constraints and capabilities of a physical surgical robotic arm and/or tool, the virtual robotic arm can be arranged relative to the virtual patient in the VR simulation to enable proper movements within the workspace of the virtual patient, as required by a particular surgical procedure. [0059], [0084-0085] of Garcia Kilroy) In addition, the same motivation is used as the rejection for claim 1. Regarding claim 39, AZIZIAN, Ryan, Fuerst and Garcia Kilroy teach the system of claim 1, wherein the indicator of guidance type is an indicator of a mode of operation of the medical component (see at least: [0038] of Fuerst “In one embodiment, an optimization procedure 110 is performed to arrange the virtual objects (e.g., the patient, the robotic arm and tool, the platform, the user console, the control tower, and other surgical equipment) based on one or more inputs and changes to inputs. For example, inputs can workspace, patient geometry, operating room, procedure type, platform height, etc. Changes to inputs can include a modification to a virtual patient, a change to a procedure, a change to the operating room, a change to platform height, and/or any change that would modify a location or size of a workspace in a virtual patient. In some cases, a user interface can allow a user to manually move a virtual object in a VR simulation while wearing a headset having a head-up display. The user can be immersed in the VR simulation. Based on one or more inputs, the optimization procedure can rearrange the virtual objects and re-optimize.”; [0021] of Garcia Kilroy In another variation, the virtual reality system may be used to test a control mode for a robotic surgical component. For example, a method for testing a control mode for a robotic surgical component may include generating a virtual robotic surgical environment, the virtual robotic surgical environment comprising at least one virtual robotic component corresponding to the robotic surgical component, emulating a control mode for the robotic surgical component in the virtual robotic surgical environment, and, in response to a user input to move the at least one virtual robotic component, moving the at least one virtual robotic component in accordance with the emulated control mode. 
In some variations, moving the virtual robotic component may include passing status information regarding the at least one virtual robotic component from a first application (e.g., virtual operating environment application) to a second application (e.g., kinematics application), generating an actuation command based on the status information and the emulated control mode, passing the actuation command from the second application to the first application, and moving the at least one virtual robotic component in the virtual robotic surgical environment based on the actuation command. “;[0051] Generally, a user U may don the head-mounted display 220 and carry (or wear) at least one handheld controller 230 while he or she moves around a physical workspace, such as a training room. While wearing the head-mounted display 220, the user may view an immersive first-person perspective view of the virtual robotic surgical environment generated by the virtual reality processor 210 and displayed onto the immersive display 222. As shown in FIG. 2B, the view displayed onto the immersive display 222 may include one or more graphical representations 230′ of the handheld controllers (e.g., virtual models of the handheld controllers, virtual models of human hands in place of handheld controllers or holding handheld controllers, etc.). A similar first-person perspective view may be displayed onto an external display 240 (e.g., for assistants, mentors, or other suitable persons to view). As the user moves and navigates within the workspace, the virtual reality processor 210 may change the view of the virtual robotic surgical environment displayed on the immersive display 222 based at least in part on the location and orientation of the head-mounted display (and hence the user's location and orientation), thereby allowing the user to feel as if he or she is exploring and moving within the virtual robotic surgical environment.”; . [0061] Examples of primitive virtual control modes include, but are not limited to, a joint command mode (which allows a user to directly actuate a single virtual joint individually, and/or multiple virtual joints collectively), a gravity compensation mode (in which the virtual robotic arm holds itself in a particular pose, with particular position and orientation of the links and joints, without drifting downward due to simulated gravity), and trajectory following mode (in which the virtual robotic arm may move to follow a sequence of one or more Cartesian or other trajectory commands). Examples of user modes that incorporate one or more primitive control modes include, but are not limited to, an idling mode (in which the virtual robotic arm may rest in a current or default pose awaiting further commands), a setup mode (in which the virtual robotic arm may transition to a default setup pose or a predetermined template pose for a particular type of surgical procedure), and a docking mode (in which the robotic arm facilitates the process in which the user attaches the robotic arm to a part, such as with gravity compensation, etc.”) In addition, the same motivation is used as the rejection for claim 1. 2. Claim 3 is rejected under 35 U.S.C. 
103 as being unpatentable over AZIZIAN et al, WO2019/006028 (“AZIZIAN”) in view of Ryan et al, U.S Patent Application Publication No.2018/0049622 (“Ryan”) further in view of Fuerst et al., U.S Patent Application Publication No.20210121233 (“Fuerst”) further in view of Garcia Kilroy et al, U.S Patent Application Publication No.20190005848 (“Garcia Kilroy”) further in view of DONHOWE et al, IDS, WO-2014106253 (“DONHOWE”) Regarding claim 3, AZIZIAN, Ryan, Fuerst and Garcia Kilroy teach the system of claim 1, wherein the computer readable instructions, when executed by the processor, further cause the system to: provide an evaluation of an implementation compared to the virtual guidance ([0079] of Fuerst “At operation 278, a compatibility determination can be performed, describing how compatible a physical OR is for a surgical robotic system and/or procedure. The determination can show how much capacity the OR has to support the virtual surgical robotic system. In one embodiment, the determination can be based on whether the virtual equipment can be reliably moved in and out of the virtual OR, which, in turn, reflects whether the physical equipment can reliably be moved in and out of the physical OR. Compatibility score also be based on comparing the sizes of virtual equipment (e.g., the virtual robotic arm, platform, user console, and control tower), with entrances or pathways of the virtual OR.”; [0092] of Garcia Kilroy As yet another example, the one or more sensors in the robotic surgical environment may be used to compare an actual surgical procedure (occurring in the non-virtual robotic surgical environment) with a planned surgical procedure as planned in a virtual robotic surgical environment. For example, an expected position of at least one robotic component (e.g., robotic arm) may be determined during surgical preplanning, as visualized as a corresponding virtual robotic component in a virtual robotic surgical environment. During an actual surgical procedure, one or more sensors may provide information about a measured position of the actual robotic component. Any differences between the expected and measured position of the robotic component may indicate deviations from a surgical plan that was constructed in the virtual reality environment. Since such deviations may eventually result in undesired consequences (e.g., unintended collisions between robotic arms, etc.), identification of deviations may allow the user to adjust the surgical plan accordingly (e.g., reconfigure approach to a surgical site, change surgical instruments, etc.).) AZIZIAN, Ryan, Fuerst and Garcia Kilroy are understood to be silent on the remaining limitations of claim 3. In the same field of endeavor, DONHOWE teaches wherein the computer readable instructions, when executed by the processor, further cause the system to: provide an evaluation of an implementation compared to the virtual guidance (see at least page 16, last paragraph - page 17, first and second paragraphs “At 408, a planned deployment location for the interventional instrument is located. The planned deployment location may be marked on the model of the plurality of passageways. The planned deployment location can be selected based upon the instrument operational capability information, the target structure information, the patient anatomy information, or a combination of the types of information. The selected deployment location may be at a point in an anatomic passageway nearest to the target structure.
However, in many patients a nearest point deployment location may be impossible for the distal end of the interventional instrument to reach because the instrument has insufficient bend capability within the size and elasticity constraints of the selected anatomic passageway. A more suitable deployment location may be at a point on an anatomic passageway wall where the interventional instrument has an approach angle to the passageway wall that is within the bending capability of the instrument. For example, if the interventional instrument has an inflexible distal end that permits little or no bending, a suitable deployment location may be at a carina near the target structure. At the carina the interventional instrument may be deployed at an approximately 90° approach angle to the passageway wall with minimal bending of the distal end of the instrument. As another example, the navigation planning module may select a deployment location such that the approach angle is between approximately 30° and 90°. When selecting a deployment location, the planning system also confirms that the interventional tool is capable of extending from the catheter a sufficient distance to reach the target structure to perform the interventional procedure. As described, the planned deployment location may be located based on the analysis of the instrument operational capability, the target structure, and the patient anatomy. Alternatively or in combination with the system assessment, the planned deployment location may be identified by a clinician and communicated to the navigation planning module to locate or mark the clinician-identified planned deployment location in the model. When the navigation planning module receives the clinician-identified planned deployment location, the module may compare it with the system-identified deployment location. A visual or audible feedback cue may be issued if the clinician-identified deployment location is objectionable (e.g., "The chosen biopsy needle is not long enough to reach the target from this deployment location.") Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the method of using a mixed reality surgical navigation system for a medical procedure of AZIZIAN, Ryan, Fuerst and Garcia Kilroy with comparing the clinician-identified and system-identified planned deployment locations as seen in DONHOWE because this modification would provide appropriate visual or audible feedback (see page 17, second paragraph of DONHOWE). Thus, the combination of AZIZIAN, Ryan, Fuerst, Garcia Kilroy and DONHOWE teaches wherein the computer readable instructions, when executed by the processor, further cause the system to: provide an evaluation of an implementation compared to the virtual guidance. 3. Claims 7-9, 38 are rejected under 35 U.S.C.
103 as being unpatentable over AZIZIAN et al, WO2019/006028 (“AZIZIAN”) in view of Ryan et al, U.S Patent Application Publication No.2018/0049622 (“Ryan”) further in view of Fuerst et al., U.S Patent Application Publication No.20210121233 (“Fuerst”) further in view of Garcia Kilroy et al, U.S Patent Application Publication No.20190005848 (“Garcia Kilroy”) further in view of DUINDAM et al, IDS, WO-2018129532 (“DUINDAM”) Regarding claim 7, AZIZIAN, Ryan, Fuerst and Garcia Kilroy teach the system of claim 1, wherein the image has an image frame of reference and the medical component has a component frame of reference and wherein the computer readable instructions, when executed by the processor, further cause the system to register the image frame of reference to the component frame of reference (see [0043] of AZIZIAN “Optionally, the display 170 includes an anatomic model image 174 generated from the anatomic model dataset. In this example, the endoscopic image 172 is overlaid, superimposed, blended or otherwise mapped onto the anatomic model image 174. The display 170 also includes an illustration 176 of the instrument 156 overlaid or superimposed on the anatomic model image 174. The illustrated instrument 176 may be generated based on pose and/or poses of instrument 156 known or determined from kinematic chain, position or shape sensor information, vision-based tracking or a combination of those. The display 170 thus provides the surgeon with additional environmental context beyond that provided by the endoscopic image 172 alone. For example, the anatomic model image 174 illustrates additional organs and bones not visible on the endoscopic image 172, and the instrument illustration 176 provides information about instrument trajectories not visible on the endoscopic image. Optionally, the display 170 includes an illustration 178 of the surgical environment external of the patient anatomy. In this example, display 170 includes the instrument illustration 176 which shows the instrument extending through an image of the patient's anatomic wall 180. The image of the patient's anatomic wall may be part of the anatomic model image 174. External to the patient's anatomic wall 174 in the external environment illustration 178 is an illustration 182 of the proximal end of the instrument 156 extending outside of the patient anatomy. The proximal end of the instrument 156 is coupled to a teleoperational manipulator arm 184 (e.g., manipulator arm 51). The external environment illustration 178 may be generated based on instrument and manipulator arm positions known or determined from kinematic chain, position or shape sensor information, visual tracking based on a camera external to the patient anatomy or a combination of those. For example, with the teleoperational manipulator, the patient, the endoscopic system, and the anatomic model all registered to a common surgical coordinate system, a composite virtual image of the surgical environment beyond the vantage point endoscopic image can be generated based on the anatomic model dataset and known kinematic and/or structural relationships of the components of the teleoperational system. The display 170 thus provides the surgeon with additional environmental context beyond that provided by the endoscopic image 172 alone. From the third vantage point of FIG. 
5, the surgeon is able to observe a more expansive intra- and extra-anatomical area while still maintaining awareness of the view from the distal end of the endoscope.”; [0092] of Ryan; [0062] of Garcia Kilroy) In addition, the same motivation is used as the rejection for claim 1. AZIZIAN, Ryan, Fuerst and Garcia Kilroy are understood to be silent on the remaining limitations of claim 7. In the same field of endeavor, DUINDAM teaches wherein the image has an image frame of reference and the medical component has a component frame of reference and wherein the computer readable instructions, when executed by the processor, further cause the system to register the image frame of reference to the component frame of reference (see at least [0099] as shown in Fig. 9 “Referring back to operation 912 of FIG. 9, the computing system performing segmentation identifies these fiducial features (e.g., the axial support structure 1006, the control ring 1008, the tip ring 1010, the tool lumen 1011, the medical tool 1012, the shape sensor lumen 1014, the shape sensor 1016) in the intraoperative image data to identify one or more portions of the medical instrument and uses the position, orientation, and/or pose of the medical instrument determined from the shape data obtained above to register the instrument reference frame to the intraoperative image reference frame. The registration may rotate, translate, or otherwise manipulate by rigid or non-rigid transforms points associated with the segmented shape and points associated with the sensed shape data. This registration between the instrument and intraoperative image frames of reference may be achieved, for example, by using an ICP technique or another point cloud registration technique. Alternatively, registration may be performed by matching and registering feature points within instrument and image point clouds where point correspondences are determined from shape similarity in some feature space. In some embodiments, the segmented shape of the medical instrument is registered to the shape data in the shape sensor frame and the associated transform (a vector applied to each of the points in the segmented shape to align with the shape data in the instrument reference frame) may then be applied to the entire three-dimensional image and/or to subsequently obtained three-dimensional images during the medical procedure. The transform may be a 6DOF transform, such that the shape data may be translated or rotated in any or all of X, Y, and Z and pitch, roll, and yaw.”) Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the method of using a mixed reality surgical navigation system for a medical procedure of AZIZIAN, Ryan, Fuerst and Garcia Kilroy with registering the instrument reference frame to the image reference frame as seen in DUINDAM because this modification would use registered real-time images and prior-time anatomic images during an image-guided procedure ([0002] of DUINDAM). Thus, the combination of AZIZIAN, Ryan, Fuerst, Garcia Kilroy and DUINDAM teaches wherein the image has an image frame of reference and the medical component has a component frame of reference and wherein the computer readable instructions, when executed by the processor, further cause the system to register the image frame of reference to the component frame of reference.
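For illustration only, the following minimal sketch outlines the kind of rigid, six-degree-of-freedom registration DUINDAM describes, in which corresponded points expressed in the instrument (component) frame and in the intraoperative image frame yield a rotation and translation that can then be applied to subsequently acquired shape data. The sketch is not drawn from DUINDAM or any other cited reference; the NumPy-based helper names and the use of a closed-form SVD (Kabsch) alignment, which is the step an ICP technique repeats after re-estimating correspondences, are assumptions made solely for this example.

import numpy as np

def rigid_register(instrument_pts, image_pts):
    # Closed-form (SVD/Kabsch) estimate of the rotation and translation that map
    # corresponded (N, 3) points from the instrument/component frame onto points
    # in the intraoperative image frame -- the step an ICP technique repeats once
    # correspondences are fixed.
    src_centroid = instrument_pts.mean(axis=0)
    dst_centroid = image_pts.mean(axis=0)
    p = instrument_pts - src_centroid
    q = image_pts - dst_centroid
    u, _, vt = np.linalg.svd(p.T @ q)
    d = np.sign(np.linalg.det(vt.T @ u.T))   # guard against a reflection
    rot = vt.T @ np.diag([1.0, 1.0, d]) @ u.T
    trans = dst_centroid - rot @ src_centroid
    return rot, trans                        # together, a 6DOF rigid transform

def to_image_frame(points, rot, trans):
    # Apply the registration to any instrument-frame points, e.g., the whole
    # sensed shape or shape data acquired later in the procedure.
    return points @ rot.T + trans

A feature-matching registration, as DUINDAM alternatively describes, would instead fix the correspondences from shape similarity before running the same alignment step.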
Regarding claim 8, AZIZIAN, Ryan, Fuerst, Garcia Kilroy and DUINDAM teach the system of claim 7, wherein registering the image frame of reference to the component frame of reference includes identifying a fiducial portion of the medical component in both the image frame of reference and the component frame of reference (see at least: [0043] of AZIZIAN “... For example, with the teleoperational manipulator, the patient, the endoscopic system, and the anatomic model all registered to a common surgical coordinate system, a composite virtual image of the surgical environment beyond the vantage point endoscopic image can be generated based on the anatomic model dataset and known kinematic and/or structural relationships of the components of the teleoperational system. The display 170 thus provides the surgeon with additional environmental context beyond that provided by the endoscopic image 172 alone. From the third vantage point of FIG. 5, the surgeon is able to observe a more expansive intra- and extra-anatomical area while still maintaining awareness of the view from the distal end of the endoscope.”; [0092], [0105], [018] of Ryan; [0062] of Garcia Kilroy; [0099] as shown in Fig. 9 of DUINDAM “Referring back to operation 912 of FIG. 9, the computing system performing segmentation identifies these fiducial features (e.g., the axial support structure 1006, the control ring 1008, the tip ring 1010, the tool lumen 1011, the medical tool 1012, the shape sensor lumen 1014, the shape sensor 1016) in the intraoperative image data to identify one or more portions of the medical instrument and uses the position, orientation, and/or pose of the medical instrument determined from the shape data obtained above to register the instrument reference frame to the intraoperative image reference frame. The registration may rotate, translate, or otherwise manipulate by rigid or non-rigid transforms points associated with the segmented shape and points associated with the sensed shape data. This registration between the instrument and intraoperative image frames of reference may be achieved, for example, by using an ICP technique or another point cloud registration technique. Alternatively, registration may be performed by matching and registering feature points within instrument and image point clouds where point correspondences are determined from shape similarity in some feature space. In some embodiments, the segmented shape of the medical instrument is registered to the shape data in the shape sensor frame and the associated transform (a vector applied to each of the points in the segmented shape to align with the shape data in the instrument reference frame) may then be applied to the entire three-dimensional image and/or to subsequently obtained three-dimensional images during the medical procedure. The transform may be a 6DOF transform, such that the shape data may be translated or rotated in any or all of X, Y, and Z and pitch, roll, and yaw.”) In addition, the same motivation is used as the rejection for claim 7. Regarding claim 9, AZIZIAN, Ryan, Fuerst, Garcia Kilroy and DUINDAM teach the system of claim 7, wherein the computer readable instructions, when executed by the processor, further cause the system to display the virtual image in the image frame of reference (see at least: [0043] of AZIZIAN; [0094], [0105], [0108] of Ryan “In one exemplary embodiment of the present invention and referring to FIG.
6, the system 10 is used for hip replacement surgery wherein a first marker 600 is attached via a fixture 602 to a pelvis 604 and a second marker 606 is attached to an impactor 608. The user 106 can see the mixed reality user interface image (“MXUI”) shown in FIG. 6 via the display device 104. The MXUI provides stereoscopic virtual images of the pelvis 604 and the impactor 604 in the user's field of view during the hip replacement procedure ;; [0112] of DUINDAM “ Referring to operation 918, optionally, the computing system may display a representation of the preoperative model as updated in operation 916. FIG. 1 1 depicts a composite image 1 100 that includes a representation of the preoperative model according to examples of the present disclosure displayed on a display 1 10 such as that of FIG. 1. Image 1 100 may be substantially similar to image 800 in many regards. The composite image 1 100 includes a surface model 1 102 of a passageway (e.g., a bronchial passage) in which the medical instalment is located generated from the preoperative model. An internal perspective may be provided to the operator O to facilitate image guided medical procedures. The internal perspective presents a view of the model 1102 from the perspective of the distal tip of the medical instrument. For reference, a rendering 1104 of the medical instrument may be displayed. If a medical tool has been extended through the medical instrument, a rendering 1106 of the medical tool may be displayed. [0113] The composite image 1100 further includes an image of the target 1108. The target 1108 may be rendered as an opaque object, while other tissues are rendered to be semi- transparent, such that the tumor can been seen through other tissues. For example, when the target 1108 is not co-located with a wall of the model 1102, the model 1102 may be rendered semi-transparently to permit a perspective view of the target 1108. The computing system may calculate a position and orientation of the distal tip of the instrument model 1104 and may display a trajectory vector 1110 extending from the distal tip of the instrument model 1104 to the target 1108. Other information and elements may be presented in the display 110 to the operator O, such a physiological information or control elements. For example, the display 110 may include the image 600 of FIG. 6 in a window 1112.[0114] Optionally, the operator O and/or teleoperational system may use the updated preoperative model to perform an image-guided medical procedure. For example, the operator O or teleoperational system may navigate the medical instrument within the patient anatomy by steering the medical instrument based on the image 1100. Because the medical instrument is registered to the image 1100, movement of the medical instrument with respect to the patient P can be visualized by displaying corresponding movements of the displayed medical instrument 1104 within the patient anatomy represented in the image 1100. Once the medical instrument is positioned near the target, the operator O may advance a medical tool, such as the medical tool 226 of FIG. 2, through the medical instrument. The operator O may use the medical tool to perform a procedure such as surgery, biopsy, ablation, illumination, irrigation, or suction on the target, and may visualize the movement and operation of the medical tool using the rendering 1106 thereof during the procedure.”) In addition, the same motivation is used as the rejection for claim 7. 
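For illustration only, and continuing the assumptions of the sketch above (the rot and trans values returned by the hypothetical rigid_register helper), the following lines show how a distal-tip position sensed in the component frame could be mapped into the registered image frame and used to derive a tip-to-target trajectory vector of the kind DUINDAM displays as element 1110; none of this code is taken from the cited references.

import numpy as np

def tip_trajectory_in_image_frame(tip_in_instrument, target_in_image, rot, trans):
    # Map the distal-tip position from the instrument frame into the registered
    # image frame, then return that position together with the tip-to-target
    # trajectory vector and the remaining distance -- quantities a display could
    # render as guidance alongside the composite image.
    tip_in_image = rot @ np.asarray(tip_in_instrument, dtype=float) + trans
    vec = np.asarray(target_in_image, dtype=float) - tip_in_image
    return tip_in_image, vec, float(np.linalg.norm(vec))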
Regarding claim 38, AZIZIAN, Ryan, Fuerst, Garcia Kilroy teach the system of claim 37, wherein the image has an image frame of reference registered to a manipulator frame of reference of the robot-assisted manipulator assembly (see [0043] of AZIZIAN “Optionally, the display 170 includes an anatomic model image 174 generated from the anatomic model dataset. In this example, the endoscopic image 172 is overlaid, superimposed, blended or otherwise mapped onto the anatomic model image 174. The display 170 also includes an illustration 176 of the instrument 156 overlaid or superimposed on the anatomic model image 174. The illustrated instrument 176 may be generated based on pose and/or poses of instrument 156 known or determined from kinematic chain, position or shape sensor information, vision-based tracking or a combination of those. The display 170 thus provides the surgeon with additional environmental context beyond that provided by the endoscopic image 172 alone. For example, the anatomic model image 174 illustrates additional organs and bones not visible on the endoscopic image 172, and the instrument illustration 176 provides information about instrument trajectories not visible on the endoscopic image. Optionally, the display 170 includes an illustration 178 of the surgical environment external of the patient anatomy. In this example, display 170 includes the instrument illustration 176 which shows the instrument extending through an image of the patient's anatomic wall 180. The image of the patient's anatomic wall may be part of the anatomic model image 174. External to the patient's anatomic wall 174 in the external environment illustration 178 is an illustration 182 of the proximal end of the instrument 156 extending outside of the patient anatomy. The proximal end of the instrument 156 is coupled to a teleoperational manipulator arm 184 (e.g., manipulator arm 51). The external environment illustration 178 may be generated based on instrument and manipulator arm positions known or determined from kinematic chain, position or shape sensor information, visual tracking based on a camera external to the patient anatomy or a combination of those. For example, with the teleoperational manipulator, the patient, the endoscopic system, and the anatomic model all registered to a common surgical coordinate system, a composite virtual image of the surgical environment beyond the vantage point endoscopic image can be generated based on the anatomic model dataset and known kinematic and/or structural relationships of the components of the teleoperational system. The display 170 thus provides the surgeon with additional environmental context beyond that provided by the endoscopic image 172 alone. From the third vantage point of FIG. 5, the surgeon is able to observe a more expansive intra- and extra-anatomical area while still maintaining awareness of the view from the distal end of the endoscope.”; [0094], [0105] [0108] of Ryan; [0025] of Fuers “A surgical robotic arm can have movable, jointed, and/or motorized members with multiple degrees of freedom that can hold various tools or appendages at distal ends. Example systems include the da Vinci® Surgical System which can be used for minimally invasive surgery (e.g., urologic surgical procedures, general laparoscopic surgical procedures, gynecologic laparoscopic surgical procedures, general non-cardiovascular thoracoscopic surgical procedures and thoracoscopically assisted cardiotomy procedures). 
A “virtual surgical robotic arm” can be a computer generated representation of a robotic arm rendered over the captured video of a user setup. The virtual surgical robotic arm can be a complex 3D model of the real robotic arm. Alternatively or additionally a virtual surgical robotic arm can include visual aids such as arrows, tool tips, or other representation relating to providing pose information about a robotic arm such as a geometrically simplified version of the real robotic arm.” [0034] Alternatively or additionally, the kinematics processor can generate the kinematic parameters based on robot control data (e.g. motor/actuator commands and feedback, servo commands, position data, and speed data) generated by robot controls 112. The robot control data can be generated during an actual surgical procedure using a real surgical robotic arm and/or surgical tool 108, or a simulated surgical procedure (e.g., a test run), also using a real surgical robotic arm and/or tool.”) In addition, the same motivation is used as the rejection for claim 1. AZIZIAN, Ryan, Fuerst, Garcia Kilroy are understood to be silent on the remaining limitations of claim 38. In the same field of endeavor, DUINDAM teaches wherein the image has an image frame of reference registered to a manipulator frame of reference of the robot-assisted manipulator assembly (see at least[0099] as shown in Fig. 9 of DUINDAM “ Referring back to operation 912 of FIG. 9, the computing system performing segmentation identifies these fiducial features (e.g., the axial support structure 1006, the control ring 1008, the tip ring 1010, the tool lumen 101 1, the medical tool 1012, the shape sensor lumen 1014, the shape sensor 1016) in the intraoperative image data to identify one or more portions of the medical instrument and uses the position, orientation, and/or pose of the medical instrument determined from the shape data obtained above to register the instrument reference frame to the intraoperative image reference frame. The registration may rotate, translate, or otherwise manipulate by rigid or non-rigid transforms points associated with the segmented shape and points associated with the sensed shape data. This registration between the instrument and intraoperative image frames of reference may be achieved, for example, by using an ICP technique or another point cloud registration technique. Alternatively, registration may be performed by matching and registering feature points within instrument and image point clouds where point correspondences are determined from shape similarity in some feature space. In some embodiments, the segmented shape of the medical instrument is registered to the shape data in the shape sensor frame and the associated transform (a vector applied to each of the points in the segmented shape to align with the shape data in the instrument reference frame) may then be applied to the entire three-dimensional image and/or to subsequently obtained three- dimensional images during the medical procedure. The transform may be a 6DOF transform, such that the shape data may be translated or rotated in any or all of X, Y, and Z and pitch, roll, and yaw.” …[0104] Referring next to operation 914, the preoperative image reference frame of the preoperative model is registered to the intraoperative image reference frame. 
As described above, the computing system may register the preoperative image reference frame to the instrument reference frame in operation 908 and register the instrument reference frame to the intraoperative image reference frame in operation 912. Accordingly, registration of the preoperative image reference frame to the intraoperative image reference frame may be performed using the common frame of reference, i.e., the instrument reference frame”). In addition, the same motivation is used as the rejection for claim 7. Thus, the combination of AZIZIAN, Ryan, Fuerst, Garcia Kilroy and DUINDAM teaches wherein the image has an image frame of reference registered to a manipulator frame of reference of the robot-assisted manipulator assembly. 4. Claim 11 is rejected under 35 U.S.C. 103 as being unpatentable over AZIZIAN et al, WO2019/006028 (“AZIZIAN”) in view of Ryan et al, U.S Patent Application Publication No.2018/0049622 (“Ryan”) further in view of Fuerst et al., U.S Patent Application Publication No.20210121233 (“Fuerst”) further in view of Garcia Kilroy et al, U.S Patent Application Publication No.20190005848 (“Garcia Kilroy”) further in view of KOENIG et al, U.S Patent Application Publication No.2018/0079090 (“KOENIG”) further in view of White et al, U.S Patent Application Publication No.2018/0082480 (“White”) Regarding claim 11, AZIZIAN, Ryan, Fuerst and Garcia Kilroy teach the system of claim 1, including a second configuration and a virtual image that includes the medical component being arranged in the second configuration (see at least [0022]; [0027] of AZIZIAN FIG. 1B is a perspective view of one embodiment of a teleoperational assembly 12 which may be referred to as a patient side cart. The patient side cart 12 shown provides for the manipulation of three surgical tools 30a, 30b, 30c (e.g., instrument systems 14) and an imaging device 28 (e.g., endoscopic imaging system 15), such as a stereoscopic endoscope used for the capture of images of the site of the procedure. The imaging device may transmit signals over a cable 56 to the control system 20. Manipulation is provided by teleoperative mechanisms having a number of joints. The imaging device 28 and the surgical tools 30a-c can be positioned and manipulated through incisions in the patient or through a natural orifice (e.g., oral cavity) so that a kinematic remote center is maintained at the entry to minimize the size of the incision or to avoid damage to the natural orifice boundaries. Images of the surgical environment within the patient anatomy can include images of the distal ends of the surgical tools 30a-c when they are positioned within the field-of-view of the imaging device 28. [0028] The patient side cart 22 includes a drivable base 58. The drivable base 58 is connected to a telescoping column 57, which allows for adjustment of the height of the arms 54. The arms 54 may include a rotating joint 55 that both rotates and moves up and down. Each of the arms 54 may be connected to an orienting platform 53. The orienting platform 53 may be capable of 360 degrees of rotation. The patient side cart 22 may also include a telescoping horizontal cantilever 52 for moving the orienting platform 53 in a horizontal direction. [0038]; [0096] of Ryan “FIG. 7 depicts an alternate view of the MXUI previously shown in FIG. 6, wherein a virtual target 700 and a virtual tool 702 are presented to the user 106 for easy use in achieving the desired version and inclination.
In this embodiment, further combinations of virtual reality are used to optimize the natural feeling experience for the user by having a virtual target 700 with actual tool 702 fully visible or a virtual tool (not shown) with virtual target fully visible. Other combinations of real and virtual imagery can optionally be provided. Presentation of data can be in readable form 704 or in the form of imagery including but not limited to 3D representations of tools or other guidance forms”; Fig. 5 of Fuerst, Fig. 5B of Garcia Kilroy) In addition, the same motivation is used as the rejection for claim 1. AZIZIAN, Ryan, Fuerst and Garcia Kilroy are understood to be silent on the remaining limitations of claim 11. In the same field of endeavor, KOENIG teaches wherein the second configuration is a stowage configuration (see at least [0066] The various links in the robotic arm may be arranged in any number of predetermined configurations for different purposes. For instance, a robotic arm (e.g., a variation with offset axes for spherical roll, spherical pitch, and instrument rotation, as described above with reference to FIG. 1F) may be arranged in a compact, folded configuration, such as for stowage under a surgical table, storage, and/or transport. The folded arm configuration may also incorporate the folding, retraction, or other compact storage of components coupled to the robotic arm, such as a table adapter coupling the robotic arm to a surgical patient table, cart, or other surface ..”); [0067] FIGS. 7A and 7B illustrate exemplary variations of robotic arms (similar to robotic arm 500 described above with reference to FIGS. 5A-5C) arranged in an exemplary folded configuration underneath a surgical patient table. …In some variations, for example, the stowage configuration of an arm shown in FIGS. 7A and 7B may occupy a volume of generally between about 8 and about 12 inches high (along the vertical height of the table), between about 8 and about 12 inches wide (along the width of the table), and between about 18 and 22 inches long (along the longitudinal length of the table). In one exemplary variations, for example, the stowage configuration of an arm may occupy a volume of about 10 inches high, about 10 inches wide, and about 20 inches long.”) Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the method of using a mixed reality surgical navigation system for a medical procedure of AZIZIAN, Ryan, Fuerst and Garcia Kilroy with applying a stowage configuration as seen in KOENIG because this modification would allow the robotic arm to be arranged in a compact, folded configuration ([0066] of KOENIG). AZIZIAN, Ryan, Fuerst, Garcia Kilroy and KOENIG are understood to be silent on the remaining limitations of claim 11. In the same field of endeavor, White teaches the virtual image includes a virtual animation of the medical component being arranged in the stowage configuration (see at least abstract “A system and method for using augmented reality device for use during a surgical procedure are described. A system may include an augmented reality device to present a virtual indication, such as a virtual indication of a surgical instrument, a force vector, a direction, or the like. The augmented reality device may present a virtual aspect of a procedure, such as a virtual animation of a step of a procedure.
The augmented reality device may present a virtual object, indication, aspect, etc., within a surgical field while permitting the surgical field or aspects of the surgical field to be viewed through the augmented reality display (e.g., presenting virtual objects mixed with real objects).”; [0042] The contents of some cases may require assembly before a surgical procedure begins. The operator may be prompted on the AR display with instructions for opening the case and locating pieces for the item to be assembled. The AR display may present three-dimensional representations of the pieces or the location of the pieces, such displaying a three-dimensional representation of the piece above where the piece is physically located. Through object recognition or scanning identification tags on a piece (e.g., a bar code, a QR code), the AR device may determine that all necessary pieces have been located. The AR display may present instructions for assembling the item. The instruction may include a list of textual steps, animations, video, or the like. The instructions may be animated steps using the three-dimensional representations of the pieces. When the operator completes the assembly, the AR display may provide a three-dimensional representation of the assembled item such that the operator may confirm the physical assembled item resembles the representation. The operator may then be provided instructions on the AR display for the placement of the assembled item with the surgical procedure room.”; [0046]; [0051] “0051] FIGS. 4A-4C illustrate augmented reality surgical instrument assembly displays 400A-400C in accordance with some embodiments. The AR displays 400A-400C may be used to present virtual representations of portions of a surgical instrument for assembly instructions. For example, FIG. 4A illustrates a series of virtual representations. For example, a first virtual representation 402 illustrates instruments to be assembled. A second virtual representation 404 illustrates a first action to assemble one of the instruments. A third virtual representation 406 illustrates a second action to assemble another of the instruments. The AR display 400A may present each of the virtual representations 402-406, for example in order, such that the corresponding surgical instruments may be assembled. In an example the virtual representation 406 may be displayed after a detection device identifies that an assembly operation corresponding to the virtual representation 404 has been completed (e.g., by a user viewing the AR display 400A). The virtual representations 402-406 may proceed automatically as corresponding assembly is completed for a real surgical instrument, or a surgeon may cause (e.g., by selecting a virtual user indication or performing a gesture) a red light or green light (or other visual indicator) of whether to proceed, to be displayed. The AR displays 400B and 400C illustrate other techniques that may be illustrated using virtual representations to allow a user to assemble an instrument. Other techniques or assembly of instruments may be displayed using virtual representations. 
The AR displays 400A-400C may include animation or video to illustrate assembly instructions.”) Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the method of using a mixed reality surgical navigation system for a medical procedure of AZIZIAN, Ryan, Fuerst and Garcia Kilroy, and the stowage configuration of KOENIG, with presenting virtual representations of portions of a surgical instrument for assembly instructions as seen in White because this modification would improve surgical outcomes and customize surgery for a patient ([0004] of White). Thus, the combination of AZIZIAN, Ryan, Fuerst, Garcia Kilroy, KOENIG and White teaches wherein the second configuration is a stowage configuration and the virtual image includes a virtual animation of the medical component being arranged in the stowage configuration. 5. Claim 12 is rejected under 35 U.S.C. 103 as being unpatentable over AZIZIAN et al, WO2019/006028 (“AZIZIAN”) in view of Ryan et al, U.S Patent Application Publication No.2018/0049622 (“Ryan”) further in view of Fuerst et al., U.S Patent Application Publication No.20210121233 (“Fuerst”) further in view of Garcia Kilroy et al, U.S Patent Application Publication No.20190005848 (“Garcia Kilroy”) further in view of Lau et al, U.S Patent Application Publication No.2021/0153966 (“Lau”) further in view of White et al, U.S Patent Application Publication No.2018/0082480 (“White”) Regarding claim 12, AZIZIAN, Ryan, Fuerst and Garcia Kilroy teach the system of claim 1, including a second configuration and a virtual image of the medical component being arranged in the second configuration (see at least [0022]; [0027], [0038] of AZIZIAN FIG. 1B is a perspective view of one embodiment of a teleoperational assembly 12 which may be referred to as a patient side cart. The patient side cart 12 shown provides for the manipulation of three surgical tools 30a, 30b, 30c (e.g., instrument systems 14) and an imaging device 28 (e.g., endoscopic imaging system 15), such as a stereoscopic endoscope used for the capture of images of the site of the procedure. The imaging device may transmit signals over a cable 56 to the control system 20. Manipulation is provided by teleoperative mechanisms having a number of joints. The imaging device 28 and the surgical tools 30a-c can be positioned and manipulated through incisions in the patient or through a natural orifice (e.g., oral cavity) so that a kinematic remote center is maintained at the entry to minimize the size of the incision or to avoid damage to the natural orifice boundaries. Images of the surgical environment within the patient anatomy can include images of the distal ends of the surgical tools 30a-c when they are positioned within the field-of-view of the imaging device 28. [0028] The patient side cart 22 includes a drivable base 58. The drivable base 58 is connected to a telescoping column 57, which allows for adjustment of the height of the arms 54. The arms 54 may include a rotating joint 55 that both rotates and moves up and down. Each of the arms 54 may be connected to an orienting platform 53. The orienting platform 53 may be capable of 360 degrees of rotation. The patient side cart 22 may also include a telescoping horizontal cantilever 52 for moving the orienting platform 53 in a horizontal direction”; [0096] of Ryan; [0029] of Fuerst).
In addition, the same motivation is used as the rejection for claim 1. AZIZIAN, Ryan, Fuerst and Garcia Kilroy are understood to be silent on the remaining limitations of claim 12. In the same field of endeavor, Lau teaches wherein the second configuration is a draping configuration (see at least [0151] Additionally, the adjustable arm support 250 may also be draped. The adjustable arm support 250 can be particularly challenging as the adjustable arm support 250 can support a plurality of robotic arms 210 which can linearly translate relative to the length of the adjustable arm support 250. To maintain sterility, the robotic arms 210 and the adjustable arm support 250 can be draped simultaneously and with a single drape 300. However, draping both the plurality of robotic arms and the adjustable arm support adds complexity to the shape and design of the drape configuration, in particular in designing the drape to maintain sterility during the draping process. The drape 300 should be able to accommodate the motion of the plurality of robotic arms 210 linearly relative to the length of the adjustable arm support 250, as well motion of each robotic arm 210 in several degrees of motion. As will be described more below, the drape 300 can be multiple times longer (e.g., at least two, three, or four times longer) than the adjustable arm support 250 to accommodate the motion of the plurality of robotic arms 210 relative to the surface of the adjustable arm support 250.) Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the method of using a mixed reality surgical navigation system for a medical procedure of AZIZIAN, Ryan, Fuerst and Garcia Kilroy with applying the draping configuration as seen in Lau because this modification would maintain sterility ([0151] of Lau). AZIZIAN, Ryan, Fuerst, Garcia Kilroy and Lau are understood to be silent on the remaining limitations of claim 12. In the same field of endeavor, White teaches the virtual image includes a virtual animation of the medical component being arranged in the draping configuration (see at least abstract “A system and method for using augmented reality device for use during a surgical procedure are described. A system may include an augmented reality device to present a virtual indication, such as a virtual indication of a surgical instrument, a force vector, a direction, or the like. The augmented reality device may present a virtual aspect of a procedure, such as a virtual animation of a step of a procedure. The augmented reality device may present a virtual object, indication, aspect, etc., within a surgical field while permitting the surgical field or aspects of the surgical field to be viewed through the augmented reality display (e.g., presenting virtual objects mixed with real objects).”; [0042] The contents of some cases may require assembly before a surgical procedure begins. The operator may be prompted on the AR display with instructions for opening the case and locating pieces for the item to be assembled. The AR display may present three-dimensional representations of the pieces or the location of the pieces, such displaying a three-dimensional representation of the piece above where the piece is physically located. Through object recognition or scanning identification tags on a piece (e.g., a bar code, a QR code), the AR device may determine that all necessary pieces have been located. The AR display may present instructions for assembling the item.
The instruction may include a list of textual steps, animations, video, or the like. The instructions may be animated steps using the three-dimensional representations of the pieces. When the operator completes the assembly, the AR display may provide a three-dimensional representation of the assembled item such that the operator may confirm the physical assembled item resembles the representation. The operator may then be provided instructions on the AR display for the placement of the assembled item with the surgical procedure room.”; [0046]; [0051] “FIGS. 4A-4C illustrate augmented reality surgical instrument assembly displays 400A-400C in accordance with some embodiments. The AR displays 400A-400C may be used to present virtual representations of portions of a surgical instrument for assembly instructions. For example, FIG. 4A illustrates a series of virtual representations. For example, a first virtual representation 402 illustrates instruments to be assembled. A second virtual representation 404 illustrates a first action to assemble one of the instruments. A third virtual representation 406 illustrates a second action to assemble another of the instruments. The AR display 400A may present each of the virtual representations 402-406, for example in order, such that the corresponding surgical instruments may be assembled. In an example the virtual representation 406 may be displayed after a detection device identifies that an assembly operation corresponding to the virtual representation 404 has been completed (e.g., by a user viewing the AR display 400A). The virtual representations 402-406 may proceed automatically as corresponding assembly is completed for a real surgical instrument, or a surgeon may cause (e.g., by selecting a virtual user indication or performing a gesture) a red light or green light (or other visual indicator) of whether to proceed, to be displayed. The AR displays 400B and 400C illustrate other techniques that may be illustrated using virtual representations to allow a user to assemble an instrument. Other techniques or assembly of instruments may be displayed using virtual representations. The AR displays 400A-400C may include animation or video to illustrate assembly instructions.”) Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the method of using a mixed reality surgical navigation system for a medical procedure of AZIZIAN, Ryan, Fuerst and Garcia Kilroy, and the draping configuration of Lau, with presenting virtual representations of portions of a surgical instrument for assembly instructions as seen in White because this modification would improve surgical outcomes and customize surgery for a patient ([0004] of White). Thus, the combination of AZIZIAN, Ryan, Fuerst, Garcia Kilroy, Lau and White teaches wherein the second configuration is a draping configuration and the virtual image includes a virtual animation of the medical component being arranged in the draping configuration. 6. Claims 13 and 15 are rejected under 35 U.S.C.
103 as being unpatentable over AZIZIAN et al, WO2019/006028 (“AZIZIAN”) in view of Ryan et al, U.S Patent Application Publication No.2018/0049622 (“Ryan”) further in view of Fuerst et al., U.S Patent Application Publication No.20210121233 (“Fuerst”) further in view of Garcia Kilroy et al, U.S Patent Application Publication No.20190005848 (“Garcia Kilroy”) further in view of White et al, U.S Patent Application Publication No.2018/0082480 (“White”) Regarding claim 13, AZIZIAN, Ryan, Fuerst and Garcia Kilroy teach the system of claim 1, including the virtual image (see at least: [0038] of AZIZIAN; [0094]-[0096] of Ryan). In addition, the same motivation is used as the rejection for claim 1. AZIZIAN, Ryan, Fuerst and Garcia Kilroy are understood to be silent on the remaining limitations of claim 13. In the same field of endeavor, White teaches wherein the virtual image includes a virtual animation of a procedure step (see at least abstract “A system and method for using augmented reality device for use during a surgical procedure are described. A system may include an augmented reality device to present a virtual indication, such as a virtual indication of a surgical instrument, a force vector, a direction, or the like. The augmented reality device may present a virtual aspect of a procedure, such as a virtual animation of a step of a procedure. The augmented reality device may present a virtual object, indication, aspect, etc., within a surgical field while permitting the surgical field or aspects of the surgical field to be viewed through the augmented reality display (e.g., presenting virtual objects mixed with real objects).”; [0042] The contents of some cases may require assembly before a surgical procedure begins. The operator may be prompted on the AR display with instructions for opening the case and locating pieces for the item to be assembled. The AR display may present three-dimensional representations of the pieces or the location of the pieces, such displaying a three-dimensional representation of the piece above where the piece is physically located. Through object recognition or scanning identification tags on a piece (e.g., a bar code, a QR code), the AR device may determine that all necessary pieces have been located. The AR display may present instructions for assembling the item. The instruction may include a list of textual steps, animations, video, or the like. The instructions may be animated steps using the three-dimensional representations of the pieces. When the operator completes the assembly, the AR display may provide a three-dimensional representation of the assembled item such that the operator may confirm the physical assembled item resembles the representation. The operator may then be provided instructions on the AR display for the placement of the assembled item with the surgical procedure room.”) Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the method of using a mixed reality surgical navigation system for a medical procedure of AZIZIAN, Ryan, Fuerst and Garcia Kilroy with presenting a virtual animation of a step of a procedure as seen in White because this modification would improve surgical outcomes and customize surgery for a patient ([0004] of White). Thus, the combination of AZIZIAN, Ryan, Fuerst, Garcia Kilroy and White teaches wherein the virtual image includes a virtual animation of a procedure step.
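For illustration only, the following sketch captures the step-progression behavior White describes, in which a series of virtual animation steps advances only after a detection device or user input confirms that the corresponding real-world operation is complete. The class name and step identifiers are assumptions made solely for this example and are not taken from White.

from dataclasses import dataclass
from typing import List, Optional

@dataclass
class ProcedureStepPlayer:
    # Advances through an ordered list of virtual animation steps, moving to the
    # next step only after a detector (object recognition, tag scan, user
    # gesture, etc.) reports that the corresponding real-world operation is done.
    steps: List[str]   # e.g., identifiers of animation clips to display
    index: int = 0

    def current_step(self) -> Optional[str]:
        # The step whose animation should currently be shown, or None when done.
        return self.steps[self.index] if self.index < len(self.steps) else None

    def on_step_detected_complete(self) -> None:
        # Called by the detection pipeline; triggers display of the next animation.
        if self.index < len(self.steps):
            self.index += 1

player = ProcedureStepPlayer(steps=["open_case", "attach_arm_segment", "verify_assembly"])
assert player.current_step() == "open_case"
player.on_step_detected_complete()
assert player.current_step() == "attach_arm_segment"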
Regarding claim 15, AZIZIAN, Ryan, Fuerst, Garcia Kilroy teach the system of claim 14, wherein the virtual image includes a virtual for the medical component and the auxiliary component (see at least [0038] of AZIZIAN ; [0096], [018] of Ryan, Fig.5 of Fuerst, Fig.5B of Garcia Kilroy) In addition, the same motivation is used as the rejection for claim 1. AZIZIAN, Ryan, Fuerst, Garcia Kilroy are understood to be silent on the remaining limitations of claim 15. In the same field of endeavor, White teaches wherein the virtual image includes a virtual animation of a set-up procedure for the medical component and the auxiliary component (see at least abstract “A system and method for using augmented reality device for use during a surgical procedure are described. A system may include an augmented reality device to present a virtual indication, such as a virtual indication of a surgical instrument, a force vector, a direction, or the like. The augmented reality device may present a virtual aspect of a procedure, such as a virtual animation of a step of a procedure. The augmented reality device may present a virtual object, indication, aspect, etc., within a surgical field while permitting the surgical field or aspects of the surgical field to be viewed through the augmented reality display (e.g., presenting virtual objects mixed with real objects).”;[0046]; [0051],[0054] The technique 500 may include displaying patient or procedure information, for example in a heads-up portion of the AR display. In an example, a series of virtual components may be presented, such as to provide instruction for assembly of the surgical instrument. The series of virtual components may be displayed in progression automatically, such as in response to detecting (e.g., automatically using a detection device or from a user input) that an operation corresponding to one of the series of virtual components is complete. The system may be able to automatically detect the different instruments being used during a procedure by shape recognition, RFID tags, bar codes, or similar tagging or recognition methods.”; [0042] The contents of some cases may require assembly before a surgical procedure begins. The operator may be prompted on the AR display with instructions for opening the case and locating pieces for the item to be assembled. The AR display may present three-dimensional representations of the pieces or the location of the pieces, such displaying a three-dimensional representation of the piece above where the piece is physically located. Through object recognition or scanning identification tags on a piece (e.g., a bar code, a QR code), the AR device may determine that all necessary pieces have been located. The AR display may present instructions for assembling the item. The instruction may include a list of textual steps, animations, video, or the like. The instructions may be animated steps using the three-dimensional representations of the pieces. When the operator completes the assembly, the AR display may provide a three-dimensional representation of the assembled item such that the operator may confirm the physical assembled item resembles the representation. The operator may then be provided instructions on the AR display for the placement of the assembled item with the surgical procedure room.”) In addition, the same motivation is used as the rejection for claim 13. 
Thus, the combination of AZIZIAN, Ryan, Fuerst, Garcia Kilroy and White teaches wherein the virtual image includes a virtual animation of a set-up procedure for the medical component and the auxiliary component. 7. Claim 16 is rejected under 35 U.S.C. 103 as being unpatentable over AZIZIAN et al, WO2019/006028 (“AZIZIAN”) in view of Ryan et al, U.S Patent Application Publication No.2018/0049622 (“Ryan”) further in view of Fuerst et al., U.S Patent Application Publication No.20210121233 (“Fuerst”) further in view of Garcia Kilroy et al, U.S Patent Application Publication No.20190005848 (“Garcia Kilroy”) further in view of LEVIN et al, U.S Patent Application Publication No.2023/0363821 (“LEVIN”) Regarding claim 16, AZIZIAN, Ryan, Fuerst, Garcia Kilroy teach the system of claim 1, wherein the virtual image includes a patient, and wherein the virtual image including the patient and the medical component (see at least [0038] of AZIZIAN ;[0108] of Ryan “FIG. 24 depicts an exemplary embodiment of a MXUI shown to the user 106 via the display device 104 during positioning of the acetabular shell of a hip replacement procedure wherein a virtual target 2400 for the acetabular impactor assembly 1100 and a virtual shell 2402 are shown. Placement of the acetabular impactor assembly 1100 is guided by manipulating it to align with the virtual target 2400. The posterior/lateral quadrant of the shell portion of the virtual target may be displayed in a different color or otherwise visually differentiated from the rest of the shell 2402 to demarcate to the user 106 a target for safe placement of screws into the acetabulum. The numerical angle of the acetabular impactor and the depth of insertion relative to the reamed or un-reamed acetabulum are displayed numerically as virtual text 2404. A magnified stereoscopic image (not shown) similar to 2202 centered on the tip of the impactor may be displayed showing how the virtual shell interfaces with the acetabulum of the virtual pelvis 2102.”, Fig.5 of Fuerst, Fig. 5B of Garcia Kilroy) In addition, the same motivation is used as the rejection for claim 1. AZIZIAN, Ryan, Fuerst, Garcia Kilroy are understood to be silent on the remaining limitation of claim 16. In the same field of endeavor, LEVIN teaches wherein the virtual image includes a patient, and wherein the virtual image includes a virtual animation including the patient and the medical component(see at least [0143] At optional step 306, an animation of a patient on a patient bed in the procedure room may be displayed. In some embodiments, the patient may be displayed in the animation already having an automated (robotic) medical device mounted thereon, or in close proximity, thereto. [0162] According to some embodiments, at optional step 418, the method may include displaying an animation of a virtual instrument being steered by a virtual robot to the next checkpoint or to the target. According to some embodiments, the animation may include the selected medical instrument. According to some embodiments, the animation may be stored in a memory module, for example memory module 108. According to some embodiments, the animation may be generated in real-time using algorithm(s). According to some embodiments, the animation may include a virtual medical instrument being steered by a virtual automated medical device from one checkpoint to the next along the trajectory. According to some embodiments, the method may include displaying the advancement of the instrument until target is reached. 
According to some embodiments, the animation of the virtual instrument being advanced may be shown on a cross-sectional view of the virtual patient, which may correspond to the image-view presented on the display.”; [0200] FIG. 7F is a screenshot of GUI 70 and an animation window 72, which shows an animation of a patient lying on the patient bed with an automated device 770, for example automated device 60 shown in FIG. 6A, attached to the patient's body. According to some embodiments, as shown in FIG. 7F, the automated device 770 may be attached to the patient's body using an attachment apparatus 775, such as the mounting base/attachment frame disclosed in abovementioned U.S. Pat. No. 11,103,277 or U.S. Patent Application Publication No. 2021/228,311.”).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the method of using a mixed reality surgical navigation system for medical procedures of AZIZIAN, Ryan, Fuerst, and Garcia Kilroy with displaying a virtual animation including the patient and the medical instrument as seen in LEVIN, because this modification would increase efficiency and safety of the medical procedures involved ([0007] of LEVIN). Thus, the combination of AZIZIAN, Ryan, Fuerst, Garcia Kilroy, and LEVIN teaches wherein the virtual image includes a patient, and wherein the virtual image includes a virtual animation including the patient and the medical component.
8. Claim 4 is rejected under 35 U.S.C. 103 as being unpatentable over AZIZIAN et al., WO2019/006028 (“AZIZIAN”) in view of Ryan et al., U.S. Patent Application Publication No. 2018/0049622 (“Ryan”).
Regarding independent claim 4, AZIZIAN teaches a system ([0020] Referring to FIG. 1A of the drawings, a teleoperational medical system for use in, for example, medical procedures including diagnostic, therapeutic, or surgical procedures, is generally indicated by the reference numeral 10. As will be described, the teleoperational medical systems of this disclosure are under the teleoperational control of a surgeon. In alternative embodiments, a teleoperational medical system may be under the partial control of a computer programmed to perform the procedure or sub-procedure. In still other alternative embodiments, a fully automated medical system, under the full control of a computer”) comprising: a processor; and a memory having computer readable instructions stored thereon, the computer readable instructions, when executed by the processor, ([0023] The teleoperational medical system 10 also includes a control system 20. The control system 20 includes at least one memory 24 and at least one processor 22, and typically a plurality of processors, for effecting control between the medical instrument system 14, the operator input system 16, and other auxiliary systems 26 which may include, for example, imaging systems, audio systems, fluid delivery systems, display systems, illumination systems, steering control systems, irrigation systems, and/or suction systems. The control system 20 can be used to process the images of the surgical environment from the imaging system 15 for subsequent display to the surgeon S through the surgeon's console 16.
The control system 20 also includes programmed instructions (e.g., a computer-readable medium storing the instructions) to implement some or all of the methods described in accordance with aspects disclosed herein”) cause the system to: receive an image of a medical environment ([0022],[0038] “At a process 108, the three-dimensionally mapped endoscopic image dataset is used to generate and display an image from a vantage point that may assist the surgeon. For example, an initial vantage point may be the viewpoint from the distal end of the endoscope within the patient anatomy (i.e., the conventional endoscopic view from the endoscope). The initial vantage point image may be constructed based on the three dimensional point cloud. FIG. 3 illustrates a display 150 visible by a surgeon at the operator console or on another display screen. The display 150 includes an endoscopic image 152 of a surgical environment including instruments 156, 157, 158. The image 152 is generated from the mapped endoscopic image dataset. Optionally, the display 150 includes an anatomic model image 154 generated from the anatomic model dataset. In this example, the endoscopic image 152 is overlaid, superimposed, blended or otherwise mapped onto the anatomic model image 154. The display 150 also includes instrument illustrations 156', 157', 158' overlaid or superimposed on the anatomic model image 154. The instrument illustrations 156', 157', 158' may be generated based on instrument pose and relative scale, known or determined from kinematic chain, position or shape sensor information, vision-based tracking or a combination of those”); identify a medical component in the image of the medical environment, the medical component disposed in a first configuration ([0038] At a process 108, the three-dimensionally mapped endoscopic image dataset is used to generate and display an image from a vantage point that may assist the surgeon. For example, an initial vantage point may be the viewpoint from the distal end of the endoscope within the patient anatomy (i.e., the conventional endoscopic view from the endoscope). The initial vantage point image may be constructed based on the three dimensional point cloud. FIG. 3 illustrates a display 150 visible by a surgeon at the operator console or on another display screen. The display 150 includes an endoscopic image 152 of a surgical environment including instruments 156, 157, 158. The image 152 is generated from the mapped endoscopic image dataset. Optionally, the display 150 includes an anatomic model image 154 generated from the anatomic model dataset. In this example, the endoscopic image 152 is overlaid, superimposed, blended or otherwise mapped onto the anatomic model image 154. The display 150 also includes instrument illustrations 156', 157', 158' overlaid or superimposed on the anatomic model image 154. The instrument illustrations 156', 157', 158' may be generated based on instrument pose and relative scale, known or determined from kinematic chain, position or shape sensor information, vision-based tracking or a combination of those. Various tool tracking techniques have been described, for example in U.S. Patent Application No. 11/865,014, filed September 30, 2007, disclosing "METHODS AND SYSTEMS FOR ROBOTIC INSTRUMENT TOOL TRACKING WITH ADAPTIVE FUSION OF KINEMATICS INFORMATION AND F AGE INFORMATION" and U.S. Patent Application No. 11/130,471, filed May 16, 2005. 
disclosing "Methods and system for performing 3-D tool tracking by fusion of sensor and/or camera derived data during minimally invasive robotic surgery," which are incorporated by reference herein their entirety. The display 150 thus provides the surgeon with additional environmental context beyond that provided by the endoscopic image 152 alone. For example, the anatomic model image 154 illustrates additional organs, bones, and tumors not visible on the endoscopic image 152, and the instrument illustrations 156', 157', 158' provide information about instrument trajectories not visible on the endoscopic image. In some embodiments, the endoscopic image 152 may be distinguishable from the anatomic model image 154 by a border illustration, color differences (e.g., the endoscopic image in color, and the anatomic model image in grayscale), in brightness, or other distinguishing characteristic”); receive kinematic information about the medical component (see at least [0022]; [0038] At a process 108, the three-dimensionally mapped endoscopic image dataset is used to generate and display an image from a vantage point that may assist the surgeon. For example, an initial vantage point may be the viewpoint from the distal end of the endoscope within the patient anatomy (i.e., the conventional endoscopic view from the endoscope). The initial vantage point image may be constructed based on the three dimensional point cloud. FIG. 3 illustrates a display 150 visible by a surgeon at the operator console or on another display screen. The display 150 includes an endoscopic image 152 of a surgical environment including instruments 156, 157, 158. The image 152 is generated from the mapped endoscopic image dataset. Optionally, the display 150 includes an anatomic model image 154 generated from the anatomic model dataset. In this example, the endoscopic image 152 is overlaid, superimposed, blended or otherwise mapped onto the anatomic model image 154. The display 150 also includes instrument illustrations 156', 157', 158' overlaid or superimposed on the anatomic model image 154. The instrument illustrations 156', 157', 158' may be generated based on instrument pose and relative scale, known or determined from kinematic chain, position or shape sensor information, vision-based tracking or a combination of those. Various tool tracking techniques have been described, for example in U.S. Patent Application No. 11/865,014, filed September 30, 2007, disclosing "METHODS AND SYSTEMS FOR ROBOTIC INSTRUMENT TOOL TRACKING WITH ADAPTIVE FUSION OF KINEMATICS INFORMATION AND F AGE INFORMATION" and U.S. Patent Application No. 11/130,471, filed May 16, 2005. disclosing "Methods and system for performing 3-D tool tracking by fusion of sensor and/or camera derived data during minimally invasive robotic surgery," which are incorporated by reference herein their entirety. The display 150 thus provides the surgeon with additional environmental context beyond that provided by the endoscopic image 152 alone. For example, the anatomic model image 154 illustrates additional organs, bones, and tumors not visible on the endoscopic image 152, and the instrument illustrations 156', 157', 158' provide information about instrument trajectories not visible on the endoscopic image. 
In some embodiments, the endoscopic image 152 may be distinguishable from the anatomic model image 154 by a border illustration, color differences (e.g., the endoscopic image in color, and the anatomic model image in grayscale), in brightness, or other distinguishing characteristics”); and generate virtual guidance based on the kinematic information, the virtual guidance including a virtual image of the medical component disposed in a second configuration (see at least [0038] At a process 108, the three-dimensionally mapped endoscopic image dataset is used to generate and display an image from a vantage point that may assist the surgeon. For example, an initial vantage point may be the viewpoint from the distal end of the endoscope within the patient anatomy (i.e., the conventional endoscopic view from the endoscope). The initial vantage point image may be constructed based on the three dimensional point cloud. FIG. 3 illustrates a display 150 visible by a surgeon at the operator console or on another display screen. The display 150 includes an endoscopic image 152 of a surgical environment including instruments 156, 157, 158. The image 152 is generated from the mapped endoscopic image dataset. Optionally, the display 150 includes an anatomic model image 154 generated from the anatomic model dataset. In this example, the endoscopic image 152 is overlaid, superimposed, blended or otherwise mapped onto the anatomic model image 154. The display 150 also includes instrument illustrations 156', 157', 158' overlaid or superimposed on the anatomic model image 154. The instrument illustrations 156', 157', 158' may be generated based on instrument pose and relative scale, known or determined from kinematic chain, position or shape sensor information, vision-based tracking or a combination of those. Various tool tracking techniques have been described, for example in U.S. Patent Application No. 11/865,014, filed September 30, 2007, disclosing "METHODS AND SYSTEMS FOR ROBOTIC INSTRUMENT TOOL TRACKING WITH ADAPTIVE FUSION OF KINEMATICS INFORMATION AND F AGE INFORMATION" and U.S. Patent Application No. 11/130,471, filed May 16, 2005. disclosing "Methods and system for performing 3-D tool tracking by fusion of sensor and/or camera derived data during minimally invasive robotic surgery," which are incorporated by reference herein their entirety. The display 150 thus provides the surgeon with additional environmental context beyond that provided by the endoscopic image 152 alone. For example, the anatomic model image 154 illustrates additional organs, bones, and tumors not visible on the endoscopic image 152, and the instrument illustrations 156', 157', 158' provide information about instrument trajectories not visible on the endoscopic image. In some embodiments, the endoscopic image 152 may be distinguishable from the anatomic model image 154 by a border illustration, color differences (e.g., the endoscopic image in color, and the anatomic model image in grayscale), in brightness, or other distinguishing characteristics”; [0041] Optionally, the display 160 includes an anatomic model image 164 generated from the anatomic model dataset. In this example, the endoscopic image 162 is overlaid, superimposed, blended or otherwise mapped onto the anatomic model image 164. The display 160 also includes an instrument illustration 166 overlaid or superimposed on the anatomic model image 164. 
The instrument illustration 166 may be generated based on instrument pose and scaling known or determined from kinematic chain, position or shape sensor information, vision-based tracking or a combination of those. The display 160 thus provides the surgeon with additional environmental context beyond that provided by the endoscopic image 162 alone. For example, the anatomic model image 164 illustrates additional organs and bones not visible on the endoscopic image 162, and the instrument illustration 166 provides information about instrument trajectories not visible on the endoscopic image. From the second vantage point, the surgeon is able to observe a more expansive anatomical area while still maintaining awareness of the direct view from the endoscope. The display of the anatomic model image 164 is optional because in some examples, the endoscopic image 162 alone, presenting the three-dimensionally mapped endoscopic image dataset from the second vantage point, may provide the surgeon with sufficient spatial perspective.), wherein the medical component is a robot-assisted manipulator assembly (see at least [0020], [0022] The teleoperational assembly 12 supports and manipulates the medical instrument system 14 while the surgeon S views the surgical site through the console 16. An image of the surgical site can be obtained by the endoscopic imaging system 15, such as a stereoscopic endoscope, which can be manipulated by the teleoperational assembly 12 to orient the endoscope 15. The number of medical instrument systems 14 used at one time will generally depend on the diagnostic or surgical procedure and the space constraints within the operating room among other factors. The teleoperational assembly 12 may include a kinematic structure of one or more non-servo controlled links (e.g., one or more links that may be manually positioned and locked in place, generally referred to as a set-up structure) and a teleoperational manipulator. The teleoperational assembly 12 includes a plurality of motors that drive inputs on the medical instrument system 14. These motors move in response to commands from the control system (e.g., control system 20). The motors include drive systems which when coupled to the medical instrument system 14 may advance the medical instrument into a naturally or surgically created anatomical orifice. Other motorized drive systems may move the distal end of the medical instrument in multiple degrees of freedom, which may include three degrees of linear motion (e.g., linear motion along the X, Y, Z Cartesian axes) and in three degrees of rotational motion (e.g., rotation about the X, Y, Z Cartesian axes). Additionally, the motors can be used to actuate an articulable end effector of the instrument for grasping tissue in the jaws of a biopsy device or the like. Instruments 14 may include end effectors having a single working member such as a scalpel, a blunt blade, an optical fiber, or an electrode. Other end effectors may include, for example, forceps, graspers, scissors, or clip appliers.” Further, AZIZIAN also teaches at least [0027] FIG. 1B is a perspective view of one embodiment of a teleoperational assembly 12 which may be referred to as a patient side cart. The patient side cart 12 shown provides for the manipulation of three surgical tools 30a, 30b, 30c (e.g., instrument systems 14) and an imaging device 28 (e.g., endoscopic imaging system 15), such as a stereoscopic endoscope used for the capture of images of the site of the procedure.
The imaging device may transmit signals over a cable 56 to the control system 20”, where the teleoperational assembly supports and manipulates the medical instrument system, which is considered a robot-assisted manipulator assembly; [0038] …Various tool tracking techniques have been described, for example in U.S. Patent Application No. 11/865,014, filed September 30, 2007, disclosing "METHODS AND SYSTEMS FOR ROBOTIC INSTRUMENT TOOL TRACKING WITH ADAPTIVE FUSION OF KINEMATICS INFORMATION AND IMAGE INFORMATION" and U.S. Patent Application No. 11/130,471, filed May 16, 2005, disclosing "Methods and system for performing 3-D tool tracking by fusion of sensor and/or camera derived data during minimally invasive robotic surgery," which are incorporated by reference herein their entirety”)
AZIZIAN is understood to be silent on the remaining limitation regarding virtual guidance. In the same field of endeavor, Ryan teaches receive kinematic information about the medical component (see at least [0079] Referring to FIG. 5, the one or more cameras 500, 506 of the sensor suites (400, 422, 210, and 306) and the one or more visual markers 502, 504 are used to visually track a distinct object (e.g., a surgical tool, a desired location within an anatomical object, etc.) and determine attitude and position relative to the user 106. In one embodiment, each of the one or more markers is distinct and different from each other visually. Standalone object recognition and machine vision technology can be used for marker recognition. Alternatively, the present invention also provides for assisted tracking using IMUs 408 on one or more objects of interest, including but not limited to the markers 502, 504. Please note that the one or more cameras 500, 506 can be remotely located from the user 106 and provide additional data for tracking and localization.”); generate virtual guidance based on the kinematic information, the virtual guidance including a virtual image of the medical component disposed in a second configuration (see at least [0096] FIG. 7 depicts an alternate view of the MXUI previously shown in FIG. 6, wherein a virtual target 700 and a virtual tool 702 are presented to the user 106 for easy use in achieving the desired version and inclination. In this embodiment, further combinations of virtual reality are used to optimize the natural feeling experience for the user by having a virtual target 700 with actual tool 702 fully visible or a virtual tool (not shown) with virtual target fully visible. Other combinations of real and virtual imagery can optionally be provided. Presentation of data can be in readable form 704 or in the form of imagery including but not limited to 3D representations of tools or other guidance forms.”; [0107] FIG. 23 depicts an exemplary embodiment of a MXUI shown to the user 106 via the display device 104 during resection of the femoral neck of a hip replacement procedure with a virtual resection guide 2300. A sagittal saw 2302 is shown having a plurality of fiducials 2304 defining a marker, allows the pose of the sagittal, saw 2302 to be tracked. Resection of the femoral neck can be guided either by lining up the actual saw blade 2306 with the virtual resection guide 2300 in the case where the drill is not tracked or by lining up a virtual saw blade (not shown) with the virtual resection guide 2300 in the case where the saw 2302 is tracked. As with the tracked drill shown in FIG. 20, the angles of the saw 2302 may be displayed numerically if the saw 2302 is tracked.
These angles could be displayed relative to the pelvic reference frame or the femoral reference frame. [0108] FIG. 24 depicts an exemplary embodiment of a MXUI shown to the user 106 via the display device 104 during positioning of the acetabular shell of a hip replacement procedure wherein a virtual target 2400 for the acetabular impactor assembly 1100 and a virtual shell 2402 are shown. Placement of the acetabular impactor assembly 1100 is guided by manipulating it to align with the virtual target 2400. The posterior/lateral quadrant of the shell portion of the virtual target may be displayed in a different color or otherwise visually differentiated from the rest of the shell 2402 to demarcate to the user 106 a target for safe placement of screws into the acetabulum. The numerical angle of the acetabular impactor and the depth of insertion relative to the reamed or un-reamed acetabulum are displayed numerically as virtual text 2404. A magnified stereoscopic image (not shown) similar to 2202 centered on the tip of the impactor may be displayed showing how the virtual shell interfaces with the acetabulum of the virtual pelvis 2102.”, where Ryan provides virtual guidance based on the pose and position of the medical component (tool), including the presentation of data in readable form (704) and a 3D representation of the tool. In addition, the virtual target (700) is a virtual image of the medical component and is arranged at a position different from that of the virtual tool (702), and thus corresponds to the medical component disposed in a second configuration), wherein robot-assisted systems are also disclosed ([0003] Current medical procedures are typically performed by a surgeon or medical professional with little or no assistance outside of the required tools to affect changes on the patient. For example, an orthopedic surgeon may have some measurement tools (e.g. rulers or similar) and cutting tools (e.g. saws or drills), but visual, audible and tactile inputs to the surgeon are not assisted. In other words, the surgeon sees nothing but what he or she is operating on, hears nothing but the normal communications from other participants in the operating room, and feels nothing outside of the normal feedback from grasping tools or other items of interest in the procedure. Alternatively, large console type navigation or robotic systems are utilized in which the display and cameras are located outside the sterile field away from the surgeon. These require the surgeon to repeatedly shift his or her gaze between the surgical site and the two-dimensional display. Also, the remote location of the cameras introduces line-of-sight issues when drapes, personnel or instruments obstruct the camera's view of the markers in the sterile field and the vantage point of the camera does not lend itself to imaging within the wound. Anatomic registrations are typically conducted using a stylus with markers to probe in such a way that the markers are visible to the cameras.)
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the method of using a mixed reality surgical navigation system for medical procedures of AZIZIAN with generating virtual guidance based on the pose and position of the medical component as seen in Ryan, because this modification would assist in the performance of a medical procedure ([0005] of Ryan). Thus, the combination of AZIZIAN and Ryan teaches a system comprising: a processor; and a memory having computer readable instructions stored thereon, the computer readable instructions, when executed by the processor, cause the system to: receive an image of a medical environment; identify a medical component in the image of the medical environment, the medical component disposed in a first configuration; receive kinematic information about the medical component; and generate virtual guidance based on the kinematic information, the virtual guidance including a virtual image of the medical component disposed in a second configuration, wherein the medical component is a robot-assisted manipulator assembly.
Conclusion
Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a). A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
Contact
Any inquiry concerning this communication or earlier communications from the examiner should be directed to SARAH LE, whose telephone number is (571) 270-7842. The examiner can normally be reached Monday: 8AM-4:30PM EST, Tuesday: 8AM-3:30PM EST, Wednesday: 8AM-2:30PM EST, Thursday and Friday off. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Kent Chang, can be reached at (571) 272-7667. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300. Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free).
If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/SARAH LE/
Primary Examiner, Art Unit 2614
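Read as a data flow, independent claim 4 recites four operations: receive an image of the medical environment, identify the medical component disposed in a first configuration, receive kinematic information about it, and generate virtual guidance that includes a virtual image of the component in a second configuration. The sketch below is a hypothetical Python illustration of that flow only; every name in it (Pose, VirtualGuidance, identify_component, generate_guidance) is invented here, and it does not represent Applicant's implementation or any cited reference's code.

    # Hypothetical sketch of the data flow recited in independent claim 4.
    # All names are illustrative; this is not Applicant's or any reference's code.
    from dataclasses import dataclass
    from typing import Tuple

    @dataclass
    class Pose:
        joints: tuple               # stand-in for a configuration (joint angles / placement)

    @dataclass
    class VirtualGuidance:
        component_id: str
        first_configuration: Pose   # configuration identified in the environment image
        second_configuration: Pose  # configuration rendered as the virtual image

    def identify_component(image) -> Tuple[str, Pose]:
        # Stub: a real system would detect the manipulator assembly in the image
        # and estimate its current pose; fixed values keep the sketch runnable.
        return "manipulator_assembly", Pose(joints=(0.0, 0.0, 0.0))

    def generate_guidance(image, kinematic_info: dict) -> VirtualGuidance:
        """Receive an image, identify the component in its first configuration,
        and build virtual guidance showing it in a second configuration."""
        component_id, current = identify_component(image)
        target = Pose(joints=tuple(kinematic_info["target_joints"]))
        return VirtualGuidance(component_id, current, target)

    guidance = generate_guidance(image=None, kinematic_info={"target_joints": [0.2, 1.1, -0.4]})
    print(guidance.second_configuration)    # Pose(joints=(0.2, 1.1, -0.4))

The stubbed detection function only keeps the sketch self-contained; a real system would estimate both configurations from the image and the kinematic chain.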

Prosecution Timeline

May 31, 2023
Application Filed
Jul 24, 2025
Non-Final Rejection — §103, §112
Sep 22, 2025
Applicant Interview (Telephonic)
Sep 22, 2025
Examiner Interview Summary
Oct 13, 2025
Response Filed
Feb 12, 2026
Final Rejection — §103, §112 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12569321
PROPOSING DENTAL RESTORATION MATERIAL PARAMETERS
2y 5m to grant · Granted Mar 10, 2026
Patent 12573128
Progressive Compression of Geometry for Graphics Processing
2y 5m to grant · Granted Mar 10, 2026
Patent 12536715
GENERATION OF STYLIZED DRAWING OF THREE-DIMENSIONAL SHAPES USING NEURAL NETWORKS
2y 5m to grant · Granted Jan 27, 2026
Patent 12505585
SYSTEMS AND METHODS FOR OVERLAY OF VIRTUAL OBJECT ON PROXY OBJECT
2y 5m to grant · Granted Dec 23, 2025
Patent 12505590
NODE LIGHTING
2y 5m to grant · Granted Dec 23, 2025
Study what changed to get past this examiner. Based on 5 most recent grants.

Prosecution Projections

3-4
Expected OA Rounds
67%
Grant Probability
99%
With Interview (+33.4%)
3y 1m
Median Time to Grant
Moderate
PTA Risk
Based on 258 resolved cases by this examiner. Grant probability derived from career allow rate.
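As a rough arithmetic check on how these headline figures relate, assuming the with-interview figure is simply the base grant probability plus the stated interview lift, capped below 100% (the report's actual model is not disclosed here):

    # Rough sanity check on the projection figures shown above.
    # Assumes simple addition with a cap; the report's methodology is not disclosed.
    base_rate = 0.67          # "Grant Probability" shown above
    interview_lift = 0.334    # "+33.4%" interview lift shown above
    with_interview = min(base_rate + interview_lift, 0.99)
    print(f"{with_interview:.0%}")   # 99%, matching the "With Interview" figure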