Prosecution Insights
Last updated: April 19, 2026
Application No. 19/360,022

SURGICAL DISPLAY

Status: Non-Final OA (§103)
Filed: Oct 16, 2025
Examiner: CHIN, MICHELLE
Art Unit: 2614
Tech Center: 2600 — Communications
Assignee: Xenco Medical LLC
OA Round: 1 (Non-Final)

Predictions:
Grant Probability: 85% (Favorable)
Expected OA Rounds: 1-2
Time to Grant: 2y 4m
Grant Probability with Interview: 97%

Examiner Intelligence

Career Allowance Rate: 85% (540 granted / 634 resolved), +23.2% vs Tech Center average (above average)
Interview Lift: +11.5% (moderate), among resolved cases with interview
Avg Prosecution: 2y 4m, with 29 applications currently pending
Career History: 663 total applications across all art units
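The headline figures above are simple arithmetic over the raw counts. A minimal sketch re-deriving them (treating the "97% with interview" figure as roughly base probability plus interview lift is an assumption; the dashboard's model may combine them differently):

```python
# Re-derive the dashboard's headline examiner figures from the raw counts.
granted, resolved = 540, 634

career_rate = granted / resolved          # 0.8517... -> displayed as "85%"
implied_tc_avg = career_rate - 0.232      # dashboard shows +23.2% vs TC average

# Assumption: "97% with interview" ~ base grant probability + interview lift.
base_grant_prob = 0.85
interview_lift = 0.115                    # +11.5% lift on cases with interview
with_interview = base_grant_prob + interview_lift   # 0.965, shown rounded as 97%

print(f"Career allowance rate: {career_rate:.1%}")
print(f"Implied TC average:    {implied_tc_avg:.1%}")
print(f"With interview (est.): {with_interview:.1%}")
```

The small gap between 96.5% and the displayed 97% suggests the tool rounds or models the interview effect rather than adding it directly.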

Statute-Specific Performance

§101: 8.8% (-31.2% vs TC avg)
§103: 70.6% (+30.6% vs TC avg)
§102: 5.1% (-34.9% vs TC avg)
§112: 1.6% (-38.4% vs TC avg)
Tech Center averages are estimates, based on career data from 634 resolved cases.
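One consistency check on the statute table: subtracting each delta from the examiner's rate should recover the same Tech Center baseline. A quick sketch (the 40.0% baseline is inferred from the figures above, not a published USPTO number):

```python
# Implied Tech Center baseline per statute: examiner rate minus delta vs TC avg.
statute_stats = {
    "101": (8.8, -31.2),
    "103": (70.6, +30.6),
    "102": (5.1, -34.9),
    "112": (1.6, -38.4),
}
for statute, (rate, delta) in statute_stats.items():
    implied_avg = round(rate - delta, 1)
    print(f"§{statute}: examiner {rate}% -> implied TC avg {implied_avg}%")
# Every statute resolves to the same 40.0% baseline, so the deltas are
# internally consistent with a single Tech Center average estimate.
```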

Office Action

§103
DETAILED ACTION

Notice of Pre-AIA or AIA Status

1. The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Information Disclosure Statement

2. The information disclosure statement (IDS) was submitted on 10/16/2025. The submission is in compliance with the provisions of 37 CFR 1.97. Accordingly, the information disclosure statement is being considered by the examiner.

Claim Rejections - 35 USC § 103

3. In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

4. The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

5. The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.

6.
Claim(s) 1-20 is/are rejected under 35 U.S.C. 103 as being unpatentable over Lang (US 2021/0137634 A1) in view of Verard et al. (US 2014/0282008 A1). 7. With reference to claim 1, Lang teaches A method of surgical analysis, comprising obtaining surgeon hand positional information, (“The present disclosure relates to devices and methods for performing various interventional or surgical procedures with visual guidance using an optical head mounted display with cardiac and/or respiratory gating of the displayed virtual data.” [0002] “the virtual image is a three-dimensional digital representation corresponding to at least one portion of a physical anatomical target for surgical or other medical intervention.” [0006] “3D scanning can be used for imaging of the patient and/or the surgical site and/or anatomic landmarks and/or pathologic structures and/or tissues (e.g. damaged or diseased cartilage or exposed subchondral bone) and/or the surgeon or interventionalist's hands and/or fingers and/or the OR table and/or reference areas or points and/or marker, e.g. optical markers, in the operating room and/or on the patient and/or on the surgical field. … One or more optical imaging systems or 3D scanners can, for example, be used to image and/or monitor, e.g. the coordinates, position, orientation, alignment, direction of movement, speed of movement of, Anatomic landmarks, patient surface(s), organ surface(s), tissue surface(s), pathologic tissues and/or surface(s), e.g. for purposes of registration, e.g. of the patient and/or the surgical site, e.g. one or more bones or cartilage, and/or one or more OHMD's, e.g. in a common coordinate system The surgeon or interventionalist's hands and/or fingers, e.g. for Monitoring steps in an interventional procedure. Select hand and/or finger movements can be associated with corresponding surgical steps. 
When the 3D scanner system detects a particular hand and/or finger movement, it can trigger the display of the corresponding surgical step or the next surgical step, e.g. by displaying a predetermined virtual path, e.g. for a catheter, a virtual instrument, a virtual device etc. … One or more OHMDs, e.g. registered in a common coordinate system, e.g. with the surgical site and/or the surgeon or interventionalist's hands and/or fingers.” [0235-0240]) Lang also teaches obtaining a patient intervention site dataset, (“Any of the 2D or 3D virtual data in any of the embodiments throughout the specification can include an arterial run-off, e.g. a peripheral run-off or an aortic run-off, optionally including bolus chasing. Bolus chasing can permit real-time visualization of the contrast bolus so that it can be followed, for example, peripherally with images, e.g. digital images, being acquired at a suitable frame rate. If the run-off and/or bolus chasing includes movement of the table on which the patient is positioned during the image acquisition, any 2D or 3D virtual data, e.g. from a pre-operative imaging study, e.g. a CTA or MRA or ultrasound, can be moved with the patient and/or table movement to maintain superimposition and/or alignment of a first 2D or 3D virtual data set, e.g. from the pre-operative imaging study, displayed by one or more OHMDs with a second or additional 2D or 3D virtual data sets, e.g. images obtained during the run-off and/or bolus chasing, displayed by the one or more OHMDs or a separate, standalone computer monitor using any of the combinations outlined in exemplary form in Tables 11, 13 and 14. The moving of the 2D or 3D virtual data, e.g. 
from a first set of virtual data from a pre-operative imaging test such as a CTA, MRA or ultrasound, displayed, using a computer processor, by the one or more OHMDs can be accomplished using the image registration including 3D-2D transformation matrices or 3D-3D registration techniques and/or the known table movement, e.g. in x, y, and/or z-direction, and/or the measured movement of one or more markers applied to the table or the patient. The image registration can include selection and/or matching and/or superimposition and/or alignment of a different volume of interest from a pre-operative vascular imaging study, e.g. an ultrasound, echocardiogram, CTA or MRA, to match the run-off and/or bolus chasing images and to maintain the superimposition and/or alignment of the first virtual dataset displayed by the one or more OHMDs with the second virtual data set, i.e. the run-off and/or bolus chasing images, e.g. displayed by the one or more OHMDs or a standalone or separate computer monitor or display.” [1043] “The derived position and/or orientation of the catheter and/or catheter tip or other device (e.g. a guidewire, sheath, stent, coil, instrument, implant, vascular prosthesis or other intra-vascular or endoluminal instrument and/or device) can be overlaid on a 3D dataset, e.g. a 3D dataset acquired prior to or during the interventional or surgical procedure. The 3D dataset, including the overlaid catheter position, can be displayed by the one or more OHMD's.” [1046]) Lang further teaches applying the surgeon hand positional information to the patient intervention site dataset (“Any of the 2D or 3D virtual data in any of the embodiments throughout the specification can include an arterial run-off, e.g. a peripheral run-off or an aortic run-off, optionally including bolus chasing. Bolus chasing can permit real-time visualization of the contrast bolus so that it can be followed, for example, peripherally with images, e.g. 
digital images, being acquired at a suitable frame rate. If the run-off and/or bolus chasing includes movement of the table on which the patient is positioned during the image acquisition, any 2D or 3D virtual data, e.g. from a pre-operative imaging study, e.g. a CTA or MRA or ultrasound, can be moved with the patient and/or table movement to maintain superimposition and/or alignment of a first 2D or 3D virtual data set, e.g. from the pre-operative imaging study, displayed by one or more OHMDs with a second or additional 2D or 3D virtual data sets, e.g. images obtained during the run-off and/or bolus chasing, displayed by the one or more OHMDs or a separate, standalone computer monitor using any of the combinations outlined in exemplary form in Tables 11, 13 and 14. The moving of the 2D or 3D virtual data, e.g. from a first set of virtual data from a pre-operative imaging test such as a CTA, MRA or ultrasound, displayed, using a computer processor, by the one or more OHMDs can be accomplished using the image registration including 3D-2D transformation matrices or 3D-3D registration techniques and/or the known table movement, e.g. in x, y, and/or z-direction, and/or the measured movement of one or more markers applied to the table or the patient. The image registration can include selection and/or matching and/or superimposition and/or alignment of a different volume of interest from a pre-operative vascular imaging study, e.g. an ultrasound, echocardiogram, CTA or MRA, to match the run-off and/or bolus chasing images and to maintain the superimposition and/or alignment of the first virtual dataset displayed by the one or more OHMDs with the second virtual data set, i.e. the run-off and/or bolus chasing images, e.g. 
displayed by the one or more OHMDs or a standalone or separate computer monitor or display.” [1043] “A combination of data can be beneficial for more accurate measurement of changes in position or orientation of the surgeon or interventionalist's head, body, operating arm, hand, or the patient.” [0291] “3D scanning can be used for imaging of the patient and/or the surgical site and/or anatomic landmarks and/or pathologic structures and/or tissues (e.g. damaged or diseased cartilage or exposed subchondral bone) and/or the surgeon or interventionalist's hands and/or fingers and/or the OR table and/or reference areas or points and/or marker, e.g. optical markers, in the operating room and/or on the patient and/or on the surgical field. … One or more optical imaging systems or 3D scanners can, for example, be used to image and/or monitor, e.g. the coordinates, position, orientation, alignment, direction of movement, speed of movement of, Anatomic landmarks, patient surface(s), organ surface(s), tissue surface(s), pathologic tissues and/or surface(s), e.g. for purposes of registration, e.g. of the patient and/or the surgical site, e.g. one or more bones or cartilage, and/or one or more OHMD's, e.g. in a common coordinate system The surgeon or interventionalist's hands and/or fingers, e.g. for Monitoring steps in an interventional procedure. Select hand and/or finger movements can be associated with corresponding surgical steps. When the 3D scanner system detects a particular hand and/or finger movement, it can trigger the display of the corresponding surgical step or the next surgical step, e.g. by displaying a predetermined virtual path, e.g. for a catheter, a virtual instrument, a virtual device etc. … One or more OHMDs, e.g. registered in a common coordinate system, e.g. 
with the surgical site and/or the surgeon or interventionalist's hands and/or fingers.” [0235-0240]) Lang does not explicitly teach assess impact of surgical hand position on a patient intervention site. This is what Verard teaches (“a fellow/practitioner could first select a program called "HIP" by activating a trigger point 504 to display a 3D CT image of a patient's (subject's) hip, and then select different "HIP IMPLANTS" from different manufacturers to see and "feel" which implant would fit best for the particular patient. It is also possible to use (e.g., physically hold and manipulate) the actual implant in the air and position it within the 3D holographic display to see, feel and assess fit (e.g., if and how well such implant may fit the particular patient).” [0059] “Robotics via a master/slave configuration can be used, where a shape sensed analog 604 of the device 602 moving within the display 158 is employed to actuate the motion of the actual device 602 within a target region 606. A practitioner's (surgeon's, physician's, etc.) hands 610 or voice can be tracked by sensor-based and/or voice-based techniques, such as by, e.g., tracking a physician's hands using a shape-sensing device 608 and shape sensing system 614 in the 3D holographic display. Accordingly, a practitioner's movements (including, e.g., (re)positioning, orientation, etc. of their hands) performed in the holographic display 158 can be transmitted to the device 602, such as a robot 612 (e.g., robotically controlled instruments) inside the patient to replicate such movements within the patient's body, and thereby perform the actual surgery, procedure or task inside the patient's body. 
Thus, a surgeon can see, touch and feel a 3D holographic display of an organ, perform a procedure thereon (i.e., within the 3D holographic display), causing such procedure to be performed inside of a patient on the actual organ via, or simply to move instruments, e.g., robotically controlled instruments.” [0062]) Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of Verard into Lang, in order to improve accuracy and effectiveness in medical applications.

[Image: media_image1.png, greyscale]

8. With reference to claim 2, Lang teaches the patient intervention site dataset is malleable such that a depiction of the patient intervention site may be computationally adjusted to simulate an effect of the surgeon hand positional information without reobtaining patient intervention site dataset data. (“tissue deformation, a shape change or removal of tissue caused by the surgery or surgical instruments can be simulated in the virtual data. The resultant simulated virtual data can then be registered related to the live patient data, either before and/or after deformation, alteration of shape or removal of tissue of the live patient. The tissue deformation, shape change or removal of tissue caused by the surgery or surgical instruments can include the shape alteration or removal of one or more osteophytes or bone spurs or other bony anatomy or deformity. The virtual data of the patient and the live data of the patient can be registered in a common coordinate system, for example with one or more OHMDs. … the registration of virtual patient data and live patient data using the techniques described herein can be repeated after one or more surgical steps have been performed.
In this case, the surgically altered tissue or tissue surface or tissue contour or tissue perimeter or tissue volume or other tissue features in the live patient can be matched to, superimposed onto and/or registered with the surgically altered tissue or tissue surface or tissue contour or tissue perimeter or tissue volume or other tissue features in the virtual data of the patient, e.g. in a virtual surgical plan developed for the patient. The matching, superimposing and/or registering of the live data of the patient and the virtual data of the patient after the surgical tissue alteration can be performed using the same techniques described in the foregoing or any of the other registration techniques described in the specification or any other registration technique known in the art. Re-registration of live patient data and virtual patient data can be particularly helpful if the surgical alteration or surgical step has led to some tissue deformation. For example, the re-registration can be performed by matching, superimposing, and/or registering tissues that have not been performed by the surgical step or surgical alteration. Alternatively, the re-registration can be performed by matching, superimposing and/or registering deformed live patient data, e.g. from surgically deformed tissue, with virtual patient data that simulate the same tissue deformation after the virtual surgical step, e.g. an osteophyte or tissue removal. “ [0498-0499] “Registration can be performed, for example, based on motion data, kinematic data in the live data which can then be registered to an estimate or simulated center of rotation in the virtual data of the patient). Registration can be performed using metabolic data, for example using an area of high 18 FDG-PET uptake in a PET scan or PET-MRI or PET CT, which can be, for example matched to an area of increased body temperature in a target surgical site. Registration can be performed using functional data, e.g. 
using functional MRI studies.” [0510] “The OHMD can display virtual alterations to a surgical site superimposed onto the live surgical site prior to the physical alteration of the live surgical site. The virtual alterations to a surgical can be simulated using a virtual surgical plan.” [0877] “Any of the 2D or 3D virtual data in any of the embodiments throughout the specification can include an arterial run-off, e.g. a peripheral run-off or an aortic run-off, optionally including bolus chasing. Bolus chasing can permit real-time visualization of the contrast bolus so that it can be followed, for example, peripherally with images, e.g. digital images, being acquired at a suitable frame rate. If the run-off and/or bolus chasing includes movement of the table on which the patient is positioned during the image acquisition, any 2D or 3D virtual data, e.g. from a pre-operative imaging study, e.g. a CTA or MRA or ultrasound, can be moved with the patient and/or table movement to maintain superimposition and/or alignment of a first 2D or 3D virtual data set, e.g. from the pre-operative imaging study, displayed by one or more OHMDs with a second or additional 2D or 3D virtual data sets, e.g. images obtained during the run-off and/or bolus chasing, displayed by the one or more OHMDs or a separate, standalone computer monitor using any of the combinations outlined in exemplary form in Tables 11, 13 and 14. The moving of the 2D or 3D virtual data, e.g. from a first set of virtual data from a pre-operative imaging test such as a CTA, MRA or ultrasound, displayed, using a computer processor, by the one or more OHMDs can be accomplished using the image registration including 3D-2D transformation matrices or 3D-3D registration techniques and/or the known table movement, e.g. in x, y, and/or z-direction, and/or the measured movement of one or more markers applied to the table or the patient. 
The image registration can include selection and/or matching and/or superimposition and/or alignment of a different volume of interest from a pre-operative vascular imaging study, e.g. an ultrasound, echocardiogram, CTA or MRA, to match the run-off and/or bolus chasing images and to maintain the superimposition and/or alignment of the first virtual dataset displayed by the one or more OHMDs with the second virtual data set, i.e. the run-off and/or bolus chasing images, e.g. displayed by the one or more OHMDs or a standalone or separate computer monitor or display.” [1043] “Catheter or device tracking methods that work independently from the x-ray fluoroscopy imaging, e.g. MRI-based tracking or electromagnetic tracking, do not require continuous updates of the x-ray image. Information about 3D position and orientation of the catheter and/or catheter tip or other device (e.g. a guidewire, sheath, stent, coil, instrument, implant, vascular prosthesis or other intra-vascular or endoluminal instrument and/or device) can be available continuously and in real-time.” [1049]) 9. With reference to claim 3, Lang teaches obtaining surgeon hand positional information comprises recording surgeon hand position pursuant to performance of a surgery on a patient. (“While the OHMD is placed in the fixed position, live data can be viewed by the surgeon or interventionalist and they can be, optionally recorded with a camera and/or displayed on a monitor. Virtual data can then be superimposed and the matching and registration of virtual data and live data can be performed.” [0269] “3D scanning can be used for imaging of the patient and/or the surgical site and/or anatomic landmarks and/or pathologic structures and/or tissues (e.g. damaged or diseased cartilage or exposed subchondral bone) and/or the surgeon or interventionalist's hands and/or fingers and/or the OR table and/or reference areas or points and/or marker, e.g. 
optical markers, in the operating room and/or on the patient and/or on the surgical field. … One or more optical imaging systems or 3D scanners can, for example, be used to image and/or monitor, e.g. the coordinates, position, orientation, alignment, direction of movement, speed of movement of, Anatomic landmarks, patient surface(s), organ surface(s), tissue surface(s), pathologic tissues and/or surface(s), e.g. for purposes of registration, e.g. of the patient and/or the surgical site, e.g. one or more bones or cartilage, and/or one or more OHMD's, e.g. in a common coordinate system The surgeon or interventionalist's hands and/or fingers, e.g. for Monitoring steps in an interventional procedure. Select hand and/or finger movements can be associated with corresponding surgical steps. When the 3D scanner system detects a particular hand and/or finger movement, it can trigger the display of the corresponding surgical step or the next surgical step, e.g. by displaying a predetermined virtual path, e.g. for a catheter, a virtual instrument, a virtual device etc. … One or more OHMDs, e.g. registered in a common coordinate system, e.g. with the surgical site and/or the surgeon or interventionalist's hands and/or fingers.” [0235-0240]) 10. With reference to claim 4, Lang teaches obtaining surgeon hand positional information comprises obtaining positional information for positional markers that mark surgeon hand position. (“3D scanning can be used for imaging of the patient and/or the surgical site and/or anatomic landmarks and/or pathologic structures and/or tissues (e.g. damaged or diseased cartilage or exposed subchondral bone) and/or the surgeon or interventionalist's hands and/or fingers and/or the OR table and/or reference areas or points and/or marker, e.g. optical markers, in the operating room and/or on the patient and/or on the surgical field. … One or more optical imaging systems or 3D scanners can, for example, be used to image and/or monitor, e.g. 
the coordinates, position, orientation, alignment, direction of movement, speed of movement of, Anatomic landmarks, patient surface(s), organ surface(s), tissue surface(s), pathologic tissues and/or surface(s), e.g. for purposes of registration, e.g. of the patient and/or the surgical site, e.g. one or more bones or cartilage, and/or one or more OHMD's, e.g. in a common coordinate system The surgeon or interventionalist's hands and/or fingers, e.g. for Monitoring steps in an interventional procedure. Select hand and/or finger movements can be associated with corresponding surgical steps. When the 3D scanner system detects a particular hand and/or finger movement, it can trigger the display of the corresponding surgical step or the next surgical step, e.g. by displaying a predetermined virtual path, e.g. for a catheter, a virtual instrument, a virtual device etc. … One or more OHMDs, e.g. registered in a common coordinate system, e.g. with the surgical site and/or the surgeon or interventionalist's hands and/or fingers.” [0235-0240]) 11. With reference to claim 5, Lang teaches obtaining the patient intervention site dataset comprises obtaining at least one x-ray. (“This can also be accomplished by attaching one or more markers (e.g. retroreflective markers or markers with RF emitters for surgical navigation, or markers, e.g. with geometric patterns, for video imaging) to the x-ray imaging system that can also be tracked by the OHMD. Any relative movement between the markers attached to the table and the markers attached to the x-ray imaging system can then be measured and used for updating the registration parameters. The markers can optionally be radiopaque or include radiopaque elements. Any of the foregoing techniques, e.g. 
image based registration of 3D models and 2D or 3D angiograms and registration using table movement and related coordinates and/or marker movement can be combined.” [1038] “If the run-off and/or bolus chasing includes movement of the table on which the patient is positioned during the image acquisition, any 2D or 3D virtual data, e.g. from a pre-operative imaging study, e.g. a CTA or MRA or ultrasound, can be moved with the patient and/or table movement to maintain superimposition and/or alignment of a first 2D or 3D virtual data set, e.g. from the pre-operative imaging study, displayed by one or more OHMDs with a second or additional 2D or 3D virtual data sets, e.g. images obtained during the run-off and/or bolus chasing, displayed by the one or more OHMDs or a separate, standalone computer monitor using any of the combinations outlined in exemplary form in Tables 11, 13 and 14. The moving of the 2D or 3D virtual data, e.g. from a first set of virtual data from a pre-operative imaging test such as a CTA, MRA or ultrasound, displayed, using a computer processor, by the one or more OHMDs can be accomplished using the image registration including 3D-2D transformation matrices or 3D-3D registration techniques and/or the known table movement, e.g. in x, y, and/or z-direction, and/or the measured movement of one or more markers applied to the table or the patient. The image registration can include selection and/or matching and/or superimposition and/or alignment of a different volume of interest from a pre-operative vascular imaging study, e.g. an ultrasound, echocardiogram, CTA or MRA, to match the run-off and/or bolus chasing images and to maintain the superimposition and/or alignment of the first virtual dataset displayed by the one or more OHMDs with the second virtual data set, i.e. the run-off and/or bolus chasing images, e.g. 
displayed by the one or more OHMDs or a standalone or separate computer monitor or display.” [1043] “a 3D-2D registration can be performed to match a 3D model of the marker(s) on the catheter and/or catheter tip or other device (e.g. a guidewire, sheath, stent, coil, instrument, implant, vascular prosthesis or other intra-vascular or endoluminal instrument and/or device) with the projected marker of the catheter and/or catheter tip or other device (e.g. a guidewire, sheath, stent, coil, instrument, implant, vascular prosthesis or other intra-vascular or endoluminal instrument and/or device) on the x-ray image. The virtual 3D model of the marker(s) and/or the catheter and/or catheter tip or other device, e.g. a guidewire, sheath, stent, coil, instrument, implant, vascular prosthesis or other intra-vascular or endoluminal instrument and/or device, can then be displayed by one or more OHMDs, for example superimposed on a vascular structure or vascular tree or other structures of the patient displayed by a standalone or separate computer monitor or displayed by the OHMD. Alternatively, a search for the projected marker of the catheter and/or catheter tip or other device (e.g. a guidewire, sheath, stent, coil, instrument, implant, vascular prosthesis or other intra-vascular or endoluminal instrument and/or device) on the x-ray image can be performed by cross correlation of a tip template with the pixels in the x-ray image. After this step, the position and direction of the catheter tip in the 2D coordinate system of the x-ray image can be known. Biplanar angiography systems can utilize two separate x-ray sources in a fixed and known spatial configuration and allow for simultaneous visualization of two imaging planes.” [1045]) 12. With reference to claim 6, Lang teaches obtaining the patient intervention site dataset comprises obtaining MRI data. 
(“A 3D model of the vasculature and/or the heart can be generated using pre-operative 3D imaging data, for example a computed tomography (CT) scan or magnetic resonance imaging (MRI) scan. Preferably, the pre-operative 3D images are acquired using vascular enhancement such as CT angiography, e.g. spiral CT angiography, (with use of contrast media) or MR angiography with injection of a contrast agent or use of contrast enhancing MRI pulse sequences.” [1027] “If the run-off and/or bolus chasing includes movement of the table on which the patient is positioned during the image acquisition, any 2D or 3D virtual data, e.g. from a pre-operative imaging study, e.g. a CTA or MRA or ultrasound, can be moved with the patient and/or table movement to maintain superimposition and/or alignment of a first 2D or 3D virtual data set, e.g. from the pre-operative imaging study, displayed by one or more OHMDs with a second or additional 2D or 3D virtual data sets, e.g. images obtained during the run-off and/or bolus chasing, displayed by the one or more OHMDs or a separate, standalone computer monitor using any of the combinations outlined in exemplary form in Tables 11, 13 and 14. The moving of the 2D or 3D virtual data, e.g. from a first set of virtual data from a pre-operative imaging test such as a CTA, MRA or ultrasound, displayed, using a computer processor, by the one or more OHMDs can be accomplished using the image registration including 3D-2D transformation matrices or 3D-3D registration techniques and/or the known table movement, e.g. in x, y, and/or z-direction, and/or the measured movement of one or more markers applied to the table or the patient. The image registration can include selection and/or matching and/or superimposition and/or alignment of a different volume of interest from a pre-operative vascular imaging study, e.g. 
an ultrasound, echocardiogram, CTA or MRA, to match the run-off and/or bolus chasing images and to maintain the superimposition and/or alignment of the first virtual dataset displayed by the one or more OHMDs with the second virtual data set, i.e. the run-off and/or bolus chasing images, e.g. displayed by the one or more OHMDs or a standalone or separate computer monitor or display.” [1043]) 13. With reference to claim 7, Lang teaches obtaining the patient intervention site dataset comprises obtaining at least one patient image. (“Pre-operative, intra-operative or post-operative images of the patient can be acquired 240. The image data can optionally be segmented 241. 3D reconstructions of the patient's anatomy or pathology including multiple different tissues, e.g. using different colors or shading, can be generated 242. Virtual 3D models of surgical instruments and devices components can be generated which can include their predetermined position, location, rotation, orientation, alignment and/or direction 243. The virtual 3D models can be registered, for example in relationship to the OHMD and the patient 244. The virtual 3D models can be registered relative to the live patient data 245.” [0796] “If the run-off and/or bolus chasing includes movement of the table on which the patient is positioned during the image acquisition, any 2D or 3D virtual data, e.g. from a pre-operative imaging study, e.g. a CTA or MRA or ultrasound, can be moved with the patient and/or table movement to maintain superimposition and/or alignment of a first 2D or 3D virtual data set, e.g. from the pre-operative imaging study, displayed by one or more OHMDs with a second or additional 2D or 3D virtual data sets, e.g. images obtained during the run-off and/or bolus chasing, displayed by the one or more OHMDs or a separate, standalone computer monitor using any of the combinations outlined in exemplary form in Tables 11, 13 and 14. The moving of the 2D or 3D virtual data, e.g. 
from a first set of virtual data from a pre-operative imaging test such as a CTA, MRA or ultrasound, displayed, using a computer processor, by the one or more OHMDs can be accomplished using the image registration including 3D-2D transformation matrices or 3D-3D registration techniques and/or the known table movement, e.g. in x, y, and/or z-direction, and/or the measured movement of one or more markers applied to the table or the patient. The image registration can include selection and/or matching and/or superimposition and/or alignment of a different volume of interest from a pre-operative vascular imaging study, e.g. an ultrasound, echocardiogram, CTA or MRA, to match the run-off and/or bolus chasing images and to maintain the superimposition and/or alignment of the first virtual dataset displayed by the one or more OHMDs with the second virtual data set, i.e. the run-off and/or bolus chasing images, e.g. displayed by the one or more OHMDs or a standalone or separate computer monitor or display.” [1043]) 14. With reference to claim 8, Lang teaches obtaining the patient intervention site dataset comprises imaging a patient prior to a surgery on the patient. (“Pre-operative, intra-operative or post-operative images of the patient can be acquired 240. The image data can optionally be segmented 241. 3D reconstructions of the patient's anatomy or pathology including multiple different tissues, e.g. using different colors or shading, can be generated 242. Virtual 3D models of surgical instruments and devices components can be generated which can include their predetermined position, location, rotation, orientation, alignment and/or direction 243. The virtual 3D models can be registered, for example in relationship to the OHMD and the patient 244. 
The virtual 3D models can be registered relative to the live patient data 245.” [0796] “If the run-off and/or bolus chasing includes movement of the table on which the patient is positioned during the image acquisition, any 2D or 3D virtual data, e.g. from a pre-operative imaging study, e.g. a CTA or MRA or ultrasound, can be moved with the patient and/or table movement to maintain superimposition and/or alignment of a first 2D or 3D virtual data set, e.g. from the pre-operative imaging study, displayed by one or more OHMDs with a second or additional 2D or 3D virtual data sets, e.g. images obtained during the run-off and/or bolus chasing, displayed by the one or more OHMDs or a separate, standalone computer monitor using any of the combinations outlined in exemplary form in Tables 11, 13 and 14. The moving of the 2D or 3D virtual data, e.g. from a first set of virtual data from a pre-operative imaging test such as a CTA, MRA or ultrasound, displayed, using a computer processor, by the one or more OHMDs can be accomplished using the image registration including 3D-2D transformation matrices or 3D-3D registration techniques and/or the known table movement, e.g. in x, y, and/or z-direction, and/or the measured movement of one or more markers applied to the table or the patient. The image registration can include selection and/or matching and/or superimposition and/or alignment of a different volume of interest from a pre-operative vascular imaging study, e.g. an ultrasound, echocardiogram, CTA or MRA, to match the run-off and/or bolus chasing images and to maintain the superimposition and/or alignment of the first virtual dataset displayed by the one or more OHMDs with the second virtual data set, i.e. the run-off and/or bolus chasing images, e.g. displayed by the one or more OHMDs or a standalone or separate computer monitor or display.” [1043]) 15. 
With reference to claim 9, Lang teaches obtaining the patient intervention site dataset comprises receiving a patient intervention site dataset from a patient upon whom surgery at the patient intervention site was previously completed. (“The surgeon or interventionalist can optionally store each surgical instrument or implantable that has been scanned in this manner in a virtual library of surgical instruments or implantables. The virtual surgical instruments or implantables stored in this manner can be named and stored for future use in subsequent surgical procedures in other patients. … the surgeon or interventionalist can use the virtual data of the surgical instrument or implantables that were previously generated in a new surgical plan for another, new patient.” [0789-0790] “Pre-operative, intra-operative or post-operative images of the patient can be acquired 240. The image data can optionally be segmented 241. 3D reconstructions of the patient's anatomy or pathology including multiple different tissues, e.g. using different colors or shading, can be generated 242. Virtual 3D models of surgical instruments and devices components can be generated which can include their predetermined position, location, rotation, orientation, alignment and/or direction 243. The virtual 3D models can be registered, for example in relationship to the OHMD and the patient 244. The virtual 3D models can be registered relative to the live patient data 245.” [0796] “If the run-off and/or bolus chasing includes movement of the table on which the patient is positioned during the image acquisition, any 2D or 3D virtual data, e.g. from a pre-operative imaging study, e.g. a CTA or MRA or ultrasound, can be moved with the patient and/or table movement to maintain superimposition and/or alignment of a first 2D or 3D virtual data set, e.g. from the pre-operative imaging study, displayed by one or more OHMDs with a second or additional 2D or 3D virtual data sets, e.g. 
images obtained during the run-off and/or bolus chasing, displayed by the one or more OHMDs or a separate, standalone computer monitor using any of the combinations outlined in exemplary form in Tables 11, 13 and 14. The moving of the 2D or 3D virtual data, e.g. from a first set of virtual data from a pre-operative imaging test such as a CTA, MRA or ultrasound, displayed, using a computer processor, by the one or more OHMDs can be accomplished using the image registration including 3D-2D transformation matrices or 3D-3D registration techniques and/or the known table movement, e.g. in x, y, and/or z-direction, and/or the measured movement of one or more markers applied to the table or the patient. The image registration can include selection and/or matching and/or superimposition and/or alignment of a different volume of interest from a pre-operative vascular imaging study, e.g. an ultrasound, echocardiogram, CTA or MRA, to match the run-off and/or bolus chasing images and to maintain the superimposition and/or alignment of the first virtual dataset displayed by the one or more OHMDs with the second virtual data set, i.e. the run-off and/or bolus chasing images, e.g. displayed by the one or more OHMDs or a standalone or separate computer monitor or display.” [1043]) 16. With reference to claim 10, Lang teaches the patient intervention site dataset is suitable for a three dimensional depiction. (“If the run-off and/or bolus chasing includes movement of the table on which the patient is positioned during the image acquisition, any 2D or 3D virtual data, e.g. from a pre-operative imaging study, e.g. a CTA or MRA or ultrasound, can be moved with the patient and/or table movement to maintain superimposition and/or alignment of a first 2D or 3D virtual data set, e.g. from the pre-operative imaging study, displayed by one or more OHMDs with a second or additional 2D or 3D virtual data sets, e.g. 
images obtained during the run-off and/or bolus chasing, displayed by the one or more OHMDs or a separate, standalone computer monitor using any of the combinations outlined in exemplary form in Tables 11, 13 and 14. The moving of the 2D or 3D virtual data, e.g. from a first set of virtual data from a pre-operative imaging test such as a CTA, MRA or ultrasound, displayed, using a computer processor, by the one or more OHMDs can be accomplished using the image registration including 3D-2D transformation matrices or 3D-3D registration techniques and/or the known table movement, e.g. in x, y, and/or z-direction, and/or the measured movement of one or more markers applied to the table or the patient. The image registration can include selection and/or matching and/or superimposition and/or alignment of a different volume of interest from a pre-operative vascular imaging study, e.g. an ultrasound, echocardiogram, CTA or MRA, to match the run-off and/or bolus chasing images and to maintain the superimposition and/or alignment of the first virtual dataset displayed by the one or more OHMDs with the second virtual data set, i.e. the run-off and/or bolus chasing images, e.g. displayed by the one or more OHMDs or a standalone or separate computer monitor or display.” [1043]) 17. With reference to claim 11, Lang teaches applying the surgeon hand positional information to the patient intervention site dataset comprises calculating a 3-dimensional image of the patient intervention site. (“Calculate 3D transformation of first 3D vascular tree representation using transformation T” [1398] “Any of the 2D or 3D virtual data in any of the embodiments throughout the specification can include an arterial run-off, e.g. a peripheral run-off or an aortic run-off, optionally including bolus chasing. Bolus chasing can permit real-time visualization of the contrast bolus so that it can be followed, for example, peripherally with images, e.g. 
digital images, being acquired at a suitable frame rate. If the run-off and/or bolus chasing includes movement of the table on which the patient is positioned during the image acquisition, any 2D or 3D virtual data, e.g. from a pre-operative imaging study, e.g. a CTA or MRA or ultrasound, can be moved with the patient and/or table movement to maintain superimposition and/or alignment of a first 2D or 3D virtual data set, e.g. from the pre-operative imaging study, displayed by one or more OHMDs with a second or additional 2D or 3D virtual data sets, e.g. images obtained during the run-off and/or bolus chasing, displayed by the one or more OHMDs or a separate, standalone computer monitor using any of the combinations outlined in exemplary form in Tables 11, 13 and 14. The moving of the 2D or 3D virtual data, e.g. from a first set of virtual data from a pre-operative imaging test such as a CTA, MRA or ultrasound, displayed, using a computer processor, by the one or more OHMDs can be accomplished using the image registration including 3D-2D transformation matrices or 3D-3D registration techniques and/or the known table movement, e.g. in x, y, and/or z-direction, and/or the measured movement of one or more markers applied to the table or the patient. The image registration can include selection and/or matching and/or superimposition and/or alignment of a different volume of interest from a pre-operative vascular imaging study, e.g. an ultrasound, echocardiogram, CTA or MRA, to match the run-off and/or bolus chasing images and to maintain the superimposition and/or alignment of the first virtual dataset displayed by the one or more OHMDs with the second virtual data set, i.e. the run-off and/or bolus chasing images, e.g. 
displayed by the one or more OHMDs or a standalone or separate computer monitor or display.” [1043] “A combination of data can be beneficial for more accurate measurement of changes in position or orientation of the surgeon or interventionalist's head, body, operating arm, hand, or the patient.” [0291] “A processor receives the electrical signals and calculates three-dimensional position information of tissue surfaces based on changes in the relative phase of the emitted optical radiation and the received optical radiation scattered by the surfaces.” [0252] “3D scanning can be used for imaging of the patient and/or the surgical site and/or anatomic landmarks and/or pathologic structures and/or tissues (e.g. damaged or diseased cartilage or exposed subchondral bone) and/or the surgeon or interventionalist's hands and/or fingers and/or the OR table and/or reference areas or points and/or marker, e.g. optical markers, in the operating room and/or on the patient and/or on the surgical field. … One or more optical imaging systems or 3D scanners can, for example, be used to image and/or monitor, e.g. the coordinates, position, orientation, alignment, direction of movement, speed of movement of, Anatomic landmarks, patient surface(s), organ surface(s), tissue surface(s), pathologic tissues and/or surface(s), e.g. for purposes of registration, e.g. of the patient and/or the surgical site, e.g. one or more bones or cartilage, and/or one or more OHMD's, e.g. in a common coordinate system The surgeon or interventionalist's hands and/or fingers, e.g. for Monitoring steps in an interventional procedure. Select hand and/or finger movements can be associated with corresponding surgical steps. When the 3D scanner system detects a particular hand and/or finger movement, it can trigger the display of the corresponding surgical step or the next surgical step, e.g. by displaying a predetermined virtual path, e.g. for a catheter, a virtual instrument, a virtual device etc. 
… One or more OHMDs, e.g. registered in a common coordinate system, e.g. with the surgical site and/or the surgeon or interventionalist's hands and/or fingers.” [0235-0240]) Lang does not explicitly teach assessing the impact of surgical hand position on a patient intervention site. Verard, however, teaches this limitation (“a fellow/practitioner could first select a program called "HIP" by activating a trigger point 504 to display a 3D CT image of a patient's (subject's) hip, and then select different "HIP IMPLANTS" from different manufacturers to see and "feel" which implant would fit best for the particular patient. It is also possible to use (e.g., physically hold and manipulate) the actual implant in the air and position it within the 3D holographic display to see, feel and assess fit (e.g., if and how well such implant may fit the particular patient).” [0059] “Robotics via a master/slave configuration can be used, where a shape sensed analog 604 of the device 602 moving within the display 158 is employed to actuate the motion of the actual device 602 within a target region 606. A practitioner's (surgeon's, physician's, etc.) hands 610 or voice can be tracked by sensor-based and/or voice-based techniques, such as by, e.g., tracking a physician's hands using a shape-sensing device 608 and shape sensing system 614 in the 3D holographic display. Accordingly, a practitioner's movements (including, e.g., (re)positioning, orientation, etc. of their hands) performed in the holographic display 158 can be transmitted to the device 602, such as a robot 612 (e.g., robotically controlled instruments) inside the patient to replicate such movements within the patient's body, and thereby perform the actual surgery, procedure or task inside the patient's body.
Thus, a surgeon can see, touch and feel a 3D holographic display of an organ, perform a procedure thereon (i.e., within the 3D holographic display), causing such procedure to be performed inside of a patient on the actual organ via, or simply to move instruments, e.g., robotically controlled instruments.” [0062]) Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of Verard with those of Lang in order to improve accuracy and effectiveness in medical applications. 18. With reference to claim 12, Lang teaches applying the surgeon hand positional information to the patient intervention site dataset (“Any of the 2D or 3D virtual data in any of the embodiments throughout the specification can include an arterial run-off, e.g. a peripheral run-off or an aortic run-off, optionally including bolus chasing. Bolus chasing can permit real-time visualization of the contrast bolus so that it can be followed, for example, peripherally with images, e.g. digital images, being acquired at a suitable frame rate. If the run-off and/or bolus chasing includes movement of the table on which the patient is positioned during the image acquisition, any 2D or 3D virtual data, e.g. from a pre-operative imaging study, e.g. a CTA or MRA or ultrasound, can be moved with the patient and/or table movement to maintain superimposition and/or alignment of a first 2D or 3D virtual data set, e.g. from the pre-operative imaging study, displayed by one or more OHMDs with a second or additional 2D or 3D virtual data sets, e.g. images obtained during the run-off and/or bolus chasing, displayed by the one or more OHMDs or a separate, standalone computer monitor using any of the combinations outlined in exemplary form in Tables 11, 13 and 14. The moving of the 2D or 3D virtual data, e.g.
from a first set of virtual data from a pre-operative imaging test such as a CTA, MRA or ultrasound, displayed, using a computer processor, by the one or more OHMDs can be accomplished using the image registration including 3D-2D transformation matrices or 3D-3D registration techniques and/or the known table movement, e.g. in x, y, and/or z-direction, and/or the measured movement of one or more markers applied to the table or the patient. The image registration can include selection and/or matching and/or superimposition and/or alignment of a different volume of interest from a pre-operative vascular imaging study, e.g. an ultrasound, echocardiogram, CTA or MRA, to match the run-off and/or bolus chasing images and to maintain the superimposition and/or alignment of the first virtual dataset displayed by the one or more OHMDs with the second virtual data set, i.e. the run-off and/or bolus chasing images, e.g. displayed by the one or more OHMDs or a standalone or separate computer monitor or display.” [1043] “A combination of data can be beneficial for more accurate measurement of changes in position or orientation of the surgeon or interventionalist's head, body, operating arm, hand, or the patient.” [0291] “3D scanning can be used for imaging of the patient and/or the surgical site and/or anatomic landmarks and/or pathologic structures and/or tissues (e.g. damaged or diseased cartilage or exposed subchondral bone) and/or the surgeon or interventionalist's hands and/or fingers and/or the OR table and/or reference areas or points and/or marker, e.g. optical markers, in the operating room and/or on the patient and/or on the surgical field. … One or more optical imaging systems or 3D scanners can, for example, be used to image and/or monitor, e.g. the coordinates, position, orientation, alignment, direction of movement, speed of movement of, Anatomic landmarks, patient surface(s), organ surface(s), tissue surface(s), pathologic tissues and/or surface(s), e.g. 
for purposes of registration, e.g. of the patient and/or the surgical site, e.g. one or more bones or cartilage, and/or one or more OHMD's, e.g. in a common coordinate system The surgeon or interventionalist's hands and/or fingers, e.g. for Monitoring steps in an interventional procedure. Select hand and/or finger movements can be associated with corresponding surgical steps. When the 3D scanner system detects a particular hand and/or finger movement, it can trigger the display of the corresponding surgical step or the next surgical step, e.g. by displaying a predetermined virtual path, e.g. for a catheter, a virtual instrument, a virtual device etc. … One or more OHMDs, e.g. registered in a common coordinate system, e.g. with the surgical site and/or the surgeon or interventionalist's hands and/or fingers.” [0235-0240]) Lang does not explicitly teach that assessing the impact of surgical hand position on a patient intervention site comprises projecting into space the 3-dimensional image of the impact of surgical hand position. Verard, however, teaches this limitation (“A localization system 120 includes a coordinate system 122 to which a holographic image or hologram 124 is registered. The localization system 120 may also be employed to register a monitored object 128, which may include virtual instruments, which are separately created and controlled, real instruments, a physician's hands, fingers or other anatomical parts, etc. The localization system 120 may include an electromagnetic tracking system, a shape sensing system, such as a fiber optic based shape sensing system, an optical sensing system, including light sensors and arrays, or other sensing modality, etc. The localization system 120 is employed to define spatial regions in and around the hologram or the holographic image 124 to enable a triggering of different functions or actions as a result of movement in the area of the hologram 124.
For example, dynamic locations of a physician's hands may be tracked using a fiber optic shape sensing device. When the physician's hands enter the same space, e.g., a monitored space 126 about a projected hologram 124, the intensity of the hologram may be increased. In another example, the physician's hand movements may be employed to spatially alter the position or orientation of the hologram 124 or to otherwise interact with the hologram 124.” [0034] “a fellow/practitioner could first select a program called "HIP" by activating a trigger point 504 to display a 3D CT image of a patient's (subject's) hip, and then select different "HIP IMPLANTS" from different manufacturers to see and "feel" which implant would fit best for the particular patient. It is also possible to use (e.g., physically hold and manipulate) the actual implant in the air and position it within the 3D holographic display to see, feel and assess fit (e.g., if and how well such implant may fit the particular patient).” [0059] “Robotics via a master/slave configuration can be used, where a shape sensed analog 604 of the device 602 moving within the display 158 is employed to actuate the motion of the actual device 602 within a target region 606. A practitioner's (surgeon's, physician's, etc.) hands 610 or voice can be tracked by sensor-based and/or voice-based techniques, such as by, e.g., tracking a physician's hands using a shape-sensing device 608 and shape sensing system 614 in the 3D holographic display. Accordingly, a practitioner's movements (including, e.g., (re)positioning, orientation, etc. of their hands) performed in the holographic display 158 can be transmitted to the device 602, such as a robot 612 (e.g., robotically controlled instruments) inside the patient to replicate such movements within the patient's body, and thereby perform the actual surgery, procedure or task inside the patient's body. 
Thus, a surgeon can see, touch and feel a 3D holographic display of an organ, perform a procedure thereon (i.e., within the 3D holographic display), causing such procedure to be performed inside of a patient on the actual organ via, or simply to move instruments, e.g., robotically controlled instruments.” [0062]) Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of Verard with those of Lang in order to improve accuracy and effectiveness in medical applications. 19. With reference to claim 13, Lang teaches obtaining surgeon hand positional information further comprises obtaining surgical insert positional information. (“3D scanning can be used for imaging of the patient and/or the surgical site and/or anatomic landmarks and/or pathologic structures and/or tissues (e.g. damaged or diseased cartilage or exposed subchondral bone) and/or the surgeon or interventionalist's hands and/or fingers and/or the OR table and/or reference areas or points and/or marker, e.g. optical markers, in the operating room and/or on the patient and/or on the surgical field. … One or more optical imaging systems or 3D scanners can, for example, be used to image and/or monitor, e.g. the coordinates, position, orientation, alignment, direction of movement, speed of movement of, Anatomic landmarks, patient surface(s), organ surface(s), tissue surface(s), pathologic tissues and/or surface(s), e.g. for purposes of registration, e.g. of the patient and/or the surgical site, e.g. one or more bones or cartilage, and/or one or more OHMD's, e.g. in a common coordinate system The surgeon or interventionalist's hands and/or fingers, e.g. for Monitoring steps in an interventional procedure. Select hand and/or finger movements can be associated with corresponding surgical steps.
When the 3D scanner system detects a particular hand and/or finger movement, it can trigger the display of the corresponding surgical step or the next surgical step, e.g. by displaying a predetermined virtual path, e.g. for a catheter, a virtual instrument, a virtual device etc. … One or more OHMDs, e.g. registered in a common coordinate system, e.g. with the surgical site and/or the surgeon or interventionalist's hands and/or fingers.” [0235-0240] “With guidance in mixed reality environment, e.g. with stereoscopic display like an electronic holographic environment, a virtual surgical guide, tool, instrument or implant can be superimposed onto the surgical site, e.g. an organ or a tumor. Further, the physical guide, tool, instrument or implant can be aligned with the 2D or 3D representation of the virtual surgical guide, tool, instrument or implant.” [0305]) 20. With reference to claim 14, Lang teaches applying the surgeon hand positional information to the patient intervention site dataset further comprises applying the surgical insert positional information to the patient intervention site dataset (“Any of the 2D or 3D virtual data in any of the embodiments throughout the specification can include an arterial run-off, e.g. a peripheral run-off or an aortic run-off, optionally including bolus chasing. Bolus chasing can permit real-time visualization of the contrast bolus so that it can be followed, for example, peripherally with images, e.g. digital images, being acquired at a suitable frame rate. If the run-off and/or bolus chasing includes movement of the table on which the patient is positioned during the image acquisition, any 2D or 3D virtual data, e.g. from a pre-operative imaging study, e.g. a CTA or MRA or ultrasound, can be moved with the patient and/or table movement to maintain superimposition and/or alignment of a first 2D or 3D virtual data set, e.g. 
from the pre-operative imaging study, displayed by one or more OHMDs with a second or additional 2D or 3D virtual data sets, e.g. images obtained during the run-off and/or bolus chasing, displayed by the one or more OHMDs or a separate, standalone computer monitor using any of the combinations outlined in exemplary form in Tables 11, 13 and 14. The moving of the 2D or 3D virtual data, e.g. from a first set of virtual data from a pre-operative imaging test such as a CTA, MRA or ultrasound, displayed, using a computer processor, by the one or more OHMDs can be accomplished using the image registration including 3D-2D transformation matrices or 3D-3D registration techniques and/or the known table movement, e.g. in x, y, and/or z-direction, and/or the measured movement of one or more markers applied to the table or the patient. The image registration can include selection and/or matching and/or superimposition and/or alignment of a different volume of interest from a pre-operative vascular imaging study, e.g. an ultrasound, echocardiogram, CTA or MRA, to match the run-off and/or bolus chasing images and to maintain the superimposition and/or alignment of the first virtual dataset displayed by the one or more OHMDs with the second virtual data set, i.e. the run-off and/or bolus chasing images, e.g. displayed by the one or more OHMDs or a standalone or separate computer monitor or display.” [1043] “With guidance in mixed reality environment, e.g. with stereoscopic display like an electronic holographic environment, a virtual surgical guide, tool, instrument or implant can be superimposed onto the surgical site, e.g. an organ or a tumor. 
Further, the physical guide, tool, instrument or implant can be aligned with the 2D or 3D representation of the virtual surgical guide, tool, instrument or implant.” [0305] “A combination of data can be beneficial for more accurate measurement of changes in position or orientation of the surgeon or interventionalist's head, body, operating arm, hand, or the patient.” [0291] “3D scanning can be used for imaging of the patient and/or the surgical site and/or anatomic landmarks and/or pathologic structures and/or tissues (e.g. damaged or diseased cartilage or exposed subchondral bone) and/or the surgeon or interventionalist's hands and/or fingers and/or the OR table and/or reference areas or points and/or marker, e.g. optical markers, in the operating room and/or on the patient and/or on the surgical field. … One or more optical imaging systems or 3D scanners can, for example, be used to image and/or monitor, e.g. the coordinates, position, orientation, alignment, direction of movement, speed of movement of, Anatomic landmarks, patient surface(s), organ surface(s), tissue surface(s), pathologic tissues and/or surface(s), e.g. for purposes of registration, e.g. of the patient and/or the surgical site, e.g. one or more bones or cartilage, and/or one or more OHMD's, e.g. in a common coordinate system The surgeon or interventionalist's hands and/or fingers, e.g. for Monitoring steps in an interventional procedure. Select hand and/or finger movements can be associated with corresponding surgical steps. When the 3D scanner system detects a particular hand and/or finger movement, it can trigger the display of the corresponding surgical step or the next surgical step, e.g. by displaying a predetermined virtual path, e.g. for a catheter, a virtual instrument, a virtual device etc. … One or more OHMDs, e.g. registered in a common coordinate system, e.g. 
with the surgical site and/or the surgeon or interventionalist's hands and/or fingers.” [0235-0240]) Lang does not explicitly teach that assessing the impact of surgical hand position on a patient intervention site comprises assessing the impact of the surgical insert on the patient intervention site. Verard, however, teaches this limitation (“a fellow/practitioner could first select a program called "HIP" by activating a trigger point 504 to display a 3D CT image of a patient's (subject's) hip, and then select different "HIP IMPLANTS" from different manufacturers to see and "feel" which implant would fit best for the particular patient. It is also possible to use (e.g., physically hold and manipulate) the actual implant in the air and position it within the 3D holographic display to see, feel and assess fit (e.g., if and how well such implant may fit the particular patient).” [0059] “Robotics via a master/slave configuration can be used, where a shape sensed analog 604 of the device 602 moving within the display 158 is employed to actuate the motion of the actual device 602 within a target region 606. A practitioner's (surgeon's, physician's, etc.) hands 610 or voice can be tracked by sensor-based and/or voice-based techniques, such as by, e.g., tracking a physician's hands using a shape-sensing device 608 and shape sensing system 614 in the 3D holographic display. Accordingly, a practitioner's movements (including, e.g., (re)positioning, orientation, etc. of their hands) performed in the holographic display 158 can be transmitted to the device 602, such as a robot 612 (e.g., robotically controlled instruments) inside the patient to replicate such movements within the patient's body, and thereby perform the actual surgery, procedure or task inside the patient's body.
Thus, a surgeon can see, touch and feel a 3D holographic display of an organ, perform a procedure thereon (i.e., within the 3D holographic display), causing such procedure to be performed inside of a patient on the actual organ via, or simply to move instruments, e.g., robotically controlled instruments.” [0062]) Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of Verard with those of Lang in order to improve accuracy and effectiveness in medical applications. 21. With reference to claim 15, Lang teaches the applying the surgical insert positional information to the patient intervention site dataset (“Any of the 2D or 3D virtual data in any of the embodiments throughout the specification can include an arterial run-off, e.g. a peripheral run-off or an aortic run-off, optionally including bolus chasing. Bolus chasing can permit real-time visualization of the contrast bolus so that it can be followed, for example, peripherally with images, e.g. digital images, being acquired at a suitable frame rate. If the run-off and/or bolus chasing includes movement of the table on which the patient is positioned during the image acquisition, any 2D or 3D virtual data, e.g. from a pre-operative imaging study, e.g. a CTA or MRA or ultrasound, can be moved with the patient and/or table movement to maintain superimposition and/or alignment of a first 2D or 3D virtual data set, e.g. from the pre-operative imaging study, displayed by one or more OHMDs with a second or additional 2D or 3D virtual data sets, e.g. images obtained during the run-off and/or bolus chasing, displayed by the one or more OHMDs or a separate, standalone computer monitor using any of the combinations outlined in exemplary form in Tables 11, 13 and 14. The moving of the 2D or 3D virtual data, e.g.
from a first set of virtual data from a pre-operative imaging test such as a CTA, MRA or ultrasound, displayed, using a computer processor, by the one or more OHMDs can be accomplished using the image registration including 3D-2D transformation matrices or 3D-3D registration techniques and/or the known table movement, e.g. in x, y, and/or z-direction, and/or the measured movement of one or more markers applied to the table or the patient. The image registration can include selection and/or matching and/or superimposition and/or alignment of a different volume of interest from a pre-operative vascular imaging study, e.g. an ultrasound, echocardiogram, CTA or MRA, to match the run-off and/or bolus chasing images and to maintain the superimposition and/or alignment of the first virtual dataset displayed by the one or more OHMDs with the second virtual data set, i.e. the run-off and/or bolus chasing images, e.g. displayed by the one or more OHMDs or a standalone or separate computer monitor or display.” [1043] “With guidance in mixed reality environment, e.g. with stereoscopic display like an electronic holographic environment, a virtual surgical guide, tool, instrument or implant can be superimposed onto the surgical site, e.g. an organ or a tumor. Further, the physical guide, tool, instrument or implant can be aligned with the 2D or 3D representation of the virtual surgical guide, tool, instrument or implant.” [0305] “A combination of data can be beneficial for more accurate measurement of changes in position or orientation of the surgeon or interventionalist's head, body, operating arm, hand, or the patient.” [0291] “3D scanning can be used for imaging of the patient and/or the surgical site and/or anatomic landmarks and/or pathologic structures and/or tissues (e.g. damaged or diseased cartilage or exposed subchondral bone) and/or the surgeon or interventionalist's hands and/or fingers and/or the OR table and/or reference areas or points and/or marker, e.g. 
optical markers, in the operating room and/or on the patient and/or on the surgical field. … One or more optical imaging systems or 3D scanners can, for example, be used to image and/or monitor, e.g. the coordinates, position, orientation, alignment, direction of movement, speed of movement of, Anatomic landmarks, patient surface(s), organ surface(s), tissue surface(s), pathologic tissues and/or surface(s), e.g. for purposes of registration, e.g. of the patient and/or the surgical site, e.g. one or more bones or cartilage, and/or one or more OHMD's, e.g. in a common coordinate system The surgeon or interventionalist's hands and/or fingers, e.g. for Monitoring steps in an interventional procedure. Select hand and/or finger movements can be associated with corresponding surgical steps. When the 3D scanner system detects a particular hand and/or finger movement, it can trigger the display of the corresponding surgical step or the next surgical step, e.g. by displaying a predetermined virtual path, e.g. for a catheter, a virtual instrument, a virtual device etc. … One or more OHMDs, e.g. registered in a common coordinate system, e.g. with the surgical site and/or the surgeon or interventionalist's hands and/or fingers.” [0235-0240]) Lang does not explicitly teach that assessing the impact of the surgical insert on the patient intervention site further comprises selecting the surgical insert based upon assessing the impact of the surgical insert. Verard teaches this limitation (“a fellow/practitioner could first select a program called "HIP" by activating a trigger point 504 to display a 3D CT image of a patient's (subject's) hip, and then select different "HIP IMPLANTS" from different manufacturers to see and "feel" which implant would fit best for the particular patient. 
It is also possible to use (e.g., physically hold and manipulate) the actual implant in the air and position it within the 3D holographic display to see, feel and assess fit (e.g., if and how well such implant may fit the particular patient).” [0059] “Robotics via a master/slave configuration can be used, where a shape sensed analog 604 of the device 602 moving within the display 158 is employed to actuate the motion of the actual device 602 within a target region 606. A practitioner's (surgeon's, physician's, etc.) hands 610 or voice can be tracked by sensor-based and/or voice-based techniques, such as by, e.g., tracking a physician's hands using a shape-sensing device 608 and shape sensing system 614 in the 3D holographic display. Accordingly, a practitioner's movements (including, e.g., (re)positioning, orientation, etc. of their hands) performed in the holographic display 158 can be transmitted to the device 602, such as a robot 612 (e.g., robotically controlled instruments) inside the patient to replicate such movements within the patient's body, and thereby perform the actual surgery, procedure or task inside the patient's body. Thus, a surgeon can see, touch and feel a 3D holographic display of an organ, perform a procedure thereon (i.e., within the 3D holographic display), causing such procedure to be performed inside of a patient on the actual organ via, or simply to move instruments, e.g., robotically controlled instruments.” [0062]) Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate the teachings of Verard into Lang, in order to improve accuracy and effectiveness in medical applications. 22. With reference to claim 16, Lang teaches that applying the surgical insert positional information to the patient intervention site dataset comprises calculating a 3-dimensional image of the patient intervention site. 
(“Calculate 3D transformation of first 3D vascular tree representation using transformation T” [1398] “Any of the 2D or 3D virtual data in any of the embodiments throughout the specification can include an arterial run-off, e.g. a peripheral run-off or an aortic run-off, optionally including bolus chasing. Bolus chasing can permit real-time visualization of the contrast bolus so that it can be followed, for example, peripherally with images, e.g. digital images, being acquired at a suitable frame rate. If the run-off and/or bolus chasing includes movement of the table on which the patient is positioned during the image acquisition, any 2D or 3D virtual data, e.g. from a pre-operative imaging study, e.g. a CTA or MRA or ultrasound, can be moved with the patient and/or table movement to maintain superimposition and/or alignment of a first 2D or 3D virtual data set, e.g. from the pre-operative imaging study, displayed by one or more OHMDs with a second or additional 2D or 3D virtual data sets, e.g. images obtained during the run-off and/or bolus chasing, displayed by the one or more OHMDs or a separate, standalone computer monitor using any of the combinations outlined in exemplary form in Tables 11, 13 and 14. The moving of the 2D or 3D virtual data, e.g. from a first set of virtual data from a pre-operative imaging test such as a CTA, MRA or ultrasound, displayed, using a computer processor, by the one or more OHMDs can be accomplished using the image registration including 3D-2D transformation matrices or 3D-3D registration techniques and/or the known table movement, e.g. in x, y, and/or z-direction, and/or the measured movement of one or more markers applied to the table or the patient. The image registration can include selection and/or matching and/or superimposition and/or alignment of a different volume of interest from a pre-operative vascular imaging study, e.g. 
an ultrasound, echocardiogram, CTA or MRA, to match the run-off and/or bolus chasing images and to maintain the superimposition and/or alignment of the first virtual dataset displayed by the one or more OHMDs with the second virtual data set, i.e. the run-off and/or bolus chasing images, e.g. displayed by the one or more OHMDs or a standalone or separate computer monitor or display.” [1043] “With guidance in mixed reality environment, e.g. with stereoscopic display like an electronic holographic environment, a virtual surgical guide, tool, instrument or implant can be superimposed onto the surgical site, e.g. an organ or a tumor. Further, the physical guide, tool, instrument or implant can be aligned with the 2D or 3D representation of the virtual surgical guide, tool, instrument or implant.” [0305] “A combination of data can be beneficial for more accurate measurement of changes in position or orientation of the surgeon or interventionalist's head, body, operating arm, hand, or the patient.” [0291] “A processor receives the electrical signals and calculates three-dimensional position information of tissue surfaces based on changes in the relative phase of the emitted optical radiation and the received optical radiation scattered by the surfaces.” [0252] “3D scanning can be used for imaging of the patient and/or the surgical site and/or anatomic landmarks and/or pathologic structures and/or tissues (e.g. damaged or diseased cartilage or exposed subchondral bone) and/or the surgeon or interventionalist's hands and/or fingers and/or the OR table and/or reference areas or points and/or marker, e.g. optical markers, in the operating room and/or on the patient and/or on the surgical field. … One or more optical imaging systems or 3D scanners can, for example, be used to image and/or monitor, e.g. 
the coordinates, position, orientation, alignment, direction of movement, speed of movement of, Anatomic landmarks, patient surface(s), organ surface(s), tissue surface(s), pathologic tissues and/or surface(s), e.g. for purposes of registration, e.g. of the patient and/or the surgical site, e.g. one or more bones or cartilage, and/or one or more OHMD's, e.g. in a common coordinate system The surgeon or interventionalist's hands and/or fingers, e.g. for Monitoring steps in an interventional procedure. Select hand and/or finger movements can be associated with corresponding surgical steps. When the 3D scanner system detects a particular hand and/or finger movement, it can trigger the display of the corresponding surgical step or the next surgical step, e.g. by displaying a predetermined virtual path, e.g. for a catheter, a virtual instrument, a virtual device etc. … One or more OHMDs, e.g. registered in a common coordinate system, e.g. with the surgical site and/or the surgeon or interventionalist's hands and/or fingers.” [0235-0240]) Lang does not explicitly teach assessing the impact of surgical hand position on a patient intervention site. Verard teaches this limitation (“a fellow/practitioner could first select a program called "HIP" by activating a trigger point 504 to display a 3D CT image of a patient's (subject's) hip, and then select different "HIP IMPLANTS" from different manufacturers to see and "feel" which implant would fit best for the particular patient. It is also possible to use (e.g., physically hold and manipulate) the actual implant in the air and position it within the 3D holographic display to see, feel and assess fit (e.g., if and how well such implant may fit the particular patient).” [0059] “Robotics via a master/slave configuration can be used, where a shape sensed analog 604 of the device 602 moving within the display 158 is employed to actuate the motion of the actual device 602 within a target region 606. 
A practitioner's (surgeon's, physician's, etc.) hands 610 or voice can be tracked by sensor-based and/or voice-based techniques, such as by, e.g., tracking a physician's hands using a shape-sensing device 608 and shape sensing system 614 in the 3D holographic display. Accordingly, a practitioner's movements (including, e.g., (re)positioning, orientation, etc. of their hands) performed in the holographic display 158 can be transmitted to the device 602, such as a robot 612 (e.g., robotically controlled instruments) inside the patient to replicate such movements within the patient's body, and thereby perform the actual surgery, procedure or task inside the patient's body. Thus, a surgeon can see, touch and feel a 3D holographic display of an organ, perform a procedure thereon (i.e., within the 3D holographic display), causing such procedure to be performed inside of a patient on the actual organ via, or simply to move instruments, e.g., robotically controlled instruments.” [0062]) Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate the teachings of Verard into Lang, in order to improve accuracy and effectiveness in medical applications. 23. With reference to claim 17, Lang teaches that obtaining surgeon hand positional information further comprises obtaining surgical tool positional information. (“3D scanning can be used for imaging of the patient and/or the surgical site and/or anatomic landmarks and/or pathologic structures and/or tissues (e.g. damaged or diseased cartilage or exposed subchondral bone) and/or the surgeon or interventionalist's hands and/or fingers and/or the OR table and/or reference areas or points and/or marker, e.g. optical markers, in the operating room and/or on the patient and/or on the surgical field. … One or more optical imaging systems or 3D scanners can, for example, be used to image and/or monitor, e.g. 
the coordinates, position, orientation, alignment, direction of movement, speed of movement of, Anatomic landmarks, patient surface(s), organ surface(s), tissue surface(s), pathologic tissues and/or surface(s), e.g. for purposes of registration, e.g. of the patient and/or the surgical site, e.g. one or more bones or cartilage, and/or one or more OHMD's, e.g. in a common coordinate system The surgeon or interventionalist's hands and/or fingers, e.g. for Monitoring steps in an interventional procedure. Select hand and/or finger movements can be associated with corresponding surgical steps. When the 3D scanner system detects a particular hand and/or finger movement, it can trigger the display of the corresponding surgical step or the next surgical step, e.g. by displaying a predetermined virtual path, e.g. for a catheter, a virtual instrument, a virtual device etc. … One or more OHMDs, e.g. registered in a common coordinate system, e.g. with the surgical site and/or the surgeon or interventionalist's hands and/or fingers.” [0235-0240] “With guidance in mixed reality environment, e.g. with stereoscopic display like an electronic holographic environment, a virtual surgical guide, tool, instrument or implant can be superimposed onto the surgical site, e.g. an organ or a tumor. Further, the physical guide, tool, instrument or implant can be aligned with the 2D or 3D representation of the virtual surgical guide, tool, instrument or implant.” [0305]) 24. With reference to claim 18, Lang teaches that applying the surgeon hand positional information to the patient intervention site dataset further comprises applying the surgical tool positional information to the patient intervention site (“Any of the 2D or 3D virtual data in any of the embodiments throughout the specification can include an arterial run-off, e.g. a peripheral run-off or an aortic run-off, optionally including bolus chasing. 
Bolus chasing can permit real-time visualization of the contrast bolus so that it can be followed, for example, peripherally with images, e.g. digital images, being acquired at a suitable frame rate. If the run-off and/or bolus chasing includes movement of the table on which the patient is positioned during the image acquisition, any 2D or 3D virtual data, e.g. from a pre-operative imaging study, e.g. a CTA or MRA or ultrasound, can be moved with the patient and/or table movement to maintain superimposition and/or alignment of a first 2D or 3D virtual data set, e.g. from the pre-operative imaging study, displayed by one or more OHMDs with a second or additional 2D or 3D virtual data sets, e.g. images obtained during the run-off and/or bolus chasing, displayed by the one or more OHMDs or a separate, standalone computer monitor using any of the combinations outlined in exemplary form in Tables 11, 13 and 14. The moving of the 2D or 3D virtual data, e.g. from a first set of virtual data from a pre-operative imaging test such as a CTA, MRA or ultrasound, displayed, using a computer processor, by the one or more OHMDs can be accomplished using the image registration including 3D-2D transformation matrices or 3D-3D registration techniques and/or the known table movement, e.g. in x, y, and/or z-direction, and/or the measured movement of one or more markers applied to the table or the patient. The image registration can include selection and/or matching and/or superimposition and/or alignment of a different volume of interest from a pre-operative vascular imaging study, e.g. an ultrasound, echocardiogram, CTA or MRA, to match the run-off and/or bolus chasing images and to maintain the superimposition and/or alignment of the first virtual dataset displayed by the one or more OHMDs with the second virtual data set, i.e. the run-off and/or bolus chasing images, e.g. 
displayed by the one or more OHMDs or a standalone or separate computer monitor or display.” [1043] “With guidance in mixed reality environment, e.g. with stereoscopic display like an electronic holographic environment, a virtual surgical guide, tool, instrument or implant can be superimposed onto the surgical site, e.g. an organ or a tumor. Further, the physical guide, tool, instrument or implant can be aligned with the 2D or 3D representation of the virtual surgical guide, tool, instrument or implant.” [0305] “A combination of data can be beneficial for more accurate measurement of changes in position or orientation of the surgeon or interventionalist's head, body, operating arm, hand, or the patient.” [0291] “3D scanning can be used for imaging of the patient and/or the surgical site and/or anatomic landmarks and/or pathologic structures and/or tissues (e.g. damaged or diseased cartilage or exposed subchondral bone) and/or the surgeon or interventionalist's hands and/or fingers and/or the OR table and/or reference areas or points and/or marker, e.g. optical markers, in the operating room and/or on the patient and/or on the surgical field. … One or more optical imaging systems or 3D scanners can, for example, be used to image and/or monitor, e.g. the coordinates, position, orientation, alignment, direction of movement, speed of movement of, Anatomic landmarks, patient surface(s), organ surface(s), tissue surface(s), pathologic tissues and/or surface(s), e.g. for purposes of registration, e.g. of the patient and/or the surgical site, e.g. one or more bones or cartilage, and/or one or more OHMD's, e.g. in a common coordinate system The surgeon or interventionalist's hands and/or fingers, e.g. for Monitoring steps in an interventional procedure. Select hand and/or finger movements can be associated with corresponding surgical steps. 
When the 3D scanner system detects a particular hand and/or finger movement, it can trigger the display of the corresponding surgical step or the next surgical step, e.g. by displaying a predetermined virtual path, e.g. for a catheter, a virtual instrument, a virtual device etc. … One or more OHMDs, e.g. registered in a common coordinate system, e.g. with the surgical site and/or the surgeon or interventionalist's hands and/or fingers.” [0235-0240]) Lang does not explicitly teach that assessing the impact of surgical hand position on a patient intervention site comprises assessing the impact of the surgical tool on the patient intervention site. Verard teaches this limitation (“Using this video capability, the object 402 (e.g., a computer aided design rendering, model, scan, etc. for an instrument, medical device, implant, etc.) may be independently manipulated relative to the hologram 124 on the display 158 or in the air. In this way, the object 402 can be placed in or around the hologram 124 to determine whether the object will fit within a portion of the hologram 124, etc. For example, an implant may be placed through a blood vessel to test the fit visually.” [0056] “a fellow/practitioner could first select a program called "HIP" by activating a trigger point 504 to display a 3D CT image of a patient's (subject's) hip, and then select different "HIP IMPLANTS" from different manufacturers to see and "feel" which implant would fit best for the particular patient. It is also possible to use (e.g., physically hold and manipulate) the actual implant in the air and position it within the 3D holographic display to see, feel and assess fit (e.g., if and how well such implant may fit the particular patient).” [0059] “Robotics via a master/slave configuration can be used, where a shape sensed analog 604 of the device 602 moving within the display 158 is employed to actuate the motion of the actual device 602 within a target region 606. A practitioner's (surgeon's, physician's, etc.) 
hands 610 or voice can be tracked by sensor-based and/or voice-based techniques, such as by, e.g., tracking a physician's hands using a shape-sensing device 608 and shape sensing system 614 in the 3D holographic display. Accordingly, a practitioner's movements (including, e.g., (re)positioning, orientation, etc. of their hands) performed in the holographic display 158 can be transmitted to the device 602, such as a robot 612 (e.g., robotically controlled instruments) inside the patient to replicate such movements within the patient's body, and thereby perform the actual surgery, procedure or task inside the patient's body. Thus, a surgeon can see, touch and feel a 3D holographic display of an organ, perform a procedure thereon (i.e., within the 3D holographic display), causing such procedure to be performed inside of a patient on the actual organ via, or simply to move instruments, e.g., robotically controlled instruments.” [0062]) Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate the teachings of Verard into Lang, in order to improve accuracy and effectiveness in medical applications. 25. With reference to claim 19, Lang teaches applying the surgical tool positional information to the patient intervention site dataset (“Any of the 2D or 3D virtual data in any of the embodiments throughout the specification can include an arterial run-off, e.g. a peripheral run-off or an aortic run-off, optionally including bolus chasing. Bolus chasing can permit real-time visualization of the contrast bolus so that it can be followed, for example, peripherally with images, e.g. digital images, being acquired at a suitable frame rate. If the run-off and/or bolus chasing includes movement of the table on which the patient is positioned during the image acquisition, any 2D or 3D virtual data, e.g. from a pre-operative imaging study, e.g. 
a CTA or MRA or ultrasound, can be moved with the patient and/or table movement to maintain superimposition and/or alignment of a first 2D or 3D virtual data set, e.g. from the pre-operative imaging study, displayed by one or more OHMDs with a second or additional 2D or 3D virtual data sets, e.g. images obtained during the run-off and/or bolus chasing, displayed by the one or more OHMDs or a separate, standalone computer monitor using any of the combinations outlined in exemplary form in Tables 11, 13 and 14. The moving of the 2D or 3D virtual data, e.g. from a first set of virtual data from a pre-operative imaging test such as a CTA, MRA or ultrasound, displayed, using a computer processor, by the one or more OHMDs can be accomplished using the image registration including 3D-2D transformation matrices or 3D-3D registration techniques and/or the known table movement, e.g. in x, y, and/or z-direction, and/or the measured movement of one or more markers applied to the table or the patient. The image registration can include selection and/or matching and/or superimposition and/or alignment of a different volume of interest from a pre-operative vascular imaging study, e.g. an ultrasound, echocardiogram, CTA or MRA, to match the run-off and/or bolus chasing images and to maintain the superimposition and/or alignment of the first virtual dataset displayed by the one or more OHMDs with the second virtual data set, i.e. the run-off and/or bolus chasing images, e.g. displayed by the one or more OHMDs or a standalone or separate computer monitor or display.” [1043] “With guidance in mixed reality environment, e.g. with stereoscopic display like an electronic holographic environment, a virtual surgical guide, tool, instrument or implant can be superimposed onto the surgical site, e.g. an organ or a tumor. 
Further, the physical guide, tool, instrument or implant can be aligned with the 2D or 3D representation of the virtual surgical guide, tool, instrument or implant.” [0305] “A combination of data can be beneficial for more accurate measurement of changes in position or orientation of the surgeon or interventionalist's head, body, operating arm, hand, or the patient.” [0291] “3D scanning can be used for imaging of the patient and/or the surgical site and/or anatomic landmarks and/or pathologic structures and/or tissues (e.g. damaged or diseased cartilage or exposed subchondral bone) and/or the surgeon or interventionalist's hands and/or fingers and/or the OR table and/or reference areas or points and/or marker, e.g. optical markers, in the operating room and/or on the patient and/or on the surgical field. … One or more optical imaging systems or 3D scanners can, for example, be used to image and/or monitor, e.g. the coordinates, position, orientation, alignment, direction of movement, speed of movement of, Anatomic landmarks, patient surface(s), organ surface(s), tissue surface(s), pathologic tissues and/or surface(s), e.g. for purposes of registration, e.g. of the patient and/or the surgical site, e.g. one or more bones or cartilage, and/or one or more OHMD's, e.g. in a common coordinate system The surgeon or interventionalist's hands and/or fingers, e.g. for Monitoring steps in an interventional procedure. Select hand and/or finger movements can be associated with corresponding surgical steps. When the 3D scanner system detects a particular hand and/or finger movement, it can trigger the display of the corresponding surgical step or the next surgical step, e.g. by displaying a predetermined virtual path, e.g. for a catheter, a virtual instrument, a virtual device etc. … One or more OHMDs, e.g. registered in a common coordinate system, e.g. 
with the surgical site and/or the surgeon or interventionalist's hands and/or fingers.” [0235-0240]) Lang does not explicitly teach that assessing the impact of the surgical tool on the patient intervention site further comprises selecting the surgical tool based upon assessing the impact of the surgical tool. Verard teaches this limitation (“Using this video capability, the object 402 (e.g., a computer aided design rendering, model, scan, etc. for an instrument, medical device, implant, etc.) may be independently manipulated relative to the hologram 124 on the display 158 or in the air. In this way, the object 402 can be placed in or around the hologram 124 to determine whether the object will fit within a portion of the hologram 124, etc. For example, an implant may be placed through a blood vessel to test the fit visually.” [0056] “a fellow/practitioner could first select a program called "HIP" by activating a trigger point 504 to display a 3D CT image of a patient's (subject's) hip, and then select different "HIP IMPLANTS" from different manufacturers to see and "feel" which implant would fit best for the particular patient. It is also possible to use (e.g., physically hold and manipulate) the actual implant in the air and position it within the 3D holographic display to see, feel and assess fit (e.g., if and how well such implant may fit the particular patient).” [0059] “Robotics via a master/slave configuration can be used, where a shape sensed analog 604 of the device 602 moving within the display 158 is employed to actuate the motion of the actual device 602 within a target region 606. A practitioner's (surgeon's, physician's, etc.) hands 610 or voice can be tracked by sensor-based and/or voice-based techniques, such as by, e.g., tracking a physician's hands using a shape-sensing device 608 and shape sensing system 614 in the 3D holographic display. Accordingly, a practitioner's movements (including, e.g., (re)positioning, orientation, etc. 
of their hands) performed in the holographic display 158 can be transmitted to the device 602, such as a robot 612 (e.g., robotically controlled instruments) inside the patient to replicate such movements within the patient's body, and thereby perform the actual surgery, procedure or task inside the patient's body. Thus, a surgeon can see, touch and feel a 3D holographic display of an organ, perform a procedure thereon (i.e., within the 3D holographic display), causing such procedure to be performed inside of a patient on the actual organ via, or simply to move instruments, e.g., robotically controlled instruments.” [0062]) Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate the teachings of Verard into Lang, in order to improve accuracy and effectiveness in medical applications. 26. With reference to claim 20, Lang teaches that applying the surgical tool positional information to the patient intervention site dataset comprises calculating a 3-dimensional image of the patient intervention site. (“Calculate 3D transformation of first 3D vascular tree representation using transformation T” [1398] “Any of the 2D or 3D virtual data in any of the embodiments throughout the specification can include an arterial run-off, e.g. a peripheral run-off or an aortic run-off, optionally including bolus chasing. Bolus chasing can permit real-time visualization of the contrast bolus so that it can be followed, for example, peripherally with images, e.g. digital images, being acquired at a suitable frame rate. If the run-off and/or bolus chasing includes movement of the table on which the patient is positioned during the image acquisition, any 2D or 3D virtual data, e.g. from a pre-operative imaging study, e.g. a CTA or MRA or ultrasound, can be moved with the patient and/or table movement to maintain superimposition and/or alignment of a first 2D or 3D virtual data set, e.g. 
from the pre-operative imaging study, displayed by one or more OHMDs with a second or additional 2D or 3D virtual data sets, e.g. images obtained during the run-off and/or bolus chasing, displayed by the one or more OHMDs or a separate, standalone computer monitor using any of the combinations outlined in exemplary form in Tables 11, 13 and 14. The moving of the 2D or 3D virtual data, e.g. from a first set of virtual data from a pre-operative imaging test such as a CTA, MRA or ultrasound, displayed, using a computer processor, by the one or more OHMDs can be accomplished using the image registration including 3D-2D transformation matrices or 3D-3D registration techniques and/or the known table movement, e.g. in x, y, and/or z-direction, and/or the measured movement of one or more markers applied to the table or the patient. The image registration can include selection and/or matching and/or superimposition and/or alignment of a different volume of interest from a pre-operative vascular imaging study, e.g. an ultrasound, echocardiogram, CTA or MRA, to match the run-off and/or bolus chasing images and to maintain the superimposition and/or alignment of the first virtual dataset displayed by the one or more OHMDs with the second virtual data set, i.e. the run-off and/or bolus chasing images, e.g. displayed by the one or more OHMDs or a standalone or separate computer monitor or display.” [1043] “With guidance in mixed reality environment, e.g. with stereoscopic display like an electronic holographic environment, a virtual surgical guide, tool, instrument or implant can be superimposed onto the surgical site, e.g. an organ or a tumor. 
Further, the physical guide, tool, instrument or implant can be aligned with the 2D or 3D representation of the virtual surgical guide, tool, instrument or implant.” [0305] “A combination of data can be beneficial for more accurate measurement of changes in position or orientation of the surgeon or interventionalist's head, body, operating arm, hand, or the patient.” [0291] “A processor receives the electrical signals and calculates three-dimensional position information of tissue surfaces based on changes in the relative phase of the emitted optical radiation and the received optical radiation scattered by the surfaces.” [0252] “3D scanning can be used for imaging of the patient and/or the surgical site and/or anatomic landmarks and/or pathologic structures and/or tissues (e.g. damaged or diseased cartilage or exposed subchondral bone) and/or the surgeon or interventionalist's hands and/or fingers and/or the OR table and/or reference areas or points and/or marker, e.g. optical markers, in the operating room and/or on the patient and/or on the surgical field. … One or more optical imaging systems or 3D scanners can, for example, be used to image and/or monitor, e.g. the coordinates, position, orientation, alignment, direction of movement, speed of movement of, Anatomic landmarks, patient surface(s), organ surface(s), tissue surface(s), pathologic tissues and/or surface(s), e.g. for purposes of registration, e.g. of the patient and/or the surgical site, e.g. one or more bones or cartilage, and/or one or more OHMD's, e.g. in a common coordinate system The surgeon or interventionalist's hands and/or fingers, e.g. for Monitoring steps in an interventional procedure. Select hand and/or finger movements can be associated with corresponding surgical steps. When the 3D scanner system detects a particular hand and/or finger movement, it can trigger the display of the corresponding surgical step or the next surgical step, e.g. by displaying a predetermined virtual path, e.g. 
for a catheter, a virtual instrument, a virtual device etc. … One or more OHMDs, e.g., registered in a common coordinate system, e.g. with the surgical site and/or the surgeon or interventionalist's hands and/or fingers.” [0235-0240])

Lang does not explicitly teach assess impact of surgical tool position on a patient intervention site. This is what Verard teaches (“Using this video capability, the object 402 (e.g., a computer aided design rendering, model, scan, etc. for an instrument, medical device, implant, etc.) may be independently manipulated relative to the hologram 124 on the display 158 or in the air. In this way, the object 402 can be placed in or around the hologram 124 to determine whether the object will fit within a portion of the hologram 124, etc. For example, an implant may be placed through a blood vessel to test the fit visually.” [0056] “a fellow/practitioner could first select a program called "HIP" by activating a trigger point 504 to display a 3D CT image of a patient's (subject's) hip, and then select different "HIP IMPLANTS" from different manufacturers to see and "feel" which implant would fit best for the particular patient. It is also possible to use (e.g., physically hold and manipulate) the actual implant in the air and position it within the 3D holographic display to see, feel and assess fit (e.g., if and how well such implant may fit the particular patient).” [0059] “Robotics via a master/slave configuration can be used, where a shape sensed analog 604 of the device 602 moving within the display 158 is employed to actuate the motion of the actual device 602 within a target region 606. A practitioner's (surgeon's, physician's, etc.) hands 610 or voice can be tracked by sensor-based and/or voice-based techniques, such as by, e.g., tracking a physician's hands using a shape-sensing device 608 and shape sensing system 614 in the 3D holographic display.
Accordingly, a practitioner's movements (including, e.g., (re)positioning, orientation, etc. of their hands) performed in the holographic display 158 can be transmitted to the device 602, such as a robot 612 (e.g., robotically controlled instruments) inside the patient to replicate such movements within the patient's body, and thereby perform the actual surgery, procedure or task inside the patient's body. Thus, a surgeon can see, touch and feel a 3D holographic display of an organ, perform a procedure thereon (i.e., within the 3D holographic display), causing such procedure to be performed inside of a patient on the actual organ via, or simply to move instruments, e.g., robotically controlled instruments.” [0062])

Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of Verard into Lang, in order to improve accuracy and effectiveness in medical applications.

Conclusion

27. Any inquiry concerning this communication or earlier communications from the examiner should be directed to Michelle Chin whose telephone number is (571)270-3697. The examiner can normally be reached on Monday-Friday 8:00 AM-4:30 PM.

Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Kent Chang can be reached on (571)272-7667. The fax phone number for the organization where this application or proceeding is assigned is (571)273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users.
To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/MICHELLE CHIN/
Primary Examiner, Art Unit 2614
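The Lang passages quoted in paragraph 26 describe keeping a pre-operative 3D dataset superimposed on the patient by combining an image-registration transform with the known table movement in x, y, and/or z. The sketch below is a minimal illustration of that general technique using 4x4 homogeneous rigid transforms; all function names and numeric values here are hypothetical examples, not taken from the Lang or Verard references.

```python
import numpy as np

def make_rigid_transform(rotation: np.ndarray, translation: np.ndarray) -> np.ndarray:
    """Build a 4x4 homogeneous transform from a 3x3 rotation and a 3-vector translation."""
    T = np.eye(4)
    T[:3, :3] = rotation
    T[:3, 3] = translation
    return T

def apply_transform(T: np.ndarray, points: np.ndarray) -> np.ndarray:
    """Apply a 4x4 homogeneous transform to an (N, 3) array of points."""
    homogeneous = np.hstack([points, np.ones((points.shape[0], 1))])
    return (homogeneous @ T.T)[:, :3]

# Registration transform from a 3D-3D registration step
# (identity rotation for brevity; translation in mm).
T_register = make_rigid_transform(np.eye(3), np.array([10.0, -5.0, 2.0]))

# Known table movement during run-off / bolus chasing, e.g. 50 mm in y.
T_table = make_rigid_transform(np.eye(3), np.array([0.0, 50.0, 0.0]))

# Hypothetical pre-operative vascular landmark points (mm).
preop_points = np.array([[0.0, 0.0, 0.0],
                         [1.0, 2.0, 3.0]])

# Compose: first register the pre-op data to the intra-operative frame,
# then follow the table movement, so the virtual dataset stays
# superimposed on the run-off images.
display_points = apply_transform(T_table @ T_register, preop_points)
print(display_points)
```

Because both transforms here are pure translations, the composition simply shifts each landmark by the summed offsets; with a nontrivial rotation the order of composition (table movement applied after registration) would matter.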

Prosecution Timeline

Oct 16, 2025
Application Filed
Feb 18, 2026
Non-Final Rejection — §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12602870
COMPUTER-AIDED TECHNIQUES FOR DESIGNING 3D SURFACES BASED ON GRADIENT SPECIFICATIONS
2y 5m to grant Granted Apr 14, 2026
Patent 12597205
HYBRID GPU-CPU APPROACH FOR MESH GENERATION AND ADAPTIVE MESH REFINEMENT
2y 5m to grant Granted Apr 07, 2026
Patent 12592041
MIXED SHEET EXTENSION
2y 5m to grant Granted Mar 31, 2026
Patent 12586287
Method of Operating Shared GPU Resource and a Shared GPU Device
2y 5m to grant Granted Mar 24, 2026
Patent 12579700
METHODS OF IMPERSONATION IN STREAMING MEDIA
2y 5m to grant Granted Mar 17, 2026
Study what these applicants changed to get past this examiner. Based on the 5 most recent grants.


Prosecution Projections

1-2
Expected OA Rounds
85%
Grant Probability
97%
With Interview (+11.5%)
2y 4m
Median Time to Grant
Low
PTA Risk
Based on 634 resolved cases by this examiner. Grant probability derived from career allow rate.
