DETAILED ACTION
Notice of Pre-AIA or AIA Status
Claims 1-20 are pending in this application. The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
Specification
The title of the invention is not descriptive. A new title is required that is clearly indicative of the invention to which the claims are directed.
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 1, 2 and 6-20 are rejected under 35 U.S.C. 103 as being unpatentable over Gilles et al. (US PGPub US 2021/0263468A1), hereinafter referred to as “Gilles”, in view of Steines et al. (US PGPub US 2022/0287676A1), hereinafter referred to as “Steines”.
Consider Claims 1, 16 and 20.
Gilles teaches:
1. A method of registering a coordinate frame of a head mounted display (HMD) with a second coordinate frame, the method comprising: / 16. An apparatus comprising: a processor; a memory storing instructions that, when executed by the processor, perform a method comprising: / 20. A head mounted display (HMD) comprising: a processor; a memory storing instructions that, when executed by the processor, perform a method comprising: (Gilles: abstract, Disclosed is a method for generating a digital hologram representing a 3D scene including an object, an object being defined by points and their associated intensity. For each object, a prior step of calculating an “omnidirectional” angular spectrum of the light field emitted by an object in the scene on the surface of a geometric solid centered on the object, a surface of the solid being sampled according to a predetermined grid, a sample of the grid being associated with a vector frequency; for the scene, the following steps: —obtaining a pose of an observer in the world frame of reference; —deriving the hologram from the scene as a function of the pose obtained from the “multidirectional” angular spectra calculated for each object. The step of calculating an angular spectrum of the light field for each object of the scene takes into account predetermined viewing directions. [0075] In relation with FIG. 4, it will be described a method for generating a hologram of a 3D scene comprising several objects Ob1, Ob2, . . . , ObN with N a non-null integer, according to an embodiment of the invention. [0076] In the following, a three-dimensional (3D) world reference frame Rm is considered, in which a display plane of a hologram of a 3D scene and an observer U of the hologram can be located. [0077])
1. receiving sensor data depicting a wearer of the HMD pointing at, gazing at or touching a real world feature, where a three dimensional (3D) position of the real world feature in the second coordinate frame is known; / 16. requesting a wearer of an HMD to point at, or gaze at, or touch, a real world feature; receiving sensor data captured by the HMD and depicting the wearer of the HMD pointing at, gazing at or touching the real world feature, where a 3D position of the real world feature in a second coordinate frame is known; / 20. requesting a wearer of the HMD to point at, or gaze at, or touch, a real world feature; using the HMD, capturing sensor data depicting the wearer of the HMD pointing at, gazing at or touching the real world feature, where the 3D position of the real world feature in the second coordinate frame is known; (Gilles: [0078]-[0081] The omnidirectional angular spectrum corresponds to the plane-wave decomposition of the light field emitted by each object Ob1, Ob2 . . . ObN of the scene Sc. Each plane wave is represented by a 3D frequency coordinate f=(fx,fy,fz) corresponding to its direction of propagation, and by its complex amplitude. During this pre-calculation step, the occultations of the scene are taken into account in an approximative way, i.e. for a limited number of points of view, corresponding to a number of viewing directions. [0082] This step E1 will be described in more detail in relation with FIG. 6; a step E2 of obtaining a pose of an observer U. The pose generally comprises 6 parameters, 3 angles of rotation and 3 parameters of translation. The observer U is for example a user of a near-eye holographic display system, such as augmented reality glasses or a head-mounted display HMD. This kind of device is generally associated with a virtual reality system comprising a module for tracking the helmet position. This system can be based, for example, on a reflective ball fastened on the helmet, this ball being locatable using an infra-red technology, or also on an inertial and magnetic sensor. The position and orientation tracking information is transmitted to the helmet or to a device of the virtual reality system adapted to implement the method according to the invention. [0110])
1. computing a 3D position of the real world feature, in the coordinate frame of the HMD, from the sensor data; / 16. computing a 3D position of the real world feature, in the coordinate frame of the HMD, from the received sensor data; / 20. computing a 3D position of the real world feature, in the coordinate frame of the HMD, from the captured sensor data; (Gilles: [0032] According to an aspect of the invention, the step of determining the sub-set of non-occulted points comprises the following sub-steps: 2D+Z rendering the object by projecting the points of the object along the viewing direction and forming an intensity image and a depth map of the projected points; and inversely projecting the points of the formed intensity image to points of the object in a reference frame of the object and calculating the coordinates thereof as a function of the depth map. [0110] Steps E0 and E1 are repeated for each object Obi of the scene. [0111] It is supposed that, at E2, a pose of the observer U is obtained, from which a position of the plane PH of the hologram to be generated is derived, as described hereinabove. [0112] In relation with FIG. 10, the step E3 of deriving the hologram from the plurality of pre-calculated angular spectra SAi will now be described in detail.)
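Examiner's note: by way of illustration only (this sketch is not taken from either reference; the pinhole camera model and the intrinsics fx, fy, cx, cy are assumptions made for exposition), the 2D+Z rendering and inverse projection described by Gilles at [0032] amount to projecting object points along the viewing direction while recording a depth map, then back-projecting pixels of the intensity image into 3D coordinates from that depth map:

import numpy as np

def project_points(points, fx, fy, cx, cy):
    # Pinhole projection of 3D points (given in the viewing frame) to
    # integer pixel coordinates plus the depth Z of each point.
    X, Y, Z = points[:, 0], points[:, 1], points[:, 2]
    u = np.round(fx * X / Z + cx).astype(int)
    v = np.round(fy * Y / Z + cy).astype(int)
    return u, v, Z

def back_project(u, v, depth, fx, fy, cx, cy):
    # Inverse projection: a pixel (u, v) and its depth-map value are
    # mapped back to a 3D point in the same viewing frame.
    X = (u - cx) * depth / fx
    Y = (v - cy) * depth / fy
    return np.stack([X, Y, depth], axis=-1)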
1. storing a correspondence comprising: the 3D position of the real world feature in the coordinate frame of the HMD, and a 3D position of the real world feature in the second coordinate frame; / 16. storing a correspondence comprising: the 3D position of the real world feature in the coordinate frame of the HMD, and a 3D position of the real world feature in the second coordinate frame; / 20. storing a correspondence comprising: the 3D position of the real world feature in the coordinate frame of the HMD, and a 3D position of the real world feature in the second coordinate frame; (Gilles: [0113] Let H be the hologram to be calculated. Its resolution is given by (Nx,Ny), and the size of its pixels is p. Typically, the holographic screens (SLMs) currently available, of the Spatial Light Modulator or SLM type, have a maximum resolution of the order of (3840,2160) and a minimum pixel size of the order of 3.74 μm. [0114] In relation with FIG. 11, let's note Rh=(O; x⃗h, y⃗h, z⃗h), the local reference frame of the hologram, whose origin is located at the center of the hologram H, whose axes defined by x⃗h and y⃗h coincide with the horizontal and vertical axes of the hologram, respectively, and whose axis defined by z⃗h coincide with the optical axis of the hologram. Let's note [equation image omitted])
1. repeating the receiving, computing and storing for a second real world feature so that a second correspondence is stored; / 16. repeating the requesting, receiving, computing and storing for a second real world feature so that a second correspondence is stored; / 20. repeating the requesting, using, computing and storing for a second real world feature so that a second correspondence is stored; (Gilles: [0106] The three sub-steps E10, E11, E12 are repeated for each angular direction Vj, j∈{1 . . . M}. At E13, the omnidirectional angular spectrum SAi of the object Obi is then calculated by accumulating the amplitudes calculated for all the views vj with j∈{1 . . . M}, as follows: [0115] Finally, let's note th=(x0,y0,z0), the position of the center of the hologram in the world reference frame. The step of deriving the hologram H from the angular spectrum is decomposed into 2 sub-steps: Sub-step E31: It consists in constructing the angular spectrum of the hologram Ĥ=ℱ{H}, from angular spectra SAi pre-calculated for the N objects Obi of the scene. [0117] The angular spectrum of the hologram is sampled on a regular grid of resolution (Nx,Ny), with a sampling pitch of (Nxp)−1 and (Nyp)−1, in the horizontal and vertical directions, respectively. Hence, the frequencies fhx, fhy of the hologram are comprised between −½p and ½p. [0118] The angular spectrum of the hologram is given by the following formula: [equation image omitted])
1. registering the coordinate frame of the HMD and the second coordinate frame by computing registration information mapping between the coordinate frame of the HMD and the second coordinate frame, the registration information computed from the correspondences. / 16. registering the coordinate frame of the HMD and the second coordinate frame by computing registration information mapping between the coordinate frame of the HMD and the second coordinate frame, the registration information computed from the correspondences, a gravity direction and a scale. / 20. registering the coordinate frame of the HMD and the second coordinate frame by computing registration information mapping between the coordinate frame of the HMD and the second coordinate frame, the registration information computed from the correspondences, a gravity direction and a scale. (Gilles: [0088] At E11, for the viewing direction Vj, a set of points of the object Obi that are not occulted is determined. For that purpose, according to this embodiment, a 2D+Z rendering in perspective or orthographic projection of the object i is performed at E111, as illustrated by FIG. 7. For that purpose, it is made use of the 3×4 homogeneous coordinate projection matrix given by [matrix image omitted]
[0089] At the end of this rendering, we obtain an intensity image Ii,j and a depth map Di,j associated with the view vj.[0090] To calculate the occultations in the viewing direction Vj, it is for example made use of a technique of the painter algorithm or also Z-buffer type, known from the person skilled in the art. For an object Obi of the scene, the depth of a pixel (Z coordinate) is stored in a buffer (Z-buffer), which herein corresponds to the depth map Di,j. This map is a two-dimensional image (X and Y), each element corresponding to a pixel of the intensity image Ii,j of the object Obi for the view vj. If another element of the object must be displayed at the same coordinates (X, Y), the two depths (Z) are compared to each other, and the pixel that is the nearest of the camera is held. The value Z of this pixel is then placed in the depth map, hence replacing the old one. Finally, the drawn image reproduces the perception of the usual and logical depth, the nearest object hiding the farthest ones.)
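Examiner's note: the painter/Z-buffer technique quoted above (Gilles [0090]) keeps, for each pixel, only the point nearest the camera. A minimal illustrative sketch of that bookkeeping follows (not taken from Gilles; the function name and array conventions are assumptions):

import numpy as np

def zbuffer_render(u, v, depth, intensity, width, height):
    # Keep the nearest point per pixel: the Z-buffer comparison that
    # Gilles [0090] describes for approximating occultations per view.
    D = np.full((height, width), np.inf)   # depth map (the Z-buffer)
    I = np.zeros((height, width))          # intensity image
    for ui, vi, zi, ci in zip(u, v, depth, intensity):
        if 0 <= ui < width and 0 <= vi < height and zi < D[vi, ui]:
            D[vi, ui] = zi                 # nearer point replaces the old depth
            I[vi, ui] = ci
    return I, D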
However, Gilles does not explicitly teach: 1/16/20. registering the coordinate frame of the HMD and the second coordinate frame by computing registration information mapping between the coordinate frame of the HMD and the second coordinate frame, the registration information computed from the correspondences.
Steines teaches:
1. A method of registering a coordinate frame of a head mounted display (HMD) with a second coordinate frame, the method comprising: / 16. An apparatus comprising: a processor; a memory storing instructions that, when executed by the processor, perform a method comprising: / 20. A head mounted display (HMD) comprising: a processor; a memory storing instructions that, when executed by the processor, perform a method comprising: (Steines: abstract, Systems, devices and methods for augmented reality guidance of surgical procedures using multiple head mounted displays or other augmented reality display devices are described. In addition, systems, devices and methods using head mounted displays or other augmented reality display systems for operating surgical robots and/or imaging systems are described.)
1. receiving sensor data depicting a wearer of the HMD pointing at, gazing at or touching a real world feature, where a three dimensional (3D) position of the real world feature in the second coordinate frame is known; / 16. requesting a wearer of an HMD to point at, or gaze at, or touch, a real world feature; receiving sensor data captured by the HMD and depicting the wearer of the HMD pointing at, gazing at or touching the real world feature, where a 3D position of the real world feature in a second coordinate frame is known; / 20. requesting a wearer of the HMD to point at, or gaze at, or touch, a real world feature; using the HMD, capturing sensor data depicting the wearer of the HMD pointing at, gazing at or touching the real world feature, where the 3D position of the real world feature in the second coordinate frame is known; (Steines: [0346] In some embodiments, one or more HMDs or other augmented reality display systems and one or more physical tools or physical instruments, e.g. a pointer, a stylus, other tools, other instruments, can be tracked, e.g. using inside-out or outside-in tracking. The coordinates, position and/or orientation of a virtual display comprising a virtual interface displayed by one or more HMDs or other augmented reality display systems can also be tracked. One or more computer processors can be configured, using one or more collision detection modules, to detect collisions between a gaze (e.g. using gaze tracking, gaze lock), a finger (e.g. using finger/hand tracking), a hand (e.g. using hand tracking), an eye (e.g. using eye tracking), one or more tracked physical tools or physical instruments, e.g. a tracked pointer, a tracked stylus, other tracked physical tools, other tracked physical instruments, or a combination thereof and a virtual display comprising the virtual interface, e.g. one or more virtual objects such as virtual button, virtual field, virtual cursor, virtual pointer, virtual slider, virtual trackball, virtual node, virtual numeric display, virtual touchpad, virtual keyboard, or a combination thereof. [0350], [0362] The tracking system 1300 can be configured to track 1410 optionally a hand and/or finger 1380, a pointer, tool, instrument and/or implant 1390, optionally a first 1360, second 1370, third, fourth, fifth etc. head mounted display, a patient (not shown), and/or a robot 1340 and/or an imaging system 1350 or a combination thereof, for example with use of the first computer system CS # 1 1310 and/or optionally with use of a second computer system CS # 2 1320, a third computer CS # 3 1330, and/or additional computer systems. [0363] The tracking system 1300 can be configured to track 1410 optionally a hand and/or finger 1380, a pointer, tool, instrument and/or implant 1390, optionally a first 1360, second 1370, third, fourth, fifth etc. head mounted display, a patient (not shown), and/or a robot 1340 and/or an imaging system 1350 or a combination thereof, and to transmit the tracking information 1415 to a first computer system CS # 1 1310 and/or optionally a second computer system CS # 2 1320, a third computer CS # 3 1330, and/or additional computer systems. )
1. computing a 3D position of the real world feature, in the coordinate frame of the HMD, from the sensor data; / 16. computing a 3D position of the real world feature, in the coordinate frame of the HMD, from the received sensor data; / 20. computing a 3D position of the real world feature, in the coordinate frame of the HMD, from the captured sensor data; (Steines: [0392] The tracking 1410 can comprise recording one or more coordinates of and/or tracking a hand and/or finger 1380, a pointer, tool, instrument, and/or implant 1390, and/or a patient (not shown) by a first computer system CS # 1 1510, and/or a second computer CS # 2 1520, and/or additional computer systems, e.g. in a robot 1530 or imaging system 1540, for example using the camera(s) or scanner(s) 1300. The tracking 1410 can comprise recording one or more coordinates of a hand and/or finger 1380, e.g. of a user, a pointer, tool, instrument, and/or implant 1390, and/or the patient (not shown) by the first computer system CS # 1 1510, and/or the second computer CS # 2 1520, and/or additional computer systems, e.g. in a robot 1530 or imaging system 1540, for example using the camera or scanner 1300. The system can track a hand and/or finger 1380, a pointer, tool, instrument, and/or implant 1390, and/or the patient (not shown) by the first computer system CS # 1 1510, and/or the second computer CS # 2 1520, and/or additional computer systems, e.g. in a robot 1530 or imaging system 1540, for example using the tracking system with, for example, a camera or scanner 1300. [0393], [0394] The transmission can comprise positional data, display data and/or other data. The transmission 1430 can be to or from a first computer system 1510 to or from a second computer system 1520, to or from a third computer system, e.g. part of or coupled to a robot 1530, to or from a fourth, fifth or more computer system. The transmission can be to or from a second computer system 1520 to or from a first computer system 1510, to or from a third computer system 1530, to or from a fourth, fifth or more computer system. The transmission can be to or from a third computer system 1530 to or from a first computer system 1510, to or from a second computer system 1520, to or from a fourth, fifth or more computer system, and so forth. [0399])
1. storing a correspondence comprising: the 3D position of the real world feature in the coordinate frame of the HMD, and a 3D position of the real world feature in the second coordinate frame; / 16. storing a correspondence comprising: the 3D position of the real world feature in the coordinate frame of the HMD, and a 3D position of the real world feature in the second coordinate frame; / 20. storing a correspondence comprising: the 3D position of the real world feature in the coordinate frame of the HMD, and a 3D position of the real world feature in the second coordinate frame; (Steines: [0399] One or more computer systems, e.g. a first 1510, second 1520, or additional computer systems, e.g. also in a robot 1530 and/or an imaging system 1540, can be configured to generate and/or transmit, optionally wirelessly, one or more event message 1450. The one or more event message can, for example, be an event message relative to a collision detection 1445 between a hand or finger 1380, a tracked physical pointer, physical tool, physical instrument, physical implant component 1390 or a combination thereof and a virtual interface, e.g. detected using the tracking system 1300. The one or more event message 1450 can optionally comprise an event message related to a robot 1530 control, interface, force, and/or position and/or orientation and/or an imaging system 1540 control, interface, force, and/or position and/or orientation. [0400] One or more computer systems, e.g. a first 1510, second 1520, third, fourth, fifth etc. or additional computer systems, e.g. also in a robot 1530 and/or an imaging system 1540, can comprise an optional event handler 1460 configured to handle, manage and/or process one or more optional event message 1450.)
1. repeating the receiving, computing and storing for a second real world feature so that a second correspondence is stored; / 16. repeating the requesting, receiving, computing and storing for a second real world feature so that a second correspondence is stored; / 20. repeating the requesting, using, computing and storing for a second real world feature so that a second correspondence is stored; (Steines: [0408] In some embodiments, the second computing system can be configured to wirelessly transmit the real-time tracking information of the component of the robot, the end effector, a target object, a target anatomic structure of a patient, the at least one head mounted display, a physical tool, a physical instrument, a physical implant, a physical object, or any combination thereof. [0409] In some embodiments, a second computing system can be configured for displaying, by the at least one head mounted display, a 3D stereoscopic view. In some embodiments, the 3D stereoscopic view can be superimposed onto an anatomic structure of a patient. In some embodiments, the 3D stereoscopic view can comprise a predetermined trajectory of the end effector, a representation of a predetermined operating boundary of the end effector, a representation of a predetermined operating range of the end effector, a representation of a predetermined operating zone of the end effector, a representation of a predetermined operating volume of the end effector, or a combination thereof. In some embodiments, the 3D stereoscopic view can comprise a predetermined trajectory of the end effector, a representation of a predetermined operating boundary of the end effector, a representation of a predetermined operating range of the end effector, a representation of a predetermined operating zone of the end effector, a representation of a predetermined operating volume of the end effector or a combination thereof following the movement, activation, operation, de-activation or combination thereof of the robot component, robot motor, robot actuator, robot drive, robot controller, robot hydraulic system, robot piezoelectric system, robot switch, the end effector or any combination thereof. [0410] In some embodiments, a first computing system, a second computing system, or both can be configured to turn on or turn off the display of the virtual user interface. In some embodiments, a wireless transmission can comprise a Bluetooth signal, WiFi signal, LiFi signal, a radiofrequency signal, a microwave signal, an ultrasound signal, an infrared signal, an electromagnetic wave or any combination thereof. [0411] In some embodiments, a 3D stereoscopic view can comprise a predetermined trajectory of an end effector, a representation of a predetermined operating boundary of the end effector, a representation of a predetermined operating range of the end effector, a representation of a predetermined operating zone of the end effector, a representation of a predetermined operating volume of the end effector or a combination thereof prior to executing a command. [0414])
1. registering the coordinate frame of the HMD and the second coordinate frame by computing registration information mapping between the coordinate frame of the HMD and the second coordinate frame, the registration information computed from the correspondences. / 16. registering the coordinate frame of the HMD and the second coordinate frame by computing registration information mapping between the coordinate frame of the HMD and the second coordinate frame, the registration information computed from the correspondences, a gravity direction and a scale. / 20. registering the coordinate frame of the HMD and the second coordinate frame by computing registration information mapping between the coordinate frame of the HMD and the second coordinate frame, the registration information computed from the correspondences, a gravity direction and a scale. (Steines: [0096] FIG. 11 illustrates a non-limiting example of registering a digital hologram for an initial surgical step, performing the surgical step and re-registering one or more digital holograms for subsequent surgical steps according to some embodiments of the present disclosure. EXAMPLE 1 – USE OF MULTIPLE HEAD MOUNTED DISPLAYS [0476] In some embodiments, multiple head mounted displays can be used. Head mounted displays (HMD) can be video see-through head mounted displays or optical see-through head mounted displays. Referring to FIG. 9, a system 10 for using multiple HMDs 11, 12, 13, 14 or other augmented reality display systems for multiple viewer's, e.g. a primary surgeon, second surgeon, surgical assistant(s) and/or nurses(s) is shown; video see-through head mounted displays could also be used in any of the embodiments. The multiple HMDs or other augmented reality display systems can be registered in a common coordinate system 15 using anatomic structures, anatomic landmarks, calibration phantoms, reference phantoms, optical markers, navigation markers, and/or spatial anchors, for example like the spatial anchors used by the Microsoft Hololens. Pre-operative data 16 of the patient can also be registered in the common coordinate system 15. Live data 18 of the patient, for example from the surgical site, e.g. a spine, optionally with minimally invasive access, a hip arthrotomy site, a knee arthrotomy site, a bone cut, an altered surface can be measured, for example using one or more IMU's, optical markers, navigation markers, image or video capture systems and/or spatial anchors. The live data 18 of the patient can be registered in the common coordinate system 15. Intra-operative imaging studies 20 can be registered in the common coordinate system 15. OR references, e.g. an OR table or room fixtures can be registered in the common coordinate system 15 using, for example, optical markers IMU's, navigation markers or spatial mapping 22. The pre-operative data 16 or live data 18 including intra-operative measurements or combinations thereof can be used to develop, generate or modify a virtual surgical plan 24. The virtual surgical plan 24 can be registered in the common coordinate system 15. The HMDs 11, 12, 13, 14 or other augmented reality display systems can project digital holograms of the virtual data or virtual data into the view of the left eye using the view position and orientation of the left eye 26 and can project digital holograms of the virtual data or virtual data into the view of the right eye using the view position and orientation of the right eye 28 of each user, resulting in a shared digital holographic experience 30.
Using a virtual or other interface, the surgeon wearing HMD 1 11 can execute commands 32, e.g. to display the next predetermined bone cut, e.g. from a virtual surgical plan or an imaging study or intra-operative measurements, which can trigger the HMDs 11, 12, 13, 14 to project digital holograms of the next surgical step 34 superimposed onto and aligned with the surgical site in a predetermined position and/or orientation. [0477] Virtual data of the patient can be projected superimposed onto live data of the patient for each individual viewer by each individual HMD for their respective view angle or perspective by registering live data of the patient, e.g. the surgical field, and virtual data of the patient as well as each HMD in a common, shared coordinate system. Thus, virtual data of the patient including aspects of a virtual surgical plan can remain superimposed and/or aligned with live data of the patient irrespective of the view angle or perspective of the viewer and alignment and/or superimposition can be maintained as the viewer moves his or her head or body.)
It would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to modify Gilles' method and system for generating a digital hologram representing a 3D scene including an object with the augmented reality guidance techniques of Steines. The determination of obviousness is predicated upon the following findings: one skilled in the art would have been motivated to modify Gilles' head mounted display for hologram generation of 3D scenes to leverage the augmented reality guidance of Steines, as both references are directed to the same field of endeavor, and the improvement would enhance the overall accuracy of the digital representation. Furthermore, the prior art collectively includes each claimed element (though not all in the same reference), and one of ordinary skill in the art could have combined the elements in the manner explained above using known engineering design, interface and programming techniques, without changing a "fundamental" operating principle of Gilles, while the teachings of Steines continue to perform the same function as originally taught prior to the combination, in order to produce the repeatable and predictable result of accurate augmented reality representation in head mounted displays. It is for at least the aforementioned reasons that the examiner has reached a conclusion of obviousness with respect to the claims in question.
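Examiner's note: by way of illustration only (neither reference is quoted as disclosing this particular computation), registration information mapping one coordinate frame onto another from stored 3D-3D correspondences is conventionally obtained by a least-squares alignment such as the Kabsch/Umeyama method, which yields a rotation, a translation and, as recited in claims 16 and 20, a scale. The function name below is an illustrative assumption:

import numpy as np

def umeyama(src, dst):
    # Least-squares similarity transform such that dst ≈ s * R @ src + t.
    # src, dst: (N, 3) corresponding 3D points in the two frames, N >= 3.
    mu_s, mu_d = src.mean(0), dst.mean(0)
    src_c, dst_c = src - mu_s, dst - mu_d
    cov = dst_c.T @ src_c / len(src)
    U, S, Vt = np.linalg.svd(cov)
    d = np.sign(np.linalg.det(U @ Vt))      # guard against a reflection
    D = np.diag([1.0, 1.0, d])
    R = U @ D @ Vt                          # rotation between the frames
    s = np.trace(np.diag(S) @ D) / src_c.var(0).sum()   # scale
    t = mu_d - s * R @ mu_s                 # translation
    return s, R, t

With (s, R, t) in hand, a point p in the second coordinate frame is rendered at s * R @ p + t in the HMD frame, which is how a hologram can appear co-located for multiple wearers (cf. claim 8). Note also that two correspondences alone leave a full 3D rotation under-determined (a rotation about the axis through the two points remains free); the gravity direction and scale recited in claims 16 and 20 supply the missing constraints.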
Consider Claim 6.
The combination of Gilles and Steines teaches:
6. The method of claim 1 wherein the second coordinate frame is a coordinate frame of a second HMD, and wherein the real world feature is a fingertip of a wearer of the second HMD and the 3D position of the fingertip in the second coordinate frame is known from a tracking function of the second HMD, and comprising requesting the wearer of the HMD to touch the fingertip of the second HMD wearer; and wherein a gravity direction is known from the first HMD and from the second HMD. (Steines: [0341] For example, a tracked finger, tracked hand, tracked pointer, tracked instrument, tracked tool can interact with the virtual interface. The interaction can trigger an event message, optionally managed by an event handler, and/or a command. The interaction, event message, command or combination thereof can optionally be transmitted and/or received between a first and a second computing system, for example a first computing system (e.g. communicably connected to a navigation system, a robot, and/or an imaging system) and a second (or more) computing system(s) (communicably connected to one or more HMDs or other augmented reality display devices. [0346] In some embodiments, one or more HMDs or other augmented reality display systems and one or more physical tools or physical instruments, e.g. a pointer, a stylus, other tools, other instruments, can be tracked, e.g. using inside-out or outside-in tracking. The coordinates, position and/or orientation of a virtual display comprising a virtual interface displayed by one or more HMDs or other augmented reality display systems can also be tracked. One or more computer processors can be configured, using one or more collision detection modules, to detect collisions between a gaze (e.g. using gaze tracking, gaze lock), a finger (e.g. using finger/hand tracking), a hand (e.g. using hand tracking), an eye (e.g. using eye tracking), one or more tracked physical tools or physical instruments, e.g. a tracked pointer, a tracked stylus, other tracked physical tools, other tracked physical instruments, or a combination thereof and a virtual display comprising the virtual interface, e.g. one or more virtual objects such as virtual button, virtual field, virtual cursor, virtual pointer, virtual slider, virtual trackball, virtual node, virtual numeric display, virtual touchpad, virtual keyboard, or a combination thereof.)
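Examiner's note: purely as an illustrative aside (an assumption about conventional practice, not a teaching quoted from either reference), when the gravity direction is known in both HMD frames, as claim 6 recites, each frame can be pre-rotated so that gravity lies along the y axis; the remaining rotational unknown is a single yaw about gravity, which the fingertip correspondences then fix. The helper name below is hypothetical:

import numpy as np

def register_gravity_aligned(src, dst):
    # Both point sets are assumed already expressed in gravity-aligned
    # frames (y axis parallel to gravity), so only a yaw about y and a
    # translation remain. src, dst: (N, 3) correspondences, N >= 2.
    mu_s, mu_d = src.mean(0), dst.mean(0)
    a = (src - mu_s)[:, [0, 2]]             # horizontal (x, z) components
    b = (dst - mu_d)[:, [0, 2]]
    # 2D Kabsch: optimal in-plane rotation from cross and dot sums.
    cross = np.sum(a[:, 0] * b[:, 1] - a[:, 1] * b[:, 0])
    dot = np.sum(a[:, 0] * b[:, 0] + a[:, 1] * b[:, 1])
    phi = np.arctan2(cross, dot)
    c, s = np.cos(phi), np.sin(phi)
    R = np.eye(3)
    R[np.ix_([0, 2], [0, 2])] = np.array([[c, -s], [s, c]])  # yaw about gravity
    t = mu_d - R @ mu_s
    return R, t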
Consider Claim 7.
The combination of Gilles and Steines teaches:
7. The method of claim 1 wherein the real world feature is an edge of a surface in an environment of the HMD and comprising requesting the wearer of the HMD to touch an edge of a real world surface in the environment with their fingertip and to move their fingertip along the edge such that the received sensor data depicts the fingertip moving along the edge and such that registration between a coordinate frame of the HMD wearer and a coordinate frame of the surface is computed. (Steines: [0392] The tracking 1410 can comprise recording one or more coordinates of and/or tracking a hand and/or finger 1380, a pointer, tool, instrument, and/or implant 1390, and/or a patient (not shown) by a first computer system CS # 1 1510, and/or a second computer CS # 2 1520, and/or additional computer systems, e.g. in a robot 1530 or imaging system 1540, for example using the camera(s) or scanner(s) 1300. The tracking 1410 can comprise recording one or more coordinates of a hand and/or finger 1380, e.g. of a user, a pointer, tool, instrument, and/or implant 1390, and/or the patient (not shown) by the first computer system CS # 1 1510, and/or the second computer CS # 2 1520, and/or additional computer systems, e.g. in a robot 1530 or imaging system 1540, for example using the camera or scanner 1300. The system can track a hand and/or finger 1380, a pointer, tool, instrument, and/or implant 1390, and/or the patient (not shown) by the first computer system CS # 1 1510, and/or the second computer CS # 2 1520, and/or additional computer systems, e.g. in a robot 1530 or imaging system 1540, for example using the tracking system with, for example, a camera or scanner 1300. [0393], [0394] The transmission can comprise positional data, display data and/or other data. The transmission 1430 can be to or from a first computer system 1510 to or from a second computer system 1520, to or from a third computer system, e.g. part of or coupled to a robot 1530, to or from a fourth, fifth or more computer system. The transmission can be to or from a second computer system 1520 to or from a first computer system 1510, to or from a third computer system 1530, to or from a fourth, fifth or more computer system. The transmission can be to or from a third computer system 1530 to or from a first computer system 1510, to or from a second computer system 1520, to or from a fourth, fifth or more computer system, and so forth. [0399] One or more computer systems, e.g. a first 1510, second 1520, or additional computer systems, e.g. also in a robot 1530 and/or an imaging system 1540, can be configured to generate and/or transmit, optionally wirelessly, one or more event message 1450. The one or more event message can, for example, be an event message relative to a collision detection 1445 between a hand or finger 1380, a tracked physical pointer, physical tool, physical instrument, physical implant component 1390 or a combination thereof and a virtual interface, e.g. detected using the tracking system 1300. The one or more event message 1450 can optionally comprise an event message related to a robot 1530 control, interface, force, and/or position and/or orientation and/or an imaging system 1540 control, interface, force, and/or position and/or orientation. [0400] One or more computer systems, e.g. a first 1510, second 1520, third, fourth, fifth etc. or additional computer systems, e.g. 
also in a robot 1530 and/or an imaging system 1540, can comprise an optional event handler 1460 configured to handle, manage and/or process one or more optional event message 1450.)
Consider Claim 8.
The combination of Gilles and Steines teaches:
8. The method of claim 7 repeated for each of a plurality of HMD wearers in the same environment, and comprising sharing a hologram between the HMD wearers, using the registration information, such that the hologram appears in the same location with respect to the surface to each HMD wearer and appropriate for viewpoints of each of the HMDs which are different. (Steines: [0096] FIG. 11 illustrates a non-limiting example of registering a digital hologram for an initial surgical step, performing the surgical step and re-registering one or more digital holograms for subsequent surgical steps according to some embodiments of the present disclosure. EXAMPLE 1 – USE OF MULTIPLE HEAD MOUNTED DISPLAYS [0476] The HMDs 11, 12, 13, 14 or other augmented reality display systems can project digital holograms of the virtual data or virtual data into the view of the left eye using the view position and orientation of the left eye 26 and can project digital holograms of the virtual data or virtual data into the view of the right eye using the view position and orientation of the right eye 28 of each user, resulting in a shared digital holographic experience 30. Using a virtual or other interface, the surgeon wearing HMD 1 11 can execute commands 32, e.g. to display the next predetermined bone cut, e.g. from a virtual surgical plan or an imaging study or intra-operative measurements, which can trigger the HMDs 11, 12, 13, 14 to project digital holograms of the next surgical step 34 superimposed onto and aligned with the surgical site in a predetermined position and/or orientation.)
Consider Claim 9.
The combination of Gilles and Steines teaches:
9. The method of claim 1 comprising receiving, from a mixed reality service, an accuracy level and, in response to the accuracy level being below a threshold, repeating the method of claim 1 for a different real world feature. (Steines: [0207]-[0215], [0214] The display data can comprise targeting displays, e.g. target disk like, magnified displays, color labeled displays (e.g. red, yellow, green indicating position or alignment accuracy or thresholds), 3D models of the patient, image slices, 3D models or 3D or 2D graphical representations of tools or instruments (e.g. tracked physical tools or instruments) alphanumeric displays, and/or an AR or VR enabled graphical user interface (e.g. gesture recognition, using gesture recognition, virtual pointers, a virtual mouse, a virtual keyboard, virtual buttons, gaze recognition, gaze lock or a combination thereof). An AR or VR visualization module can comprise a display of a virtual user interface and/or a display of one or more virtual interactions, e.g. collisions, with a virtual user interface (e.g. with a tracked physical tool or instrument). The AR or VR visualization and/or display module and/or an interface can be configured for communication with other operating modules, e.g. integrated with the first computing system or the second computing system. [0214] An AR or VR display module 1160. One or more computer processors can be configured for generating one or more 3D stereoscopic views, for example at a rate similar to the data transmission or reception and/or at a different rate. The 3D stereoscopic view can, for example, be adjusted for user specific characteristics (e.g. interpupillary distance, distance from pupil to mirror etc.). The 3D stereoscopic view can, for example, be adjusted for the distance from the pupil or the display to the patient, e.g. a surgical site, for example, in a spine, knee, hip, organ, vessel etc.; adjustments can comprise adjustments of focal plane or point, adjustments of scale or magnification of virtual displays and display items, adjustments of convergence. Adjustments can be in real-time or near real-time. Adjustments can be at less than real-time. An AR or VR display module can comprise a display of a virtual user interface and/or a display of one or more virtual interactions, e.g. collisions, with a virtual user interface (e.g. with a tracked physical tool or instrument). [0215] One or more of the modules 1100-1160 can be integrated or combined. For example, the AR visualization module 1150 can be integrated or combined with the AR display module 1160. [0216] One or more of the modules 1100-1160 can be run by the same computer processor or the same group of computer processors. One or more of the modules 1100-1160 can be run by different computer processors.)
Consider Claims 10 and 18.
The combination of Gilles and Steines teaches:
10. The method of claim 1 comprising using the registration information in a mixed reality service and, in response to an accuracy level of the registration information being below a threshold, using user input data received by the mixed reality service to obtain another correspondence. / 18. The apparatus of claim 16 wherein the registration information comprises an orientation and a 3D translation and the apparatus is configured to send the registration information to a mixed reality service. (Steines: [0113] The augmented reality display can comprise a composite or mixed reality or augmented reality display of the video feed and virtual devices or virtual objects, e.g. a virtual end effector or a 3D representation of an x-ray beam or intended image acquisition. Any virtual devices, virtual surgical guide, virtual tool or instrument, or virtual object known in the art or described in the specification can be co-displayed with the video feed or video images. The virtual devices, virtual surgical guide, virtual tool or instrument, or virtual object known in the art or described in the specification can be displayed in conjunction with the video feed and can optionally be registered with the physical objects, devices (e.g. an imaging system or a surgical robot) or a physical patient or target anatomic structure included in the video feed or video images. Any of the registration techniques described in the specification or known in the art can be used. The terms mixed reality and augmented reality as used throughout the disclosure can be used interchangeably. In any of the illustrations, the term HMD (i.e. head mounted display) can be used interchangeably with an augmented reality display device, mixed reality display device, e.g. a tablet or smart phone. In some embodiments, the terms head mounted display, HMD, augmented reality display device, mixed reality display device can be used interchangeably. Gilles: [0082] The position and orientation tracking information is transmitted to the helmet or to a device of the virtual reality system adapted to implement the method according to the invention. [0083] Based on the knowledge of this pose, a position PH of the hologram plane in the world reference frame Rm is immediately derived, which is linked to a viewing window of the HMD device; a step E3 of deriving the hologram on the fly from the omnidirectional angular spectra pre-calculated for each object of the scene and from the pose obtained. The time of calculation of this step depends only on the resolution of the hologram and not on the complexity or the sampling of the scene, which makes it possible to maintain a constant framerate while ensuring a good visual quality. This step will be detailed hereinafter in relation with FIG. 10.)
Consider Claim 11.
The combination of Gilles and Steines teaches:
11. The method of claim 1 comprising giving a picture, a 3D scan, or a textual description of the real world feature to the HMD wearer to enable the HMD wearer to reliably identify the real world feature. (Steines: [0207]-[0215], [0214] The display data can comprise targeting displays, e.g. target disk like, magnified displays, color labeled displays (e.g. red, yellow, green indicating position or alignment accuracy or thresholds), 3D models of the patient, image slices, 3D models or 3D or 2D graphical representations of tools or instruments (e.g. tracked physical tools or instruments) alphanumeric displays, and/or an AR or VR enabled graphical user interface (e.g. gesture recognition, using gesture recognition, virtual pointers, a virtual mouse, a virtual keyboard, virtual buttons, gaze recognition, gaze lock or a combination thereof). An AR or VR visualization module can comprise a display of a virtual user interface and/or a display of one or more virtual interactions, e.g. collisions, with a virtual user interface (e.g. with a tracked physical tool or instrument). The AR or VR visualization and/or display module and/or an interface can be configured for communication with other operating modules, e.g. integrated with the first computing system or the second computing system. [0214] An AR or VR display module 1160. One or more computer processors can be configured for generating one or more 3D stereoscopic views, for example at a rate similar to the data transmission or reception and/or at a different rate. The 3D stereoscopic view can, for example, be adjusted for user specific characteristics (e.g. interpupillary distance, distance from pupil to mirror etc.). The 3D stereoscopic view can, for example, be adjusted for the distance from the pupil or the display to the patient, e.g. a surgical site, for example, in a spine, knee, hip, organ, vessel etc.; adjustments can comprise adjustments of focal plane or point, adjustments of scale or magnification of virtual displays and display items, adjustments of convergence. Adjustments can be in real-time or near real-time. Adjustments can be at less than real-time. An AR or VR display module can comprise a display of a virtual user interface and/or a display of one or more virtual interactions, e.g. collisions, with a virtual user interface (e.g. with a tracked physical tool or instrument). [0215] One or more of the modules 1100-1160 can be integrated or combined. For example, the AR visualization module 1150 can be integrated or combined with the AR display module 1160. [0216] One or more of the modules 1100-1160 can be run by the same computer processor or the same group of computer processors. One or more of the modules 1100-1160 can be run by different computer processors.)
Consider Claim 12.
The combination of Gilles and Steines teaches:
12. The method of claim 1 where the wearer of the HMD touches the real world feature and wherein computing the 3D position of the real world feature in the coordinate frame of the HMD, from the received sensor data, comprises tracking a hand of the HMD wearer. (Gilles: [0106] The three sub-steps E10, E11, E12 are repeated for each angular direction Vj, j∈{1 . . . M}. At E13, the omnidirectional angular spectrum SAi of the object Obi is then calculated by accumulating the amplitudes calculated for all the views vj with j∈{1 . . . M}, as follows: [0115] Finally, let's note th=(x0,y0,z0), the position of the center of the hologram in the world reference frame. The step of deriving the hologram H from the angular spectrum is decomposed into 2 sub-steps: Sub-step E31: It consists in constructing the angular spectrum of the hologram Ĥ=ℱ{H}, from angular spectra SAi pre-calculated for the N objects Obi of the scene. [0117] The angular spectrum of the hologram is sampled on a regular grid of resolution (Nx,Ny), with a sampling pitch of (Nxp)−1 and (Nyp)−1, in the horizontal and vertical directions, respectively. Hence, the frequencies fhx, fhy of the hologram are comprised between −½p and ½p. [0118] The angular spectrum of the hologram is given by the following formula: [equation image omitted]
Steines: [0408] In some embodiments, the second computing system can be configured to wirelessly transmit the real-time tracking information of the component of the robot, the end effector, a target object, a target anatomic structure of a patient, the at least one head mounted display, a physical tool, a physical instrument, a physical implant, a physical object, or any combination thereof. [0409] In some embodiments, a second computing system can be configured for displaying, by the at least one head mounted display, a 3D stereoscopic view. In some embodiments, the 3D stereoscopic view can be superimposed onto an anatomic structure of a patient. In some embodiments, the 3D stereoscopic view can comprise a predetermined trajectory of the end effector, a representation of a predetermined operating boundary of the end effector, a representation of a predetermined operating range of the end effector, a representation of a predetermined operating zone of the end effector, a representation of a predetermined operating volume of the end effector, or a combination thereof. In some embodiments, the 3D stereoscopic view can comprise a predetermined trajectory of the end effector, a representation of a predetermined operating boundary of the end effector, a representation of a predetermined operating range of the end effector, a representation of a predetermined operating zone of the end effector, a representation of a predetermined operating volume of the end effector or a combination thereof following the movement, activation, operation, de-activation or combination thereof of the robot component, robot motor, robot actuator, robot drive, robot controller, robot hydraulic system, robot piezoelectric system, robot switch, the end effector or any combination thereof. [0410] In some embodiments, a first computing system, a second computing system, or both can be configured to turn on or turn off the display of the virtual user interface. In some embodiments, a wireless transmission can comprise a Bluetooth signal, WiFi signal, LiFi signal, a radiofrequency signal, a microwave signal, an ultrasound signal, an infrared signal, an electromagnetic wave or any combination thereof. [0411] In some embodiments, a 3D stereoscopic view can comprise a predetermined trajectory of an end effector, a representation of a predetermined operating boundary of the end effector, a representation of a predetermined operating range of the end effector, a representation of a predetermined operating zone of the end effector, a representation of a predetermined operating volume of the end effector or a combination thereof prior to executing a command. [0414])
Consider Claim 13.
The combination of Gilles and Steines teaches:
13. The method of claim 1 where the wearer of the HMD gazes at the real world feature and wherein computing the 3D position of the real world feature in the coordinate frame of the HMD, from the sensor data, comprises using eye tracking functionality in the HMD to determine a ray from the wearer to the real world feature and either: using the ray as a 3D registration constraint; or intersecting the ray with a surface mesh; or determining another ray from a different viewpoint of the HMD and intersecting the ray and the other ray. (Gilles: [0082] The position and orientation tracking information is transmitted to the helmet or to a device of the virtual reality system adapted to implement the method according to the invention. [0083] Based on the knowledge of this pose, a position PH of the hologram plane in the world reference frame Rm is immediately derived, which is linked to a viewing window of the HMD device; a step E3 of deriving the hologram on the fly from the omnidirectional angular spectra pre-calculated for each object of the scene and from the pose obtained. The time of calculation of this step depends only on the resolution of the hologram and not on the complexity or the sampling of the scene, which makes it possible to maintain a constant framerate while ensuring a good visual quality. This step will be detailed hereinafter in relation with FIG. 10. Steines: [0096] FIG. 11 illustrates a non-limiting example of registering a digital hologram for an initial surgical step, performing the surgical step and re-registering one or more digital holograms for subsequent surgical steps according to some embodiments of the present disclosure. EXAMPLE 1 – USE OF MULTIPLE HEAD MOUNTED DISPLAYS [0476] The HMDs 11, 12, 13, 14 or other augmented reality display systems can project digital holograms of the virtual data or virtual data into the view of the left eye using the view position and orientation of the left eye 26 and can project digital holograms of the virtual data or virtual data into the view of the right eye using the view position and orientation of the right eye 28 of each user, resulting in a shared digital holographic experience 30. Using a virtual or other interface, the surgeon wearing HMD 1 11 can execute commands 32, e.g. to display the next predetermined bone cut, e.g. from a virtual surgical plan or an imaging study or intra-operative measurements, which can trigger the HMDs 11, 12, 13, 14 to project digital holograms of the next surgical step 34 superimposed onto and aligned with the surgical site in a predetermined position and/or orientation. Steines: [0346] In some embodiments, one or more HMDs or other augmented reality display systems and one or more physical tools or physical instruments, e.g. a pointer, a stylus, other tools, other instruments, can be tracked, e.g. using inside-out or outside-in tracking. The coordinates, position and/or orientation of a virtual display comprising a virtual interface displayed by one or more HMDs or other augmented reality display systems can also be tracked. One or more computer processors can be configured, using one or more collision detection modules, to detect collisions between a gaze (e.g. using gaze tracking, gaze lock), a finger (e.g. using finger/hand tracking), a hand (e.g. using hand tracking), an eye (e.g. using eye tracking), one or more tracked physical tools or physical instruments, e.g. 
a tracked pointer, a tracked stylus, other tracked physical tools, other tracked physical instruments, or a combination thereof and a virtual display comprising the virtual interface, e.g. one or more virtual objects such as virtual button, virtual field, virtual cursor, virtual pointer, virtual slider, virtual trackball, virtual node, virtual numeric display, virtual touchpad, virtual keyboard, or a combination thereof.)
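Examiner's note: for the "determining another ray from a different viewpoint of the HMD and intersecting the ray and the other ray" alternative of claim 13, the conventional computation (given for illustration only; not quoted from either reference, and the function name is hypothetical) is the midpoint of the shortest segment between the two gaze rays, since rays estimated from noisy tracking rarely intersect exactly:

import numpy as np

def triangulate_rays(o1, d1, o2, d2, eps=1e-9):
    # Midpoint of closest approach between rays p1 = o1 + t1*d1 and
    # p2 = o2 + t2*d2 (origins o, directions d), the usual estimate of
    # the 3D feature position from two sight lines.
    d1, d2 = d1 / np.linalg.norm(d1), d2 / np.linalg.norm(d2)
    b = d1 @ d2
    r = o2 - o1
    denom = 1.0 - b * b
    if denom < eps:                 # near-parallel rays: no stable intersection
        return None
    t1 = (d1 @ r - b * (d2 @ r)) / denom
    t2 = (b * (d1 @ r) - d2 @ r) / denom
    return 0.5 * ((o1 + t1 * d1) + (o2 + t2 * d2))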
Consider Claim 14.
The combination of Gilles and Steines teaches:
14. The method of claim 1 where the wearer of the HMD points an element towards the real world feature and wherein computing the 3D position of the real world feature in the coordinate frame of the HMD, from the sensor data, comprises using tracking functionality to determine a ray from the element to the real world feature and either: using the ray as a 3D registration constraint; or intersecting the ray with a surface mesh; or determining another ray from a different viewpoint of the HMD and intersecting the ray and the other ray. (Steines: [0113] The augmented reality display can comprise a composite or mixed reality or augmented reality display of the video feed and virtual devices or virtual objects, e.g. a virtual end effector or a 3D representation of an x-ray beam or intended image acquisition. Any virtual devices, virtual surgical guide, virtual tool or instrument, or virtual object known in the art or described in the specification can be co-displayed with the video feed or video images. The virtual devices, virtual surgical guide, virtual tool or instrument, or virtual object known in the art or described in the specification can be displayed in conjunction with the video feed and can optionally be registered with the physical objects, devices (e.g. an imaging system or a surgical robot) or a physical patient or target anatomic structure included in the video feed or video images. [0260] In case of multiple clients, different data inputs from the various perspectives (e.g. from a first, second, third, fourth, fifth etc. HMD) can be used by the server to increase the accuracy of the calculations (e.g. by averaging out errors). In some embodiments, coordinate information and/or tracking information, e.g. from spatial maps from two or more HMD clients, can be obtained and processed by the server. For example, spatial maps can consist of triangular meshes built from each HMD's depth sensor information. Once spatial maps have been transferred from a first, second, third, fourth, fifth or combination there of HMD to the server, the different meshes can be combined into a combined, more accurate mesh using, for example, an averaging algorithm: For example, the data from a first HMD can be used as the baseline. From each face in the baseline mesh, a ray can be cast along the surface normal of the face. Intersection points between the ray and all other meshes can be calculated. A new vertex for the combined mesh can be derived as the average of all intersection points along the ray. The new vertices from adjacent triangles in the baseline mesh can be connected to form the faces in the combined mesh. The combined mesh can then be transferred back to the individual HMD's for refinement of the registration, coordinate or tracking information and/or for refinement of the real-time or near real-time updating of the stereoscopic or non-stereoscopic HMD display, e.g. superimposed and/or aligned with an anatomic structure or anatomic landmark of a patient. Gilles: [0082] The position and orientation tracking information is transmitted to the helmet or to a device of the virtual reality system adapted to implement the method according to the invention. [0083] Based on the knowledge of this pose, a position PH of the hologram plane in the world reference frame Rm is immediately derived, which is linked to a viewing window of the HMD device; a step E3 of deriving the hologram on the fly from the omnidirectional angular spectra pre-calculated for each object of the scene and from the pose obtained. 
The time of calculation of this step depends only on the resolution of the hologram and not on the complexity or the sampling of the scene, which makes it possible to maintain a constant framerate while ensuring a good visual quality. This step will be detailed hereinafter in relation with FIG. 10.)
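As an aid to understanding the ray-intersection alternatives recited in claim 14, the following is a minimal illustrative sketch, not drawn from Gilles or Steines (all function and variable names are hypothetical), of estimating the 3D position of a real world feature from two rays determined from different HMD viewpoints, taking the midpoint of the common perpendicular between the rays as their least-squares "intersection":

import numpy as np

def triangulate_rays(o1, d1, o2, d2):
    """Estimate the 3D point nearest to two rays, each given by an origin o
    and a direction d. Returns the midpoint of the common perpendicular
    segment, a standard least-squares 'intersection' of two skew rays."""
    d1 = d1 / np.linalg.norm(d1)
    d2 = d2 / np.linalg.norm(d2)
    b = d1 @ d2                      # cosine of the angle between the rays
    w = o1 - o2
    denom = 1.0 - b * b
    if abs(denom) < 1e-9:            # rays nearly parallel: no stable estimate
        return None
    t1 = (b * (d2 @ w) - (d1 @ w)) / denom   # parameter along ray 1
    t2 = ((d2 @ w) - b * (d1 @ w)) / denom   # parameter along ray 2
    return 0.5 * ((o1 + t1 * d1) + (o2 + t2 * d2))

Under the first alternative of claim 14, a single such ray would instead be retained as a constraint on the registration rather than being resolved to a point.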
Consider Claims 15 and 19.
The combination of Gilles and Steines teaches:
15. The method of claim 1 wherein the second coordinate frame is a coordinate frame of an articulated autonomous robot moving in an environment of the HMD and wherein the real world feature is a joint of the robot, and wherein the 3D position of the real world feature in the second coordinate frame is known from a simultaneous localization and mapping (SLAM) function of the robot. / 19. The apparatus of claim 16 having a communication mechanism to receive the 3D position of the real world feature in the second coordinate frame from another entity selected from: a web service, another HMD, a robot. (Gilles: [0110] Steps E0 and E1 are repeated for each object Obi of the scene. [0111] It is supposed that, at E2, a pose of the observer U is obtained, from which a position of the plane PH of the hologram to be generated is derived, as described hereinabove. [0112] In relation with FIG. 10, the step E3 of deriving the hologram from the plurality of pre-calculated angular spectra SAi will now be described in detail. [0113] Let H be the hologram to be calculated. Its resolution is given by (Nx,Ny), and the size of its pixels is p. Typically, the holographic screens (SLMs) currently available, of the Spatial Light Modulator or SLM type, have a maximum resolution of the order of (3840,2160) and a minimum pixel size of the order of 3.74 μm. [0114] In relation with FIG. 11, let's note $\mathcal{R}_h = (O; \vec{x}_h, \vec{y}_h, \vec{z}_h)$, the local reference frame of the hologram, whose origin is located at the center of the hologram H, whose axes defined by $\vec{x}_h$ and $\vec{y}_h$ coincide with the horizontal and vertical axes of the hologram, respectively, and whose axis defined by $\vec{z}_h$ coincides with the optical axis of the hologram. Let's note
[equation image from Gilles (media_image1.png, greyscale) omitted]
Steines: [0196] When multiple data sets, e.g. different types of data such as instrument tracking data and/or HMD or other augmented reality display system tracking data and/or patient or surgical site (e.g. a spine, joint, tooth or vascular structure) tracking data and/or virtual user interface and/or interaction with virtual user interface data (e.g. with a tracked physical tool or instrument) are transmitted and/or received, they can be transmitted and/or received simultaneously or non-simultaneously. Data sets including any of the data listed in Table 2 can optionally be labelled, e.g. with a time stamp, time point, time interval (e.g. within 1 transmission or data reception, for example, for a rate of 60 Hz, within 16.66 ms or less or, for example, any other value within the time allocated for transmission and reception), a time label, a time tag or any combination thereof. [0197] In some embodiments, coordinate information, registration data, tracking data or a combination thereof of one or more HMDs or other augmented reality display systems can optionally be labeled or coded for each specific HMD or other augmented reality display system, e.g. by a computer processor integrated into, attached to, or connected to a camera or scanner (optionally part of a first or second computing unit), a computer processor integrated into, attached to, or connected to a first computing unit (e.g. in a server), and/or a computer processor integrated into, attached to, or connected to a second computing unit (e.g. in a client, for example integrated into an HMD or other augmented reality display system or connected to an HMD or other augmented reality display system).)
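For context regarding the frame registration recited in claims 15 and 19, the following hedged sketch (hypothetical code, not taken from any cited reference) shows one conventional way to recover the rigid transform between the HMD coordinate frame and a robot's SLAM coordinate frame from three or more corresponding 3D points, such as robot joints whose positions are known in both frames, using the Kabsch algorithm:

import numpy as np

def register_frames(pts_hmd, pts_robot):
    """Rigid transform (R, t) mapping HMD-frame points onto robot-frame points.

    pts_hmd, pts_robot: (N, 3) arrays of corresponding 3D positions, N >= 3.
    Solves argmin over (R, t) of sum ||R @ p_hmd + t - p_robot||^2."""
    ch, cr = pts_hmd.mean(axis=0), pts_robot.mean(axis=0)
    H = (pts_hmd - ch).T @ (pts_robot - cr)        # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    S = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])  # avoid reflection
    R = Vt.T @ S @ U.T
    t = cr - R @ ch
    return R, t

Under these assumptions, applying the returned (R, t) to any point expressed in the HMD frame yields its coordinates in the robot's SLAM frame, which is the mapping the registration information provides.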
Consider Claim 17.
The combination of Gilles and Steines teaches:
17. The apparatus of claim 16 integral with an HMD. (Gilles: [0082] The position and orientation tracking information is transmitted to the helmet or to a device of the virtual reality system adapted to implement the method according to the invention. [0083] Based on the knowledge of this pose, a position PH of the hologram plane in the world reference frame Rm is immediately derived, which is linked to a viewing window of the HMD device; a step E3 of deriving the hologram on the fly from the omnidirectional angular spectra pre-calculated for each object of the scene and from the pose obtained. The time of calculation of this step depends only on the resolution of the hologram and not on the complexity or the sampling of the scene, which makes it possible to maintain a constant framerate while ensuring a good visual quality. This step will be detailed hereinafter in relation with FIG. 10. Steines: [0096] FIG. 11 illustrates a non-limiting example of registering a digital hologram for an initial surgical step, performing the surgical step and re-registering one or more digital holograms for subsequent surgical steps according to some embodiments of the present disclosure. EXAMPLE 1 – USE OF MULTIPLE HEAD MOUNTED DISPLAYS [0476] The HMDs 11, 12, 13, 14 or other augmented reality display systems can project digital holograms of the virtual data or virtual data into the view of the left eye using the view position and orientation of the left eye 26 and can project digital holograms of the virtual data or virtual data into the view of the right eye using the view position and orientation of the right eye 28 of each user, resulting in a shared digital holographic experience 30. Using a virtual or other interface, the surgeon wearing HMD 1 11 can execute commands 32, e.g. to display the next predetermined bone cut, e.g. from a virtual surgical plan or an imaging study or intra-operative measurements, which can trigger the HMDs 11, 12, 13, 14 to project digital holograms of the next surgical step 34 superimposed onto and aligned with the surgical site in a predetermined position and/or orientation.)
Claims 2-5 are rejected under 35 U.S.C. 103 as being unpatentable over Gilles et al. (US PGPub US 2021/0263468A1), hereby referred to as “Gilles”, in view of Steines et al. (US PGPub US 2022/0287676A1), hereby referred to as “Steines”, further in view of Lazarow (US PGPub US 2019/0114802), hereby referred to as “Lazarow”.
Consider Claim 2.
The combination of Gilles and Steines teaches “The method of Claim 1.”
The combination of Gilles and Steines does not teach the limitations from Claim 2.
Lazarow teaches:
1. A method of registering a coordinate frame of a head mounted display (HMD) with a second coordinate frame, the method comprising: (Lazarow: abstract, Features of the present disclosure solve the above-identified problem by implementing remote localization techniques that enable coordination between multi-user display devices. Specifically, the remote localization techniques identify user location (or “position”) in the virtual reality (VR)/augmented reality (AR) scene using a key-frame that is subset of available information. Thus, when a first display device is uncertain regarding its position within a VR/AR scene, the first display may generate, for example, a single key-frame that is shared between one or more second display devices such that the receiving display device(s) may locate the key-frame with respect to a spatial anchor within the client's map, identify the user's location within the VR/AR scene, and transmit the location back to the first HMD device that created the key-frame. Based on the location information, the first HMD device may synchronize with the second HMD device within a shared scene or map. [0024] For example, in one implementation, the present disclosure provides a peer-to-peer (P2P) remote localization system including techniques for aligning two or more mixed-reality devices into the same coordinate system by having a device (e.g., a querying device) that is unsure of its location relative to a shared coordinate system issue a query to another device (e.g., a sharing device). The sharing device then uses its knowledge of the environment to localize the query data relative to the shared coordinate system and return that information, e.g., a transform, to the querying device. If a successful localization is found, then the resulting relative transform (e.g., a relative position and/or pose or offset between a point in the shared coordinate system known by the sharing device and the location of a point in the query data of the querying device) can be incorporated into the querying device's environment map. In other words, the querying device can adjust the position of the point in the query data based on the relative transform, thereby aligning its coordinate system with the shared coordinate system. Thus, both devices will then know their location with respect to the shared coordinate system and can render holograms in the same physical location (or do other reasoning).)
2. The method of claim 1 comprising using the registration information for any of: persisting a hologram, sharing a hologram between a plurality of HMD wearers, anchoring a hologram to a real world environment. (Lazarow: [0030] Though not shown in FIG. 1, a processing apparatus 405, memory 410 and other components may be integrated into the HMD device 105 (see FIG. 5). Alternatively, such components may be housed in a separate housing connected to the HMD 105 by wired and/or wireless communication links. For example, the components may be housed in a separate computer device (e.g., smartphone, tablet, laptop or desktop computer, etc.) which communicates with the display device 105. Accordingly, mounted to or inside the HMD 105 may be an image source, such as a micro display for projecting a virtual image onto the optical component 115. As discussed above, the optical component 115 may be a collimating lens through which the micro display projects an image. [0031] In some examples, one or more HMDs 105 may be communicatively paired (via a wired or wireless communication link) in a shared session that includes a shared coordinate system of an environment that allows multiple users to collaborate on the same shared virtual session (e.g., multi-user games and/or multiple users experiencing the same environment). In such situation, a shared virtual session may be initiated by a first user operating a first HMD 105 (also referred to as “sharing device”) that generates and shares a coordinate system of the environment (e.g., map of the hologram object(s) and physical table in the first user's real environment), including, for instance, a “spatial anchor,” to a set of one or more second HMDs 105. A spatial anchor represents point in the coordinate system of the environment that the system should keep track of over time. For instance, in one example, a spatial anchor may be a set of one or more points that identify a position and/or orientation of a real object in the real world environment. Spatial anchors of one HMD 105 may be adjusted, as needed, relative to other anchors or frames of reference of one or more other HMDs 105, in order to ensure that anchored holograms stay precisely in place in a location that is synchronized in the shared coordinate system of the environment shared by the group of HMDs 105 in the shared session. [0032] Rendering a hologram based on a known position and/or orientation of an anchor within the shared coordinate system provides the most accurate positioning for that hologram at any given time in the shared virtual session.)
It would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to modify the combination of Gilles and Steines, directed to 3D visualization and image augmentation using a digital hologram, to leverage the peer-to-peer remote localization of Lazarow, as all three references are in the overall field of image visualization in HMDs. The determination of obviousness is predicated upon the following findings: one skilled in the art would have been motivated to modify the combination of Gilles and Steines, directed to a head mounted display with augmented reality guidance, to leverage peer-to-peer remote localization so that multiple devices can synchronize within a shared scene (Lazarow: abstract), and the improvement would enhance the overall accuracy of the digital representation. Furthermore, the prior art collectively includes each claimed element (though not all in the same reference), and one of ordinary skill in the art could have combined the elements in the manner explained above using known engineering design, interface and programming techniques, without changing a fundamental operating principle of the combination of Gilles and Steines, while the teaching of Lazarow continues to perform the same function as originally taught prior to being combined, producing the repeatable and predictable result of accurate augmented reality representation in head mounted displays. It is for at least the aforementioned reasons that the examiner has reached a conclusion of obviousness with respect to the claim in question.
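To make the spatial-anchor mechanism concrete, the following is a minimal sketch, assuming 4x4 homogeneous transforms (all names hypothetical; this is not Lazarow's actual implementation), of how a querying device can align its coordinate system with a shared coordinate system once the pose of one physical anchor is known in both frames:

import numpy as np

def frame_alignment(T_anchor_shared, T_anchor_query):
    """Transform taking query-frame coordinates to shared-frame coordinates,
    given the pose of the same physical spatial anchor expressed in both the
    shared coordinate system and the querying device's own frame
    (4x4 homogeneous matrices)."""
    return T_anchor_shared @ np.linalg.inv(T_anchor_query)

def to_shared(T_query_to_shared, p_query):
    """Re-express a 3D point (e.g., a hologram position) in the shared frame."""
    return (T_query_to_shared @ np.append(p_query, 1.0))[:3]

With this alignment in hand, the registration information can be used for each purpose recited in claim 2: persisting, sharing, or anchoring a hologram.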
Consider Claim 3.
The combination of Gilles, Steines and Lazarow teaches:
3. The method of claim 2 wherein using the registration information to persist a hologram comprises anchoring a hologram to a real world point in an environment of the HMD; (Lazarow: [0030] Though not shown in FIG. 1, a processing apparatus 405, memory 410 and other components may be integrated into the HMD device 105 (see FIG. 5). Alternatively, such components may be housed in a separate housing connected to the HMD 105 by wired and/or wireless communication links. For example, the components may be housed in a separate computer device (e.g., smartphone, tablet, laptop or desktop computer, etc.) which communicates with the display device 105. Accordingly, mounted to or inside the HMD 105 may be an image source, such as a micro display for projecting a virtual image onto the optical component 115. As discussed above, the optical component 115 may be a collimating lens through which the micro display projects an image. [0031] In some examples, one or more HMDs 105 may be communicatively paired (via a wired or wireless communication link) in a shared session that includes a shared coordinate system of an environment that allows multiple users to collaborate on the same shared virtual session (e.g., multi-user games and/or multiple users experiencing the same environment). In such situation, a shared virtual session may be initiated by a first user operating a first HMD 105 (also referred to as “sharing device”) that generates and shares a coordinate system of the environment (e.g., map of the hologram object(s) and physical table in the first user's real environment), including, for instance, a “spatial anchor,” to a set of one or more second HMDs 105. A spatial anchor represents point in the coordinate system of the environment that the system should keep track of over time. For instance, in one example, a spatial anchor may be a set of one or more points that identify a position and/or orientation of a real object in the real world environment. Spatial anchors of one HMD 105 may be adjusted, as needed, relative to other anchors or frames of reference of one or more other HMDs 105, in order to ensure that anchored holograms stay precisely in place in a location that is synchronized in the shared coordinate system of the environment shared by the group of HMDs 105 in the shared session. [0032] Rendering a hologram based on a known position and/or orientation of an anchor within the shared coordinate system provides the most accurate positioning for that hologram at any given time in the shared virtual session.)
storing a 3D position of the real world point in a coordinate frame of the HMD; (Lazarow: [0032] Rendering a hologram based on a known position and/or orientation of an anchor within the shared coordinate system provides the most accurate positioning for that hologram at any given time in the shared virtual session. For example, mixed reality applications may map hologram objects into the real environment as if the hologram object appears as a real object. For example, a hologram object such as a virtual chessboard may be mapped into a real environment, such as by being placed on top of a physical table that is located in the user's room as captured by the cameras of the HMD 105. In the above example, the x-y-z coordinates and position of the physical table upon which the hologram object is mapped may be the “spatial anchor” that is fixed in the shared coordinate system of the user's real environment. The position and/or orientation of the pieces of the virtual chess board may be modified and adjusted by user's input (gestures) that may move the pieces as if they were real objects. [0033] Thus, in order to facilitate a shared virtual session, the first HMD 105 may share the shared coordinate system of the environment (virtual and/or real environment) associated with the shared virtual session with one or more second HMDs 105. The shared coordinate system may include one or more of the hologram object information as well as the spatial anchor position (x-y-z coordinates and orientation) of the physical table in the user's environment. As the first HMD 105 is responsible for mapping the virtual and real environment, the first HMD 105 may be knowledgeable of not only the coordinate system of the environment as observed in the field of view of the first HMD 105, but may also be knowledgeable of the entire environment in the shared coordinate system, which may include hologram objects and real environment objects outside the immediate field of view of the first HMD 105.)
waiting while the HMD wearer leaves the environment and later returns to the environment; (Lazarow: [0036] In turn, the first HMD 105 (e.g., the sharing device) may process the key frame(s) received from the second HMD 105 to determine information to provide to the second HMD 105 to align coordinate systems. For example, the first HMD 105 may determine the location and position of the second spatial anchor identified by the second HMD 105 within the shared coordinate system of the first user's environment. The first HMD 105, based on the greater knowledge of the environment than the second HMD 105, may determine the location of the second spatial anchor (e.g., corner of bookcase), and then determine the location of the second spatial anchor relative to the first spatial anchor (e.g., table) within the shared coordinate system of the environment associated with the shared session. By identifying the second spatial anchor position, the first HMD 105 may be able to determine the location of the second HMD 105 within the coordinate system of the environment associated with the shared session, and share the relative difference position information with respect to the first anchor, or the second anchor position information. [0037] FIGS. 2A-2C, discussed concurrently, show an example of initiating remote localization of HMDs 105 (e.g., a first HMD 105-a and a second HMD 105-b) in accordance with features of the present disclosure. Turning first to FIG. 2A, a first user 102 may use the first HMD 105-a to map environment 200 associated with a shared coordinate system. The environment 200 may be an example of a room in which a first user 102 generates VR/AR/MR images in a shared coordinate system, and shares information in the shared coordinate system with the second HMD 105 to facilitate synchronization of the two devices within the shared session such that the second HMD 105 is able to identify the position and orientation of the first spatial anchor (i.e., the table).)
and computing the hologram according to a current viewpoint of the HMD and taking into account the registration information; and projecting the hologram into pupils of the HMD wearer. (Lazarow: [0028] The HMD 105 may include optical components 115 (e.g., one or more lenses), including waveguides that may allow the HMD 105 to project images generated by a light engine included within (or external) to the HMD 105. The optical components 115 may use plate-shaped (usually planar) waveguides for transmitting angular image information to users' eyes as virtual images from image sources located out of the user's line of sight. The image information may propagate along the waveguides as a plurality of angularly related beams that are internally reflected along the waveguide. Diffractive optics are often used for injecting the image information into the waveguides through a first range of incidence angles that are internally reflected by the waveguides as well as for ejecting the image information through a corresponding range of lower incidence angles for relaying or otherwise forming an exit pupil behind the waveguides in a position that can be aligned with the users' eyes. Both the waveguides and the diffractive optics at the output end of the waveguides may be at least partially transparent so that the user can also view the real environment through the waveguides, such as when the image information is not being conveyed by the waveguides or when the image information does not fill the entire field of view. [0029]-[0030] Though not shown in FIG. 1, a processing apparatus 405, memory 410 and other components may be integrated into the HMD device 105 (see FIG. 5). Alternatively, such components may be housed in a separate housing connected to the HMD 105 by wired and/or wireless communication links. For example, the components may be housed in a separate computer device (e.g., smartphone, tablet, laptop or desktop computer, etc.) which communicates with the display device 105. Accordingly, mounted to or inside the HMD 105 may be an image source, such as a micro display for projecting a virtual image onto the optical component 115. As discussed above, the optical component 115 may be a collimating lens through which the micro display projects an image.)
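As a sketch of the persistence flow recited in claim 3 (hypothetical storage format; neither reference discloses this code), the anchored real world point can be stored in the HMD coordinate frame and, on return to the environment, re-expressed using the newly computed registration:

import json
import numpy as np

def persist_anchor(path, anchor_id, p_hmd):
    """Store a hologram's real world anchor point, expressed in the HMD
    coordinate frame, so it survives between sessions."""
    with open(path, "w") as f:
        json.dump({"anchor_id": anchor_id,
                   "position": [float(v) for v in p_hmd]}, f)

def restore_anchor(path, T_old_to_current):
    """Reload the stored point and map it into the current session's frame,
    using registration information recovered after re-entering the room."""
    with open(path) as f:
        rec = json.load(f)
    p = np.append(np.asarray(rec["position"], dtype=float), 1.0)
    return rec["anchor_id"], (T_old_to_current @ p)[:3]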
Consider Claim 4.
The combination of Gilles, Steines and Lazarow teaches:
4. The method of claim 2 wherein using the registration information to share a hologram between a plurality of HMD wearers comprises receiving, from a first HMD wearer, a request to share a hologram, the request specifying a 3D location of the hologram in a coordinate frame of the first HMD; (Lazarow: [0046] However, as illustrated in FIGS. 2B and 2C, the second HMD 105-b may be able to identify a second spatial anchor 240 (e.g., a set of one or more coordinate points/positions associated with the bookshelf 215) within the field of view 210-b of the second HMD 105-b. As such, the second HMD 105-b (query device) may initiate a remote localization request 247 (FIG. 2B) that enlists the help of the first HMD 105-a (i.e., sharing device) that originally generated the shared virtual session. To facilitate the remote localization request 247, the second HMD 105-b may generate a key frame information 245 that includes a subset of available data. For example, as opposed to transmitting all available information that the second HMD 105-b may have at its disposal, the second HMD 105-b may generate a key frame 245 with, for example, an image frame of the current field of view 210-b of the second HMD 105-b that includes identification of the second spatial anchor 240. That is, while the second HMD 105-b may be uncertain of its own location or position with respect to the first spatial anchor 230 (e.g., table 220), the second HMD 105-b may be aware of its position with respect to the second spatial anchor 240 (e.g., the bookshelf 215 in the living room). Thus, the second HMD 105-b may transmit remote localization request 247 to the first HMD 105-a that includes the key frame 245, including the image frame and the second spatial anchor 240 (e.g., coordinates of the bookshelf 215). [0047] Upon receiving the remote localization request 247, the first HMD 105-a (e.g., sharing device) may locate the included key frame(s) 245 within the shared coordinate system, which in this case is its current spatial map of its environment 200 (e.g., by accessing information stored in the database of the first HMD device 105).)
setting the second coordinate frame to be a coordinate frame of a second HMD such that the registration information maps between the coordinate frames of the first HMD and the second HMD; triggering the second HMD to display the hologram transformed using the registration information. (Lazarow: [0047]-[0048] Turning next to FIG. 3, method 300 for synchronizing a first display device with a second display device within a shared coordinate system is described. The method 300 may be performed by a display device 105 (e.g., the second HMD 105-b) as described with reference to FIGS. 1 and 2A-C. It should be appreciated that the features of the method 300 may be incorporated not only in the HMDs, as described, but also in other display devices 105 such as mobile phones, tablets, or laptops. Although the method 300 is described below with respect to the elements of the display device, other components may be used to implement one or more of the steps described herein. [0049] At block 305, the method 300 may include establishing, between the first display device 105-a and the second display device 105-b, a shared coordinate system of an environment, wherein the shared coordinate system includes a first spatial anchor that is fixed in three dimensional space of the environment at a first position coordinates. In some examples, establishing a shared session may include receiving a mapping of at least one hologram object in a three dimensional space relative to a first spatial anchor that is fixed in the three dimensional space by position and orientation coordinates. The mapping may be performed by the first display device (e.g., first HMD 105-a) and received by the second display device (e.g., second HMD 105-b) for use in a collaborative multi-user shared virtual session. Aspects of block 305 may be performed by the communications component 515 and shared virtual session component 540 described with reference to FIG. 5.)
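For the sharing scenario of claim 4, a one-line sketch (hypothetical names, assuming 4x4 homogeneous poses) of how the registration information maps the hologram's pose from the first HMD's coordinate frame into the second HMD's frame before display:

import numpy as np

def hologram_pose_for_second_hmd(T_holo_in_hmd1, T_hmd1_to_hmd2):
    """Pose at which the second HMD should render the shared hologram, given
    its pose in the first HMD's frame and the frame-to-frame registration."""
    return T_hmd1_to_hmd2 @ T_holo_in_hmd1

For example, with T_hmd1_to_hmd2 obtained as in the anchor-alignment sketch following claim 2, both wearers see the hologram at the same physical location.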
Consider Claim 5.
The combination of Gilles, Steines and Lazarow teaches:
5. The method of claim 2 wherein using the registration information to anchor a hologram to a real world environment comprises setting the second coordinate frame to be a coordinate frame of a 3D model used to form the hologram. (Lazarow: [0046] However, as illustrated in FIGS. 2B and 2C, the second HMD 105-b may be able to identify a second spatial anchor 240 (e.g., a set of one or more coordinate points/positions associated with the bookshelf 215) within the field of view 210-b of the second HMD 105-b. As such, the second HMD 105-b (query device) may initiate a remote localization request 247 (FIG. 2B) that enlists the help of the first HMD 105-a (i.e., sharing device) that originally generated the shared virtual session. To facilitate the remote localization request 247, the second HMD 105-b may generate a key frame information 245 that includes a subset of available data. For example, as opposed to transmitting all available information that the second HMD 105-b may have at its disposal, the second HMD 105-b may generate a key frame 245 with, for example, an image frame of the current field of view 210-b of the second HMD 105-b that includes identification of the second spatial anchor 240. That is, while the second HMD 105-b may be uncertain of its own location or position with respect to the first spatial anchor 230 (e.g., table 220), the second HMD 105-b may be aware of its position with respect to the second spatial anchor 240 (e.g., the bookshelf 215 in the living room). Thus, the second HMD 105-b may transmit remote localization request 247 to the first HMD 105-a that includes the key frame 245, including the image frame and the second spatial anchor 240 (e.g., coordinates of the bookshelf 215). [0047] Upon receiving the remote localization request 247, the first HMD 105-a (e.g., sharing device) may locate the included key frame(s) 245 within the shared coordinate system, which in this case is its current spatial map of its environment 200 (e.g., by accessing information stored in the database of the first HMD device 105). If the first HMD 105-a identifies the key frame in its database, the first HMD 105-a may calculate transform information 250 between the first spatial anchor 230 and the second spatial anchor 240, and transmit the transform information 250 from the first HMD 105-a to the second HMD 105-b such that the second HMD 105-b may synchronize to the shared coordinate system used by the two devices by identifying the location of the second HMD 105-b within the environment 200 relative to the first spatial anchor 230. For example, the transform information 250 may be relative difference information, e.g., any information that identifies the relative location and/or relative orientation of the second spatial anchor 240 with respect to the first spatial anchor 230. Thus, the second HMD 105-b can identify the exact location of the first spatial anchor 230 (by applying the relative difference to its known location of the second spatial anchor 240), and thereby align its coordinate system with the shared coordinate system based on the known location of the first spatial anchor 230 in the shared coordinate system. Thus, the two HMDs will be synched within the shared coordinate system.)
Conclusion
Any inquiry concerning this communication or earlier communications from the examiner should be directed to TAHMINA ANSARI whose telephone number is 571-270-3379. The examiner can normally be reached on IFP Flex, Monday through Friday, 9:00 am to 5:00 pm.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, O’NEAL MISTRY, can be reached at 313-446-4912. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300 for both regular and After Final communications. TC 2600’s customer service number is 571-272-2600.
Any inquiry of a general nature or relating to the status of this application or proceeding should be directed to the receptionist whose telephone number is 571-272-2600.
/Tahmina Ansari/
December 23, 2025
/TAHMINA N ANSARI/Primary Examiner, Art Unit 2674