DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Election/Restrictions
Applicant’s election without traverse in the reply filed on 5/25/2018 is acknowledged. Claims 26, 56, 57, and 82-99 are currently pending in the present application.
Double Patenting
The nonstatutory double patenting rejection is based on a judicially created doctrine grounded in public policy (a policy reflected in the statute) so as to prevent the unjustified or improper timewise extension of the “right to exclude” granted by a patent and to prevent possible harassment by multiple assignees. A nonstatutory double patenting rejection is appropriate where the claims at issue are not identical, but at least one examined application claim is not patentably distinct from the reference claim(s) because the examined application claim is either anticipated by, or would have been obvious over, the reference claim(s). See, e.g., In re Berg, 140 F.3d 1428, 46 USPQ2d 1226 (Fed. Cir. 1998); In re Goodman, 11 F.3d 1046, 29 USPQ2d 2010 (Fed. Cir. 1993); In re Longi, 759 F.2d 887, 225 USPQ 645 (Fed. Cir. 1985); In re Van Ornum, 686 F.2d 937, 214 USPQ 761 (CCPA 1982); In re Vogel, 422 F.2d 438, 164 USPQ 619 (CCPA 1970); and In re Thorington, 418 F.2d 528, 163 USPQ 644 (CCPA 1969).
A timely filed terminal disclaimer in compliance with 37 CFR 1.321(c) or 1.321(d) may be used to overcome an actual or provisional rejection based on a nonstatutory double patenting ground provided the reference application or patent either is shown to be commonly owned with this application, or claims an invention made as a result of activities undertaken within the scope of a joint research agreement. See MPEP § 717.02 for applications subject to examination under the first inventor to file provisions of the AIA as explained in MPEP § 2159. See MPEP §§ 706.02(l)(1) - 706.02(l)(3) for applications not subject to examination under the first inventor to file provisions of the AIA. A terminal disclaimer must be signed in compliance with 37 CFR 1.321(b).
The USPTO Internet website contains terminal disclaimer forms which may be used. Please visit www.uspto.gov/forms/. The filing date of the application in which the form is filed determines what form (e.g., PTO/SB/25, PTO/SB/26, PTO/AIA/25, or PTO/AIA/26) should be used. A web-based eTerminal Disclaimer may be filled out completely online using web-screens. An eTerminal Disclaimer that meets all requirements is auto-processed and approved immediately upon submission. For more information about eTerminal Disclaimers, refer to http://www.uspto.gov/patents/process/file/efs/guidance/eTD-info-I.jsp.
Claim 57 is rejected on the ground of nonstatutory double patenting as being unpatentable over claim 1 of patent US 11786206. Although the claims at issue are not identical, they are not patentably distinct from each other because the pending claim is an obvious variation of the patented claim, or is entirely covered by the patented claim.
For example, claim 57 of the instant application recites a system of preparing an imaging data acquisition associated with a patient comprising: at least one computer processor; an augmented reality display device; and an imaging system. These limitations are all disclosed by claim 1 of patent US 11786206. Therefore, claim 57 of the instant application is covered by claim 1 of patent US 11786206 and is not patentably distinct from the patented claim.
The following table illustrates a comparative mapping between the limitations of claim 57 of the instant application and claim 1 of patent US 11786206.
| Claim 57 of the Instant Application 18465388 | Claim 1 of the Patent 11786206 |
| --- | --- |
| A system of preparing an imaging data acquisition associated with a patient comprising: at least one computer processor; an augmented reality display device; and an imaging system, wherein the at least one computer processor is configured to obtain real-time tracking information of one or more components of the imaging system, | A method of preparing an image acquisition by an imaging system in a patient comprising: a. tracking one or more components of the imaging system in real time, wherein the imaging system uses ionizing radiation; b. obtaining, by at least one computer processor, information about a geometry of the one or more components of the imaging system, information about a geometry of the image acquisition, information about one or more image acquisition parameters, or a combination thereof; |
| wherein the at least one computer processor is configured to generate a 3D representation of a surface, a volume or combination thereof, wherein the 3D representation of the surface, volume or combination thereof is at least in part derived from information about a geometry of the one or more components of the imaging system, information about a geometry of the image acquisition, information about one or more image acquisition parameters, or a combination thereof, | c. generating, by the at least one computer processor, a 3D representation of a surface, a volume or combination thereof, wherein the 3D representation of the surface, the volume or combination thereof is at least in part derived from the information about the geometry of the one or more components of the imaging system, information about the geometry of the image acquisition, information about the one or more image acquisition parameters, or combination thereof; |
| wherein the at least one computer processor is configured to generate an augmented view, the augmented view comprising the 3D representation of the surface, volume or combination thereof, | d. generating, by the at least one computer processor, an augmented view, the augmented view comprising the 3D representation of the surface, volume or combination thereof; |
| wherein the at least one computer processor is configured to display, by the augmented reality display device, the augmented view at a defined position and orientation relative to the one or more components of the imaging system, and | e. displaying, by an augmented reality display device, the augmented view onto the patient at a position and orientation relative to the one or more components of the imaging system; |
| wherein the position and orientation of the augmented view is updated based on the real time tracking information of the one or more components of the imaging system. | f. updating in real time the position and orientation of the augmented view based on real time tracking information of the one or more components of the imaging system so that the 3D representation is maintained in relationship to the one or more components of the imaging system as the imaging system moves; and |
| | g. acquiring the image of the patient, wherein steps a. through f. are before the step of acquiring the image of the patient. |
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 26, 82, 83, 86, 87, 91-93, and 96-98 are rejected under 35 U.S.C. 103 as being unpatentable over Nash et al. (US 20190038362) in view of Palushi et al. (US 20210121238).
Regarding claim 26, Nash discloses A system, comprising: at least one head mounted display, at least one camera or scanning device, wherein the at least one camera or scanning device is configured to track information of the at least one head mounted display, of at least one anatomic structure of a patient, and of at least one physical surgical tool or physical surgical instrument (Nash, “[0022] By utilizing multiple cameras, or a camera mesh, surgical navigation or robotics systems may properly track a surgical instrument, body part, or surgical landmark and eliminate problems that arise from a single camera described above. [0024] By utilizing the camera mesh and the computer image analysis system, objects involved in the surgical procedure may be tracked, such as instruments (e.g., scalpel, implant insertion tools), a body part, or an implant. [0025] FIG. 1 illustrates a surgical field camera system 100 in accordance with some embodiments. In FIG. 1, two members of the surgical team are shown, including a first surgeon 105 and a second surgeon 125. A first AR headset 110 is worn by the first surgeon 105 and a second AR headset 130 is worn by the second surgeon 125. First camera device 150 and second camera device 155 may be moveable camera devices. [0027] In an examples utilizing images from an AR headset camera, the AR headsets can be tracked within the surgical field through other cameras in the tracking system (such as first camera device 150 and second camera device 155) or through sensors internal to the AR headsets”),
a first computing system comprising one or more computer processors, wherein the first computing system is configured to obtain the tracking information of the at least one head mounted display, the at least one anatomic structure of a patient, and the at least one physical surgical tool or physical surgical instrument (Nash, “[0027] precise tracking of position and orientation of the AR headsets is needed to translate position and orientation information within images capture by the AR headsets into useful tracking data within the virtual 3D surgical field. [0052] The system may use information about the location and direction of each camera. The tracking data for an identified object may be generated from information extracted from at least two of the synchronized images. [0053] The tracking data may then be used to determine a position and an orientation of the tracked object within a virtual three-dimensional coordinate system. [0070] In the surgical field camera system, the computer image analysis system may be configured to generate tracking data using synchronized image captures from at least two cameras of the at least two camera devices”),
wherein the first computing system is configured for wireless transmission of the tracking information of the at least one head mounted display, the at least one anatomic structure of the patient, and the at least one physical surgical tool or physical surgical instrument (Nash, fig.10, “[0032] As an example, the computer image analysis system 160 may be configured to send data to each AR headset about the location of the tracked object 165. [0057] The surgical field camera system 200 with a communicatively connected robotic device may transmit position and the orientation of a tracked object to the robotic device. [0066] The system may continue to track position and orientation of the tracked object such that as the tracked object moves, the system transmits information to the surgeon's AR headset for the generated image of the tracked object to follow the movement of the tracked object. [0103] The camera devices may also include a wireless network interface, such as Bluetooth, and each camera device may be identified through near-field communication (NFC) detection of a radio frequency identifier (RFID)”),
a second computing system comprising one or more computer processors, wherein the second computing system is configured for wireless reception of the tracking information of the at least one head mounted display, the at least one anatomic structure of the patient, and the at least one physical surgical tool or physical surgical instrument (Nash, “[0053] Should the line-of-sight for camera device A be disrupted, the system may keep tracking the object with camera devices B and C, based on, in part, the previous tracking data received from camera device A which may help indicate the location the object may be at. [0076] The technique 900 may include using a surgical robot to receive a position and orientation for the tracked object and actuating a surgical tool with the surgical robot in response to the position and orientation of the tracked object. [0081] Example communication networks may include a local area network (LAN), a wide area network (WAN), a packet data network (e.g., the Internet), mobile telephone networks (e.g., cellular networks), Plain Old Telephone (POTS) networks, and wireless data networks (e.g., Institute of Electrical and Electronics Engineers (IEEE) 802.11 family of standards known as Wi-Fi®, IEEE 802.16 family of standards known as WiMax®), IEEE 802.15.4 family of standards, peer-to-peer (P2P) networks, among others. [0112] In Example 8, the subject matter of Examples 1-7 includes, a controller configured to: receive the position and the orientation of the tracked object;”),
wherein the second computing system is configured to generate a 3D stereoscopic view, wherein the stereoscopic view comprises a 3D representation of the at least one tracked physical surgical tool or physical surgical instrument, and wherein the at least one head mounted display is configured to display the 3D stereoscopic view (Nash, “[0035] By utilizing multiple sources for the tracking of an object, the surgical field camera system may construct a virtual three-dimensional representation of the object based on a set of multiple viewpoints provided by the multiple sources. [0063] an augmented view 300 includes the AR viewpoint, for example, as seen by a surgeon wearing an AR headset. In the example augmented view 300, a scalpel 320 may be the tracked object. The scalpel 320 is highlighted with brackets 325 and arrows 305 to indicate the location. The augmented view 300 may include additional information 310 for the surgeon such as product name and condition. The system may be tracking other objects, such as instrument 315 such that when the surgeon requires the instrument, the system may locate it. [0117] In Example 13, the subject matter of Examples 1-12 includes, a wearable headset configured to display data corresponding to the position and the orientation of the tracked object, overlaid with at least one of the tracked object, an instrument attached to the tracked object, or an anatomical element attached to the tracked object”).
On the other hand, Nash fails to explicitly disclose but Palushi discloses real-time tracking information (Palushi, “[0048] Such associated information may include, for example, CT images, configured surgical paths, real time surgical instrument tracking, real time patient tracking, configured points of interest indicating areas that should be investigated or avoided, and other similar information that may be correlated to a coordinate system for IGS navigation, which may be collectively referred to herein as correlated datasets”).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have combined Nash and Palushi, to include all limitations of claim 26. That is, applying the real-time tracking of Palushi to track the information of Nash. The motivation/suggestion would have been to improve the accuracy and safety with which a surgical instrument is navigated to a particular location within a patient's body (Palushi, [0002]).
Regarding claim 82, Nash in view of Palushi discloses The system of claim 26.
On the other hand, Nash fails to explicitly disclose but Palushi discloses wherein the one or more computer processors of the second computing system generate the 3D stereoscopic view for a view angle of the head mounted display relative to the at least one anatomic structure of the patient using the real-time tracking information of the at least one head mounted display (Palushi, “Abstract, The orientation and distance of the head mounted display relative to the patient anatomy may be determined using magnetic position tracking, image analysis of images captured by a camera of the HMD, or both. Correlated datasets may then be transformed based on the relative distance and orientation and may be displayed via the transparent screen to overlay directly viewed patient anatomy. [0043] The display of the HHD (101) may commonly be a LED or LCD display, and so may not be capable of overlaying rendered markings onto a transparent surface through which objects are viewed directly, such as the display (104) might”). The same motivation of claim 26 applies here.
Regarding claim 83, Nash in view of Palushi discloses The system of claim 26, wherein the real-time tracking information has been addressed above.
Nash further discloses wherein the tracking information comprises tracking information of multiple head mounted displays (Nash, “[0025] FIG. 1 illustrates a surgical field camera system 100 in accordance with some embodiments. In FIG. 1, two members of the surgical team are shown, including a first surgeon 105 and a second surgeon 125. A first AR headset 110 is worn by the first surgeon 105 and a second AR headset 130 is worn by the second surgeon 125”).
Regarding claim 86, Nash in view of Palushi discloses The system of claim 26, wherein the real-time tracking information has been addressed above.
Nash further discloses wherein the tracking information comprises tracking information of two or more head mounted displays (Nash, “[0027] In an examples utilizing images from an AR headset camera, the AR headsets can be tracked within the surgical field through other cameras in the tracking system (such as first camera device 150 and second camera device 155) or through sensors internal to the AR headsets”).
Regarding claim 87, Nash in view of Palushi discloses The system of claim 86.
Nash further discloses wherein the two or more head mounted displays are located in different locations (Nash, fig.2, which shows the two headsets located in different locations).
Regarding claim 91, Nash in view of Palushi discloses The system of claim 26.
Nash further discloses wherein the second computing system is integrated with the at least one head mounted display, or wherein the second computing system is separate from the at least one head mounted display and is connected to a display unit of the at least one head mounted display using at least one cable (Nash, fig.2, “[0078] The machine 1000 may include an output controller 1028, such as a serial (e.g., Universal Serial Bus (USB), parallel, or other wired or wireless (e.g., infrared (IR), near field communication (NFC), etc.) connection to communicate or control one or more peripheral devices (e.g., a printer, card reader, etc.)”).
Regarding claim 92, Nash in view of Palushi discloses The system of claim 26.
Nash further discloses wherein the wireless transmission, the wireless reception, or both comprise a WiFi signal, a LiFi signal, a Bluetooth signal or a combination thereof (Nash, “[0060] In an example, the location and orientation of the AR camera may be determined using one or more sensors located on the AR headset. For example, an accelerometer, gyroscope, magnetometer, GPS, local positioning system sensor (e.g., using NFC, RFID, beacons, Wi-Fi, or Bluetooth within a surgical field), or the like may be used. [0103] The camera devices may also include a wireless network interface, such as Bluetooth, and each camera device may be identified through near-field communication (NFC) detection of a radio frequency identifier (RFID)”).
Regarding claim 93, Nash in view of Palushi discloses The system of claim 26.
Nash further discloses wherein the camera or scanning device is separate from the at least one head mounted display or wherein the camera or scanning device is integrated or attached to the at least one head mounted display (Nash, fig.2, “[0046] This may be done in the AR system virtually by determining that the hand has moved into a position coincident or adjacent to the virtual object (e.g., using one or more cameras, which may be mounted on an AR device or separate, and which may be static or may be controlled to move), and causing the virtual object to move in response”).
Regarding claim 96, Nash in view of Palushi discloses The system of claim 26.
On the other hand, Nash fails to explicitly disclose but Palushi discloses wherein the real-time tracking information comprises one or more coordinates (Palushi, “[0048] To address this, the visualization system (60) provides a framework for relating the coordinate system and associated information to the physical world perceived by the wearer of the HMD (100). Such associated information may include, for example, CT images, configured surgical paths, real time surgical instrument tracking, real time patient tracking, configured points of interest indicating areas that should be investigated or avoided, and other similar information that may be correlated to a coordinate system for IGS navigation, which may be collectively referred to herein as correlated datasets. [0051] The received (block 304) correlated datasets may also include data that is captured in real-time during the procedure and then associated with the coordinate system, such as position tracking data indicating the location of the guidewire (40) and other tracked surgical instruments, and position tracking data indicating the location of the HMD (100)”). The same motivation of claim 26 applies here.
Regarding claim 97, Nash in view of Palushi discloses The system of claim 96.
On the other hand, Nash fails to explicitly disclose but Palushi discloses wherein the one or more coordinates comprise coordinates of the at least one anatomic structure of the patient, coordinates of the at least one physical surgical tool or physical surgical instrument, or coordinates of the at least one head mounted display (Palushi, “[0051] The received (block 304) correlated datasets may also include data that is captured in real-time during the procedure and then associated with the coordinate system, such as position tracking data indicating the location of the guidewire (40) and other tracked surgical instruments, and position tracking data indicating the location of the HMD (100)”). The same motivation of claim 26 applies here.
Regarding claim 98, Nash in view of Palushi discloses The system of claim 26.
Nash further discloses wherein the at least one head mounted display comprises at least one optical see-through head mounted display (Nash, “[0019] AR devices typically include two display lenses or screens, including one for each eye of a user. Light is permitted to pass through the two display lenses such that aspects of the real environment are visible while also projecting light to make virtual elements visible to the user of the AR device. [0025] FIG. 1 illustrates a surgical field camera system 100 in accordance with some embodiments. In FIG. 1, two members of the surgical team are shown, including a first surgeon 105 and a second surgeon 125. A first AR headset 110 is worn by the first surgeon 105 and a second AR headset 130 is worn by the second surgeon 125”).
Claims 56, 84, 88, and 89 are rejected under 35 U.S.C. 103 as being unpatentable over Nash et al. (US 20190038362) in view of Palushi et al. (US 20210121238), and further in view of CHEN (US 20190094989).
Regarding claim 56, Nash discloses A system, comprising: two or more head mounted displays, at least one camera or scanning device, wherein the at least one camera or scanning device is configured to track information of the two or more head mounted displays, of at least one anatomic structure of a patient, and of at least one physical surgical tool or physical surgical instrument (Nash, fig.2, “[0022] By utilizing multiple cameras, or a camera mesh, surgical navigation or robotics systems may properly track a surgical instrument, body part, or surgical landmark and eliminate problems that arise from a single camera described above. [0024] By utilizing the camera mesh and the computer image analysis system, objects involved in the surgical procedure may be tracked, such as instruments (e.g., scalpel, implant insertion tools), a body part, or an implant. [0025] FIG. 1 illustrates a surgical field camera system 100 in accordance with some embodiments. In FIG. 1, two members of the surgical team are shown, including a first surgeon 105 and a second surgeon 125. A first AR headset 110 is worn by the first surgeon 105 and a second AR headset 130 is worn by the second surgeon 125. First camera device 150 and second camera device 155 may be moveable camera devices. [0027] In an examples utilizing images from an AR headset camera, the AR headsets can be tracked within the surgical field through other cameras in the tracking system (such as first camera device 150 and second camera device 155) or through sensors internal to the AR headsets”),
a first computing system comprising one or more computer processors, wherein the first computing system is configured to obtain the tracking information of the at least one anatomic structure of a patient, of the at least one physical surgical tool or physical surgical instrument, and of the two or more head mounted displays (Nash, fig.2, “[0027] precise tracking of position and orientation of the AR headsets is needed to translate position and orientation information within images capture by the AR headsets into useful tracking data within the virtual 3D surgical field. [0052] The system may use information about the location and direction of each camera. The tracking data for an identified object may be generated from information extracted from at least two of the synchronized images. [0053] The tracking data may then be used to determine a position and an orientation of the tracked object within a virtual three-dimensional coordinate system. [0070] In the surgical field camera system, the computer image analysis system may be configured to generate tracking data using synchronized image captures from at least two cameras of the at least two camera devices”),
wherein the first computing system is configured for wireless transmission of the tracking information of the at least one anatomic structure of the patient, the tracking information of the at least one physical surgical tool or physical surgical instrument, and the tracking information of the two or more head mounted displays (Nash, fig.2, fig.10, “[0032] As an example, the computer image analysis system 160 may be configured to send data to each AR headset about the location of the tracked object 165. [0057] The surgical field camera system 200 with a communicatively connected robotic device may transmit position and the orientation of a tracked object to the robotic device. [0066] The system may continue to track position and orientation of the tracked object such that as the tracked object moves, the system transmits information to the surgeon's AR headset for the generated image of the tracked object to follow the movement of the tracked object. [0103] The camera devices may also include a wireless network interface, such as Bluetooth, and each camera device may be identified through near-field communication (NFC) detection of a radio frequency identifier (RFID)”),
a second computing system, wherein the second computing system is configured for wireless reception of the tracking information of the at least one anatomic structure of the patient, the tracking information of the at least one physical surgical tool or physical surgical instrument, and the tracking information of the first of the two or more head mounted displays (Nash, fig.2, fig.10, “[0032] As an example, the computer image analysis system 160 may be configured to send data to each AR headset about the location of the tracked object 165. [0057] The surgical field camera system 200 with a communicatively connected robotic device may transmit position and the orientation of a tracked object to the robotic device. [0066] The system may continue to track position and orientation of the tracked object such that as the tracked object moves, the system transmits information to the surgeon's AR headset for the generated image of the tracked object to follow the movement of the tracked object. [0103] The camera devices may also include a wireless network interface, such as Bluetooth, and each camera device may be identified through near-field communication (NFC) detection of a radio frequency identifier (RFID)”),
wherein the second computing system is configured to generate a first 3D stereoscopic display specific for a first viewing perspective of the first head mounted display using the tracking information of the first head mounted display, wherein the first head mounted display is configured to display the 3D stereoscopic display (Nash, “[0035] By utilizing multiple sources for the tracking of an object, the surgical field camera system may construct a virtual three-dimensional representation of the object based on a set of multiple viewpoints provided by the multiple sources. [0063] an augmented view 300 includes the AR viewpoint, for example, as seen by a surgeon wearing an AR headset. In the example augmented view 300, a scalpel 320 may be the tracked object. The scalpel 320 is highlighted with brackets 325 and arrows 305 to indicate the location. The augmented view 300 may include additional information 310 for the surgeon such as product name and condition. The system may be tracking other objects, such as instrument 315 such that when the surgeon requires the instrument, the system may locate it. [0117] In Example 13, the subject matter of Examples 1-12 includes, a wearable headset configured to display data corresponding to the position and the orientation of the tracked object, overlaid with at least one of the tracked object, an instrument attached to the tracked object, or an anatomical element attached to the tracked object”).
a third computing system, wherein the third computing system is configured for wireless reception of the tracking information of the at least one anatomic structure of the patient, the tracking information of the at least one physical surgical tool or physical surgical instrument, and the tracking information of the second of the two or more head mounted displays (Nash, fig.10, “[0024] By utilizing the camera mesh and the computer image analysis system, objects involved in the surgical procedure may be tracked, such as instruments (e.g., scalpel, implant insertion tools), a body part, or an implant. [0032] As an example, the computer image analysis system 160 may be configured to send data to each AR headset about the location of the tracked object 165. [0057] The surgical field camera system 200 with a communicatively connected robotic device may transmit position and the orientation of a tracked object to the robotic device. [0066] The system may continue to track position and orientation of the tracked object such that as the tracked object moves, the system transmits information to the surgeon's AR headset for the generated image of the tracked object to follow the movement of the tracked object. [0103] The camera devices may also include a wireless network interface, such as Bluetooth, and each camera device may be identified through near-field communication (NFC) detection of a radio frequency identifier (RFID)”),
wherein the third computing system is configured to generate a second 3D stereoscopic display specific for a second viewing perspective of the second head mounted display using the tracking information of the second head mounted display (Nash, “[0033] The computer image analysis system 160 may utilize the captured images from second AR camera 140 to determine the position of tracked object 165 in reference to the direction of second AR headset 130 and transmit information to second AR headset 130 such that the second AR headset display 135 displays an indication of the location of the tracked object 165 for second surgeon 125.”), and
wherein the first and second stereoscopic displays comprise a 3D representation of the at least one physical surgical tool or physical surgical instrument (Nash, “[0033] The computer image analysis system 160 may utilize the captured images from first AR camera 120 to determine the visible aspects of tracked object 165 and transmit information to first AR headset 110 such that the first AR headset display 115 highlights the visible portions of the tracked object 165 for first surgeon 105. The computer image analysis system 160 may utilize the captured images from second AR camera 140 to determine the position of tracked object 165 in reference to the direction of second AR headset 130 and transmit information to second AR headset 130 such that the second AR headset display 135 displays an indication of the location of the tracked object 165 for second surgeon 125. [0035] By utilizing multiple sources for the tracking of an object, the surgical field camera system may construct a virtual three-dimensional representation of the object based on a set of multiple viewpoints provided by the multiple sources. [0063] an augmented view 300 includes the AR viewpoint, for example, as seen by a surgeon wearing an AR headset. In the example augmented view 300, a scalpel 320 may be the tracked object. [0117] In Example 13, the subject matter of Examples 1-12 includes, a wearable headset configured to display data corresponding to the position and the orientation of the tracked object, overlaid with at least one of the tracked object, an instrument attached to the tracked object, or an anatomical element attached to the tracked object”).
On the other hand, Nash fails to explicitly disclose but Palushi discloses real-time tracking information (Palushi, “[0048] Such associated information may include, for example, CT images, configured surgical paths, real time surgical instrument tracking, real time patient tracking, configured points of interest indicating areas that should be investigated or avoided, and other similar information that may be correlated to a coordinate system for IGS navigation, which may be collectively referred to herein as correlated datasets”).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have combined Nash and Palushi. That is, applying the real-time tracking of Palushi to track the information of Nash. The motivation/suggestion would have been to improve the accuracy and safety with which a surgical instrument is navigated to a particular location within a patient's body (Palushi, [0002]).
On the other hand, Nash in view of Palushi fails to explicitly disclose but CHEN discloses wherein the tracking information of the two or more head mounted displays is labeled for each of the two or more head mounted displays (CHEN, claim 9, “wherein the first spatial coordinates transmitted from the tracking device to the first attachable device, the first head mounted display device, the second attachable device or the second head mounted display device is labelled with the first identification, the second spatial coordinates transmitted from the tracking device to the first attachable device, the first head mounted display device, the second attachable device or the second head mounted display device is labelled with the second identification”).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have combined CHEN into the combination of Nash and Palushi, to include all limitations of claim 56. That is, adding the identification label of CHEN to the real-time tracking information of the headsets of Nash and Palushi. The motivation/suggestion would have been to allocate a position of a head mounted display device (CHEN, [0001]).
Regarding claim 84, Nash in view of Palushi discloses The system of claim 83, wherein the real-time tracking information has been addressed above.
On the other hand, Nash in view of Palushi fails to explicitly disclose but CHEN discloses wherein the tracking information comprises a head mounted display specific label or tag for each head mounted display, or wherein the real-time tracking information is labeled for each tracked head mounted display (CHEN, claim 9, “wherein the first spatial coordinates transmitted from the tracking device to the first attachable device, the first head mounted display device, the second attachable device or the second head mounted display device is labelled with the first identification, the second spatial coordinates transmitted from the tracking device to the first attachable device, the first head mounted display device, the second attachable device or the second head mounted display device is labelled with the second identification”).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have combined CHEN into the combination of Nash and Palushi, to include all limitations of claim 84. That is, adding the identification label of CHEN to the real-time tracking information of Nash and Palushi. The motivation/suggestion would have been to allocate a position of a head mounted display device (CHEN, [0001]).
Regarding claim 88, Nash in view of Palushi discloses The system of claim 86, wherein the real-time tracking information has been addressed above.
On the other hand, Nash in view of Palushi fails to explicitly disclose but CHEN discloses wherein the tracking information comprises a head mounted display label for each head mounted display, wherein each head mounted display has a different label (CHEN, claim 9, “wherein the first spatial coordinates transmitted from the tracking device to the first attachable device, the first head mounted display device, the second attachable device or the second head mounted display device is labelled with the first identification, the second spatial coordinates transmitted from the tracking device to the first attachable device, the first head mounted display device, the second attachable device or the second head mounted display device is labelled with the second identification”). The same motivation of claim 84 applies here.
Regarding claim 89, Nash in view of Palushi discloses The system of claim 86, wherein the real-time tracking information has been addressed above.
On the other hand, Nash in view of Palushi fails to explicitly disclose but CHEN discloses wherein the tracking information is labeled for each tracked head mounted display (CHEN, claim 9, “wherein the first spatial coordinates transmitted from the tracking device to the first attachable device, the first head mounted display device, the second attachable device or the second head mounted display device is labelled with the first identification, the second spatial coordinates transmitted from the tracking device to the first attachable device, the first head mounted display device, the second attachable device or the second head mounted display device is labelled with the second identification”). The same motivation of claim 84 applies here.
Claim 57 is rejected under 35 U.S.C. 103 as being unpatentable over Schweizer et al. (US 20210038181).
Regarding claim 57, Schweizer discloses A system of preparing an imaging data acquisition associated with a patient (Schweizer, fig.1, “[0002] The present embodiments relate to position planning for a recording system of a medical imaging device”) comprising:
at least one computer processor; an augmented reality display device; and an imaging system (Schweizer, “[0033] FIG. 1 shows one embodiment of a medical imaging device with a C-arm 1, to which an X-ray detector 2 and an X-ray source 3 are fastened. The X-ray source 3 may emit an X-ray beam 4 additionally shaped or shapeable by a collimator (not shown). The X-ray beam 4 penetrates a patient 5 supported on a patient couch 6. A position and/or a covering of the C-arm and a position of the patient 5 (e.g., in the form of a patient covering) are acquired, for example, by an acquisition system (e.g., a tracking system with a three-dimensional (3D) tracking camera 14). The recording system (e.g., the C-arm 1 with the X-ray source 3 and the X-ray detector 2) may be moved with respect to the patient 5 (e.g., may be rotated and translated). The imaging device is controlled by a system controller 13 that controls emission of the X-ray radiation and movements of the recording system (e.g., when instructed or automatically). In addition, the imaging device also has a calculating unit 17 (e.g., a calculator) and an operating unit with a display unit (e.g., a touch monitor 16 with a display 18). The imaging device may be used for carrying out a method of the present embodiments.”),
wherein the at least one computer processor is configured to obtain real-time tracking information of one or more components of the imaging system (Schweizer, “[0009] the current position of the patient, or of parts of the patient, and the position (e.g., the contour) of the imaging system (e.g., the mobile C-arm device), recording system (e.g., only X-ray source and X-ray detector), or information relating to the collimator system is acquired by suitable, known position-determining methods (e.g., tracking methods) once, continuously, or in predefined intervals. [0024] the device also includes a tracking system for acquiring current position information of the recording system and/or the imaging device and/or setting information of a collimator, and for acquiring current position information of the patient. [0045] The current position information of the recording system and/or the imaging device and/or the setting information of a collimator of the imaging device and/or the current position information of the patient may be determined once or may also be regularly or continuously updated”),
wherein the at least one computer processor is configured to generate a 3D representation of a surface, a volume or combination thereof, wherein the 3D representation of the surface, volume or combination thereof is at least in part derived from information about a geometry of the one or more components of the imaging system, information about a geometry of the image acquisition, information about one or more image acquisition parameters, or a combination thereof (Schweizer, “[0036] In act 24, a current intersection volume between the X-ray beam 4 and the patient 5/the patient covering, and/or a current field of view or a current recording volume is determined from the current course of the X-ray beam and the position of the patient/the patient covering. This may be carried out, for example, in that the acquired positions and data are forwarded to the control unit 13 of the imaging device. Consequently, the control unit knows the current position of the image system and patient/patient covering/organ covering relative to each other or may determine the current position”),
wherein the at least one computer processor is configured to generate an augmented view, the augmented view comprising the 3D representation of the surface, volume or combination thereof, wherein the at least one computer processor is configured to display, by the augmented reality display device, the augmented view at a defined position and orientation relative to the one or more components of the imaging system (Schweizer, “[0013] the 3D reconstruction volume (e.g., a cube) technically possible with the imaging device is superimposed (e.g., in the original size) on the patient or the patient covering. [0037] In act 25, the current intersection volume 7 and/or the current field of view or current recording volume is/are displayed as a virtual display element 8 (illustrated in FIG. 2 and enlarged in FIG. 3) on, for example, a display 18 of a touch monitor 16. These may be jointly displayed with the current patient covering or the current patient position to give the user an exact representation of reality. In the 3D case, for example, a currently recordable recording volume may be determined by the technically possible 3D reconstruction volume, or the one possible in relation to the position of the imaging device, and the patient covering of the patient 5”).
On the other hand, the above embodiment of Schweizer fails to explicitly disclose but another embodiment of Schweizer discloses wherein the position and orientation of the augmented view is updated based on the real time tracking information of the one or more components of the imaging system (Schweizer, figs. 4&5, “[0014] acquiring or receiving current position information of the recording system and/or the imaging device and/or setting information of a collimator of the imaging device; determining a current 3D reconstruction volume that may be recorded by the recording system during a current position; determining a current recording volume from the current 3D reconstruction volume and a patient covering; displaying the current recording volume as a virtual display element. [0037] In the 3D case, for example, a currently recordable recording volume may be determined by the technically possible 3D reconstruction volume, or the one possible in relation to the position of the imaging device, and the patient covering of the patient 5. [0045] The current position information of the recording system and/or the imaging device and/or the setting information of a collimator of the imaging device and/or the current position information of the patient may be determined once or may also be regularly or continuously updated”. Since the currently recordable recording volume (e.g., the virtual display element 8) is determined by the position of the imaging device, continuously updating the position information of the imaging device indicates updating in real time the position and orientation of the virtual display element).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have combined the above embodiments of Schweizer, to include all limitations of claim 57. That is, applying the continuously updated position information of the second embodiment to the first embodiment. The motivation/suggestion would have been that reliable functioning of the method may be provided in this way (Schweizer, [0018]).
Claim 85 is rejected under 35 U.S.C. 103 as being unpatentable over Nash et al. (US 20190038362) in view of Palushi et al. (US 20210121238), and further in view of MITCHELL (US 20220301265).
Regarding claim 85, Nash in view of Palushi discloses The system of claim 83.
On the other hand, Nash in view of Palushi fails to explicitly disclose but MITCHELL discloses wherein the wireless transmission is a multicast or broadcast transmission to the multiple head mounted displays (MITCHELL, “[0264] The transformation may be inputted into the glasses 700 via the I/O device 712, or wirelessly through the Wi-Fi microcontroller 716, and stored in the storage device 711”; claim 87, “wherein a plurality of users uses a respective plurality of headsets at the construction site and wherein the transformation is continually updated and broadcast simultaneously and wirelessly to the plurality of headsets”).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have combined MITCHELL into the combination of Nash and Palushi, to include all limitations of claim 85. That is, adding the broadcast of MITCHELL to the wireless transmission of Nash and Palushi. The motivation/suggestion would have been that the transformation can be broadcast simultaneously to all users, for example using the Wi-Fi microcontrollers in each set of glasses (MITCHELL, [0274]).
Claim 90 is rejected under 35 U.S.C. 103 as being unpatentable over Nash et al. (US 20190038362) in view of Palushi et al. (US 20210121238), and further in view of Wang (US 20160323569).
Regarding claim 90, Nash in view of Palushi discloses The system of claim 26.
On the other hand, Nash in view of Palushi fails to explicitly disclose but Wang discloses wherein the one or more computer processors of the second computing system generate the 3D stereoscopic view for an interpupillary distance adjusted for a user wearing the head mounted display (Wang, “[0002] The present invention relates to head-mounted 3D displays, and more particularly to a method for adjusting an interpupillary distance of a head-mounted 3D display, a system for adjusting an interpupillary distance of a head-mounted 3D display and a module for adjusting an interpupillary distance of a head-mounted 3D display. [0012] One embodiment of the present invention provides a module for adjusting an interpupillary distance of a head-mounted 3D display comprising an interpupillary distance scanning unit and a processor unit”).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have combined Wang into the combination of Nash and Palushi, to include all limitations of claim 90. That is, adding the interpupillary distance adjustment function of Wang to the headsets of Nash and Palushi. The motivation/suggestion would have been A head-mounted 3