DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Information Disclosure Statement
The information disclosure statements (IDS) submitted on August 9, 2024, January 13, 2025, January 17, 2025, April 2, 2025, April 17, 2025, May 21, 2025, October 29, 2025, November 13, 2025, and December 18, 2025 were filed on or after the filing date of the application, August 9, 2024. The submissions are in compliance with the provisions of 37 CFR 1.97. Accordingly, the information disclosure statements are being considered by the examiner.
Drawings
The drawings were received on August 9, 2025. These drawings are accepted.
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 1, 4-9, 12-16, 19, and 20 are rejected under 35 U.S.C. 103 as being unpatentable over Johnson et al. (US 2019/0254754) in view of Dorman (US 2022/0237817).
As to claim 1, Johnson et al. disclose a method for orienting a surgical tool at a desired three-dimensional insertion angle at a desired location within an environment for use in installing a medical device by using and displaying at least one graphical element ([0082] notes head mounted display (HMD) can be used to enable a surgeon to observe a real-world feature that aids with orienting and moving surgical tool(s), e.g. attached to a robotic surgical system, during a procedure), the method (e.g. process 1200 of Figure 12 (used for simplicity, but may encompass processes of Figures 13 and 14)) comprising: initiating a smart headset (e.g. augmented reality navigation system, e.g. also referred to as head mounted display (HMD) 100/600, [0093]) to be calibrated to the environment so that a position of the smart headset is known relative to the environment when the smart headset moves in the environment (e.g. Figure 9, [0098] and [0115] thru [0118] notes HMD 100/600 may include a motion sensor 604 such as an inertial motion unit (IMU), gyroscope, accelerometer, magnetometer, and/or tilt sensor that outputs a head motion signal indicating a measurement of movement or static orientation of the user’s head while wearing the HMD 100, e.g. a yaw movement and/or pitch movement, where Figure 11, [0132] further notes positioning of an HMD can include navigation coordinate system data determined from a location of real-world features attached to the HMD and/or inertial coordinate system data from a motion sensor attached to the HMD, the navigation coordinate system data and the inertial coordinate system data can be compensated for initial calibration and drift correction over time by a calibration module 912 and combined by a fusion module 914 to output combined HMD position data); receiving, by the smart headset from an electronic device communicatively coupled to the smart headset (e.g. receiving by navigation system 810 of HMD of Figure 11, where Figure 9 illustrates computer equipment 620 comprising at least general processor 626 and graphics processing unit (GPU) 638 in communication with HMD, where [0109] notes components of computer equipment 620 illustrated as separate from HMD, but some or all may reside within HMD, where these components are further in communication with each of a gesture sensor 602, motion sensor 604, and a plurality of detectors 610), environmental data indicating the position of the surgical tool within the environment (e.g. environmental data indicating the position of the surgical tool within the environment, e.g. as “tool coordinates”)(step 1202, receive detector input signal, step 1204, determine a relative position and/or orientation for one or more real-world features, where Figure 11, [0133] notes a relative positioning module 916 identifies the relative position and angular orientation of each of the tracked real-world features 904-910 and the combined HMD position data, the module 916 may perform coordinate transformations of relative coordinate systems of, e.g. 
a surgical table, a patient, a surgical tool (and/or pointer tool), and HMD 100 to a unified (common) coordinate system, module 916 outputs sight coordinates data, patient module coordinates data, and tool coordinates data to an augmentation graphics generator module 918 and/or a robotic surgical system, where tool coordinates data can be generated based on a position and/or orientation of real-world features identified from or attached to a surgical tool 908 transformed to the unified coordinate system, and [0142] notes the relative position and/or orientation of one or more real-world features are determined based on the detector input signal received by a computer subsystem from one or more detectors mounted to the HMD of the augmented reality navigation system, one or more detectors mounted to a HMD of a second augmented reality navigation system, and/or one or more auxiliary detectors positioned throughout a surgical environment); receiving, by the smart headset from the electronic device (e.g. receiving by navigation system 810 of HMD 100/600), the desired three-dimensional insertion angle (e.g. the desired insertion angle, e.g. position and/or orientation of surgical tool, e.g. via user input)(e.g. step 1206, [0142], generate and/or access a representation of at least a portion of a surgical tool connected to a robotic surgical system and/or a trajectory of the surgical tool, where [0143] notes trajectories for the surgical tool may be provided to the computer subsystem by a robotic surgical system, e.g. a surgeon may move a robotic surgical system to a desired position and orientation and save a corresponding trajectory representation that is then later provided to the augmented reality navigation system in generating and/or accessing step 1206, the representation of at least a portion of a surgical tool and/or trajectory that is displayed on a display screen in exemplary method 1200 may be selected using input from a surgeon, for example, using a gesture, motion, or signal input); generating, by the smart headset (e.g. generating, via augmentation graphics generator module 918 of HMD 100/600), at least one graphical element comprising visual indicia for orienting the surgical tool at the desired three-dimensional insertion angle at the desired location (e.g. a representation of a portion of the surgical tool at the desired location, e.g. as an augmentation graphics)(step 1206, [0142], generate and/or access a representation of at least a portion of a surgical tool connected to a robotic surgical system and/or a trajectory of the surgical tool, and step 1208, modify (e.g. scale, translate, and/or rotate) at least a portion of the representation based on the relative position and/or orientation of the one or more real-world features determined, [0143] further notes generating and/or accessing a representation may include a trajectory representation); and displaying, by the smart headset (e.g. displaying, via display screen 110/608 of HMD 100/600), the at least one graphical element superimposed within the environment (e.g. the augmentation graphics of the surgical tool superimposed within the environment)(e.g. step 1210, display the surgical tool augmentation graphics, where [0142] further notes augmentation graphics corresponding to the representation are rendered and displayed on a display screen of the HMD of the augmented reality navigation system, e.g. 
the surgical tool augmentation graphics may display a surgical tool (or a portion thereof) that is not physically present and/or display a position of the surgical tool and/or its trajectory over a period of time such that a surgeon may watch how a tool will move along a trajectory during at least a portion of the procedure, e.g. [0136] further notes augmentation graphics generator module 918 transforms a patient model of the bone to generate an augmentation graphical representation of the bone that is displayed in the display screen 110 as a graphical overlay that matches the orientation and size of the bone from the perspective of the surgeon as-if the surgeon could view the bone through intervening layers of tissue and/or organs, likewise, at least a portion of a surgical tool, surgical apparatus, and/or robotic surgical system (e.g. that is covered by a patient’s anatomy) can appear as a graphical overlay matching the orientation and size of the physical object to the surgeon using augmentation graphics, e.g. Figure 11, [0137] and [0138] further notes augmentation graphics 922 displayed overlaid over the patient’s leg or in a virtual display screen; steps 1212 and 1214 notes optionally displaying augmentation graphics with respect to a trajectory).
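As an illustrative aside (not part of the claim mapping above), the following is a minimal sketch of the kind of coordinate transformation into a unified (common) coordinate system that module 916 of Johnson et al. is described as performing; the frame names, numeric poses, and the use of 4x4 homogeneous transforms are assumptions made for illustration, not Johnson et al.'s implementation.

```python
import numpy as np

def pose(R, t):
    """Build a 4x4 homogeneous transform from a 3x3 rotation and a translation."""
    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = t
    return T

# Hypothetical poses: the tracker frame expressed in the HMD (unified) frame,
# and the tool frame expressed in the tracker frame.
T_hmd_tracker = pose(np.eye(3), np.array([0.10, 0.00, 0.50]))
T_tracker_tool = pose(np.eye(3), np.array([0.00, 0.20, 0.05]))

# Chain the transforms so the tool coordinates are expressed in the unified (HMD) frame.
T_hmd_tool = T_hmd_tracker @ T_tracker_tool
tool_tip_in_tool_frame = np.array([0.0, 0.0, 0.15, 1.0])   # homogeneous point on the tool
tool_tip_in_unified = T_hmd_tool @ tool_tip_in_tool_frame
print(tool_tip_in_unified[:3])                              # tool tip in the unified frame
```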
Johnson et al. differ from the invention defined in claim 1 in that Johnson et al. do not expressly disclose the insertion angle as a “three-dimensional insertion angle.”
Dorman discloses a method for orienting a surgical tool at a desired three-dimensional insertion angle at a desired location within an environment for use in installing a medical device by using and displaying at least one graphical element (process of Figure 9, further illustrated in Figures 10 and 11), the method comprising: receiving (Figure 8 illustrates augmented reality or virtual reality based system 700 comprising electronic computing device 702 in communication with augmented or virtual reality device 704, e.g. headset, where [0126] notes augmented reality or virtual reality based system 700 for use in assisting the determination of the proper insertion point and proper angle for a surgical tool to be used to install a pedicle screw described in reference to Figure 8)…the desired three-dimensional insertion angle (step 802, [0127] notes electronic computing device 702 simulating insertion point and orientation of simulated surgical hardware installation on diagnostic representation of bone, which includes steps 804-807, [0128], [0129], which are similar to steps 504-507 of Figure 5A, where [0096] notes once performing step 502 (including steps 504-507), a three-dimensional alignment angle may be calculated or determined, where [0135] further notes medical alignment device 300 may calculate a desired three-dimensional alignment angle based on inputs as described in connection with Figures 12 and 13, the apparatus 300 may be positioned relative to a tool to align the tool to the desired three-dimensional alignment angle); generating…at least one graphical element comprising visual indicia for orienting the surgical tool at the desired three-dimensional insertion angle at the desired location (e.g. generating a visual indicia for orienting the surgical tool at the desired three-dimensional alignment angle at the desired location)(step 803, [0127] notes using the augmented reality based electronic device to align an instrument for inserting a surgical hardware installation at a desired orientation through an insertion point of the bone by displaying visual indicia indicating the insertion point and the orientation of the simulated surgical hardware installation, where it is obvious that the visual indicia is generated in order to be displayed); and displaying, by the smart headset (augmented or virtual reality device 704, e.g. headset), the at least one graphical element superimposed within the environment (step 803, using the augmented reality based electronic device to align an instrument for inserting a surgical hardware installation at a desired orientation through an insertion point of the bone by displaying visual indicia indicating the insertion point and the orientation of the simulated surgical hardware installation, where Figure 11, [0130] notes the visual indicia may be superimposed over the bone itself, and may be a virtual representation of the tool 799 or may be an arrow, a line, or any other suitable visual representation, see also [0131], [0132]).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Johnson et al.’s method of orienting a surgical tool at a desired insertion angle with Dorman’s method of orienting a surgical tool at a desired three-dimensional insertion angle to provide more accurate alignment and insertion of the surgical tool, thereby avoiding unintended injuries or consequences in relation to medical procedures and/or surgeries (see Background of Dorman).
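For illustration only, a minimal sketch of one way a desired three-dimensional alignment angle could be derived from a planned insertion point and target point, in the spirit of Dorman's three-dimensional alignment angle; the elevation/azimuth decomposition and all numeric values are assumptions, not Dorman's disclosed calculation.

```python
import numpy as np

def insertion_angles(insertion_point, target_point):
    """Return a hypothetical pair of angles (degrees) describing the 3-D direction
    from an insertion point to a target point: elevation out of the horizontal
    (transverse) plane and azimuth within that plane."""
    d = np.asarray(target_point, float) - np.asarray(insertion_point, float)
    d /= np.linalg.norm(d)                           # unit direction of insertion
    elevation = np.degrees(np.arcsin(d[2]))          # angle out of the x-y plane
    azimuth = np.degrees(np.arctan2(d[1], d[0]))     # angle within the x-y plane
    return elevation, azimuth

# Example: planned entry at the origin, target 10 mm lateral, 5 mm posterior, 20 mm deep.
print(insertion_angles([0, 0, 0], [10, 5, 20]))
```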
As to claim 4, Johnson et al. modified with Dorman disclose the visual indicia comprises concentric circles indicating thresholds of the desired three-dimensional insertion angle of the surgical tool (modified with Dorman, Figures 14-17); the concentric circles comprise a first set of concentric circles at the desired location based on the desired three-dimensional insertion angle (modified with Dorman, e.g. first set of concentric circles 998) and a second set of concentric circles indicating a live orientation of the surgical tool (modified with Dorman, e.g. second set of concentric circles 999)(modified with Dorman, Figures 10 and 11, [0130] notes during surgery, virtual reality based or augmented reality based device 704 is worn by the operating physician, which is used to align an instrument or tool 701 for inserting a surgical hardware installation at a desired orientation through an insertion point of the bone by displaying visual indicia indicating the insertion point and the orientation of the simulated surgical hardware installation, the visual indicia can be shown superimposed over the bone itself, where Figures 14-17, [0136] thru [0145] notes illustration of a graphical indicator or notification of a series of two sets of concentric circles, e.g. first set of concentric circles 998 and second set of concentric circles 999, showing how the current position of the apparatus 300 is oriented relative to the desired alignment angle, as the orientation of the apparatus 300 is moved or aligned more closely to the desired three-dimensional alignment angle, or within a specified threshold, e.g. as illustrated in Figures 15 and 16, the concentric circles are moved closer to one another providing a graphical indication or feedback to assist a user or surgeon to align the apparatus, and hence an attached or adjacent tool, to the desired alignment angle); the electronic device is calibrated to the surgical tool to indicate the live orientation of the surgical tool (e.g. as noted in claim 1, Johnson, positioning of an HMD can include navigation coordinate system data determined from a location of real-world features attached to the HMD and/or inertial coordinate system data from a motion sensor attached to the HMD, the navigation coordinate system data and the inertial coordinate system data can be compensated for initial calibration and drift correction over time by a calibration module 912); and the environmental data comprises orientation data of the surgical tool (e.g. as noted in claim 1, Johnson, real-world features include position and/or orientation of surgical tool), and wherein the smart headset continually receives the environmental data from the electronic device in real-time (Johnson, [0081] notes the HMD configured to provide localized, real-time situational awareness to the wearer, where [0133] notes a registration is updated, e.g. continuously (e.g. at a certain refresh rate), to account for movements of an HMD throughout a surgical procedure that cause the position of the origin of the unified coordinate system in the physical space of a surgical environment to be changed).
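As an illustrative aside, a minimal sketch of the kind of feedback Dorman's two sets of concentric circles provide, in which the angular error between the current tool axis and the desired alignment angle is mapped to a screen offset that shrinks as the tool approaches alignment; the projection, gain, and threshold values are assumptions, not Dorman's implementation.

```python
import numpy as np

def circle_offset(current_axis, desired_axis, gain=100.0, threshold_deg=1.0):
    """Project the current tool axis onto the plane perpendicular to the desired
    axis and scale it into a 2-D screen offset for the second set of circles;
    the gain and threshold are illustrative values only."""
    c = np.asarray(current_axis, float); c /= np.linalg.norm(c)
    d = np.asarray(desired_axis, float); d /= np.linalg.norm(d)
    error = c - d * np.dot(c, d)                     # component of c off the desired axis
    angle_deg = np.degrees(np.arccos(np.clip(np.dot(c, d), -1.0, 1.0)))
    aligned = angle_deg <= threshold_deg             # circles drawn concentric when within threshold
    return gain * error[:2], aligned

offset, aligned = circle_offset([0.05, 0.02, 1.0], [0.0, 0.0, 1.0])
print(offset, aligned)                               # nonzero offset, not yet aligned
```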
As to claim 5, Johnson et al. modified with Dorman disclose in response to continually receiving the environmental data, automatically updating, by the smart headset in real-time (Johnson, e.g. as noted in claim 4, continuously updating as the physical environment changes), the at least one graphical element superimposed within the environment (e.g. as noted in claim 1, Johnson, the augmentation graphics displayed overlaid over the patient or in a virtual display screen; modified with Dorman, visual indicia superimposed over bone), wherein the smart headset comprises a gyroscope (Johnson, e.g. as noted in claim 1, motion sensor 604 may be a gyroscope), and wherein generating and displaying the at least one graphical element is based on continually collecting, by the gyroscope in real-time, orientation data of the smart headset (Johnson, [0011] notes the augmented reality navigation system comprises a motion sensor (e.g. an inertial motion unit (IMU) (by way of example, but noted above the motion sensor may include a gyroscope)) connected to the head mounted display for outputting a motion signal based on measured motion of the head mounted display and wherein the instructions cause the processor to: update, by the processor, the relative position and orientation of the determined real-world features in the detector input signal based on motion detected by the motion sensor, where as noted in claims 1 and 4, relative positioning module 916 identifies the relative position and angular orientation of each of tracked real-world features and the combined HMD position data, where a spatial position of an HMD (e.g. a position on the HMD, such as a position of a detector mounted to the HMD) is taken to be an origin of a unified coordinate system and additional spatial coordinates determined from various real-world features are registered to the HMD using the unified coordinate system with that position as the origin, accordingly, a registration is updated, e.g. continuously (e.g., at a certain refresh rate), to account for movements of an HMD throughout a surgical procedure that cause the position of the origin of the unified coordinate system in the physical space of a surgical environment to be changed, where movement of an HMD may be determined by a motion sensor or a change in a fixed position real-world feature (e.g. a real-world feature identified by or attached to a surgical table)); and in response to continually collecting the orientation data of the smart headset, automatically updating, by the smart headset in real-time, the at least one graphical element superimposed within the environment (Johnson, e.g. [0011] notes the augmented reality navigation system comprises a motion sensor (e.g. an inertial motion unit (IMU) (by way of example, but noted above the motion sensor may include a gyroscope)) connected to the head mounted display for outputting a motion signal based on measured motion of the head mounted display and wherein the instructions cause the processor to: update, by the processor, the relative position and orientation of the determined real-world features in the detector input signal based on motion detected by the motion sensor; and update, by the processor, the surgical tool augmentation graphics based on the updated relative position and orientation).
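For illustration only, a toy sketch of continually updating an overlay pose from gyroscope measurements at a fixed refresh rate, consistent in spirit with Johnson et al.'s motion-sensor-based registration updates; the simple yaw/pitch integration, the rates, and the 60 Hz refresh rate are assumptions, not the reference's algorithm.

```python
def update_overlay_yaw_pitch(yaw, pitch, gyro_rates_dps, dt):
    """Integrate hypothetical gyroscope rates (deg/s) over one refresh interval to
    update the head orientation used to re-project the superimposed graphics."""
    yaw += gyro_rates_dps[0] * dt
    pitch += gyro_rates_dps[1] * dt
    return yaw, pitch

yaw, pitch = 0.0, 0.0
for _ in range(60):                                  # e.g. one second at an assumed 60 Hz refresh rate
    yaw, pitch = update_overlay_yaw_pitch(yaw, pitch, gyro_rates_dps=(1.5, -0.5), dt=1 / 60)
print(round(yaw, 3), round(pitch, 3))                # accumulated head rotation used for re-rendering
```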
As to claim 6, Johnson et al. modified with Dorman disclose capturing, by an input device of the smart headset, additional environmental data of the environment (Johnson, e.g. as noted in claim 1, HMD may include various detectors), wherein the input device is at least one of a camera, sensor, or internet of things (IoT) device (Johnson, [0114] notes HMD 600 may include a detector (e.g. optical camera) 610 or a plurality of detectors 610, facing away from the wearer that outputs video and/or other images), wherein the additional environmental data comprises orientation data of a portion of a body (Johnson, [0142] notes the detector input signal (e.g. as in step 1202 of Figure 12) represents the field of view of one or more detectors that comprises at least a portion of a patient anatomy (e.g. relevant to a surgical procedure to be performed), the detector input signal may be received by a computer subsystem from one or more detectors mounted to the HMD of the augmented reality navigation system, one or more detectors mounted to an HMD of a second augmented reality navigation system, and/or one or more auxiliary detectors positioned throughout a surgical environment), wherein the orientation data of the portion of the body indicates at least one of an axial plane, coronal plane, or a sagittal plane associated with anatomy of the portion of the body (modified with Dorman, Figure 1, [0079] notes a sagittal or median plane 110, a frontal or coronal plane 120, and a horizontal or transverse plane 130 relative to the patient’s body part 100 located at the intersection of the sagittal plane 110, the coronal plane 120, and the transverse plane 130); and determining, by the smart headset, an orientation of the portion of the body within the environment based on inputting the orientation data into a machine learning algorithm and receiving an output prediction indicating the orientation of the portion of the body within the environment (modified with Dorman, [0164] notes machine learning based or artificial intelligence-based system may be trained on data from historical placement of medical alignment devices, captured via the orientation systems and methods discussed above, and may automatically identify and suggest optimal or desired placement positions and orientations, where Figure 24, [0165] notes system 2400 for calculating and determining optimal or desired placement of posterior fixation placement, where apparatus such as apparatus 300 and 702, further illustrated in Figure 8 to be in communication with augmented reality or virtual reality device, e.g. headset 704, where Figure 27 further describes the method 2700 for processing captured images for calculating and determining optimal placement of posterior fixation placement, e.g. step 2714, [0183] notes the neural network may process the image, e.g. received at step 2702, to identify a suggested position and orientation for the pedicle screw, and the computing device may provide an overlay to the image showing the suggested placement).
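As an illustrative aside, a minimal sketch of training a generic regressor on historical placement data and predicting a suggested placement orientation, loosely analogous to the machine-learning-based suggestion described in Dorman [0164]-[0183]; the scikit-learn model choice, the feature encoding, and the synthetic data are assumptions for illustration only, not Dorman's trained network.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

# Hypothetical training data: features derived from imaging/orientation measurements,
# targets are angles from historical placements; both are synthetic here.
rng = np.random.default_rng(0)
X_train = rng.normal(size=(200, 6))                  # e.g. encoded anatomical measurements
y_train = rng.normal(size=(200, 2))                  # e.g. (elevation, azimuth) of past placements

model = RandomForestRegressor(n_estimators=50, random_state=0).fit(X_train, y_train)
suggested_angles = model.predict(rng.normal(size=(1, 6)))   # suggested placement for a new case
print(suggested_angles)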
As to claim 7, Johnson et al. modified with Dorman disclose generating the at least one graphical element comprising the visual indicia for orienting the surgical tool at the desired location is further based on the orientation of the portion of the body within the environment (e.g. as noted in claim 1, Johnson, generating the augmentation graphics based on the position and/or orientation of real-world features, e.g. at least a surgical tool as detected via detector input signal; modified with Dorman, visual indicia as virtual representation of tool); and wherein the method further comprising: generating, by the smart headset, visual indicator elements indicating the orientation of the portion of the body within the environment (e.g. as noted in claim 1, Johnson, the augmentation graphics is generated based on the position and/or orientation of real-world features, further including a patient; modified with Dorman, the visual indicia indicates the orientation of the body, over which the visual indicia is superimposed); and displaying, by the smart headset, the visual indicator elements superimposed within the environment (e.g. as noted in claim 1, Johnson, augmentation graphics displayed overlaid over the patient or in a virtual display screen; modified with Dorman, visual indicia superimposed over portion of patient, e.g. bone).
As to claim 8, Johnson et al. modified with Dorman disclose the surgical tool is one of a gear shift probe, a pedicle probe, a Jamshidi needle, an awl, a tap, a screw inserter, a drill, or a syringe (Johnson, [0053] notes pointer tool which may be inserted into a robotic surgical system, where a surgical tool may be used as a pointer tool, e.g. a drill bit, a drill guide, a tool guide, an awl, or similar surgical tool may be used as a pointer tool; modified with Dorman, [0118] notes tool such as gear shift probe, drill, and the like), and wherein the environmental data comprises planning data for performing an operation at the desired location using the surgical tool (Johnson, e.g. as noted in claim 1, determine a relative position and/or orientation for one or more real-world features, e.g. surgical table, patient, and/or surgical tool; modified with Dorman, [0124] notes planning of the insertion point or pilot hole and the proper angle for the surgical tool may be conducted with the aid of the virtual reality device), and wherein the method further comprising: receiving and storing, by the smart headset, diagnostic images of a portion of a body (Johnson, [0124] notes computer subsystem 820 uses patient data from imaging equipment 830 to generate a two dimensional (2D) or three dimensional (3D) model, imaging equipment 830 may include x-ray equipment, endoscope cameras, magnetic resonance imaging equipment, computed tomography scanning equipment, three-dimensional ultrasound equipment, endoscopic equipment, and/or computer modeling equipment which can generate a multi-dimensional model of a targeted site of a patient, the patient data can include real-time feeds and/or earlier stored data from imaging equipment 830, and may include an anatomical database specific for the particular patient or more generally for humans; modified with Dorman, [0081] notes obtaining diagnostic images, such as from CT scans, MRI scans, X-rays, and the like of items of interest, such as a vertebra), wherein generating the at least one graphical element is further based on the diagnostic images of the portion of the body ([0124] notes computer subsystem 820 uses patient data from imaging equipment 830 to generate a two dimensional (2D) or three dimensional (3D) model, [0125] notes the computer subsystem 820 can use (i) present locations of HMD 100, surgical site 804, and surgical tool 800 and/or surgical apparatus 802 obtained by position tracking system 810 by detecting one or more real-world features and (ii) the real-world feature representations contained in a patient model to transform the patient model to a present perspective view of a wearer of HMD 100, [0126] further notes computer subsystem 820 may display augmentation graphics representing a patient model, a surgical tool and/or a surgical apparatus on a display screen 110 of an HMD 100, [0129] further notes computer subsystem 820 may receive other data and video streams from a patient database and other electronic equipment, which can be selectively displayed on one or more display screens of HMD 100 using augmentation graphics; modified with Dorman, Figure 9, as part of step 802, further at step 804, [0128] notes acquiring a diagnostic representation of the bone, e.g. image from CT scan or MRI scan, then further performing steps 805-807 based on the acquired diagnostic representation of the bone, e.g. 
step 805, aligning diagnostic representation of bone with reference point, step 806, designating insertion point of simulated surgical hardware installation on diagnostic representation of bone, and step 807, designating orientation of simulated surgical hardware installation on diagnostic representation of bone relative to reference point, then in step 803, using the augmented reality based electronic device to align an instrument for inserting a surgical hardware installation at a desired orientation through an insertion point of the bone by displaying visual indicia indicating the insertion point and the orientation of the simulated surgical hardware installation).
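For illustration only, a toy sketch of projecting a point of a patient model into HMD screen coordinates so that an overlay can match the wearer's present perspective, in the spirit of the patient-model transform described in Johnson et al. [0125]-[0126]; the pinhole camera model and intrinsic values are assumptions, not the reference's rendering pipeline.

```python
import numpy as np

def project_to_screen(point_world, T_world_to_hmd, fx=500.0, fy=500.0, cx=320.0, cy=240.0):
    """Project a patient-model point (world coordinates) into hypothetical HMD
    screen coordinates using a simple pinhole model; intrinsics are made-up values."""
    p = T_world_to_hmd @ np.append(np.asarray(point_world, float), 1.0)
    x, y, z = p[:3]
    return (fx * x / z + cx, fy * y / z + cy)

T = np.eye(4)
T[2, 3] = 1.0                                        # model placed 1 m in front of the display
print(project_to_screen([0.05, -0.02, 0.0], T))      # where the overlay pixel would be drawn
```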
As to claim 9, Johnson et al. modified with Dorman disclose a method for orienting a surgical tool at a desired three-dimensional insertion angle at a desired location within an environment for use in installing a medical device by using and displaying at least one graphical element (e.g. as noted in claim 1, Johnson, Figure 12; modified with Dorman, Figure 9), the method comprising: determining, by one or more processors (Johnson, Figure 9 illustrates computer equipment 620 comprising at least general processor 626, graphics processing unit (GPU) 638, where [0109] notes components of computer equipment 620 illustrated as separate from HMD, but some or all may reside within HMD, where these components are further in communication with each of a gesture sensor 602, motion sensor 604, and a plurality of detectors 610), the desired three-dimensional insertion angle of the surgical tool based on an orientation of the surgical tool (e.g. as noted in claim 1, Johnson, detecting input signal, which may include a trajectory of a desired position and/or orientation; modified with Dorman, determining the three-dimensional insertion angle); collecting, by the one or more processors, environmental data of the surgical tool within the environment (e.g. as noted in claim 1, Johnson, collecting relative position and orientation of real world features, including surgical tool); generating, by the one or more processors, at least one graphical element comprising visual indicia for orienting the surgical tool at the desired location based on the desired three-dimensional insertion angle (e.g. as noted in claim 1, Johnson, generating augmentation graphics; modified with Dorman, generating visual indicia); and displaying, by the one or more processors, the at least one graphical element superimposed within the environment on a smart headset communicatively coupled to the one or more processors (e.g. as noted in claim 1, Johnson, displaying augmentation graphics; modified with Dorman, displaying visual indicia). Claim 9 is similar in scope to claim 1. Please see the rejection and rationale of claim 1 above.
Claims 12-14 are similar in scope to claims 4-6, respectively, and are therefore rejected under similar rationale.
Claim 15 is similar in scope to claims 7 and 8 combined, and is therefore rejected under similar rationale.
As to claim 16, Johnson et al. modified with Dorman disclose a smart headset for orienting a tool at a desired location within an environment (e.g. as noted in claim 1, Johnson, HMD 100/600; modified with Dorman, augmented reality or virtual reality device 704, e.g. headset), the smart headset comprises: a transparent or opaque display (Johnson, Figure 1, [0093] notes HMD includes a semi-transparent display screen 110, also illustrated as display screen 608 of Figure 9); a plurality of sensor devices (Johnson, Figures 1 and 9, [0115], [0116], [0119] thru [0121] notes HMD may include at least a gesture sensor 602 and/or motion sensor 604, [0114] notes a plurality of detectors 610); and one or more processors (Johnson, Figure 9, e.g. at least general processor 626, graphics processing unit (GPU) 638, and display module 606, where [0109] notes components of computer equipment 620 illustrated as separate from HMD, but some or all may reside within HMD, where these components are further in communication with each of a gesture sensor 602, motion sensor 604, and a plurality of detectors 610) configured to: initiate the smart headset to be calibrated to the environment so that the smart headset knows its position relative to the environment when the smart headset moves in the environment (e.g. as noted in claim 1, Johnson, initiating HMD to be calibrated); collect, via the plurality of sensor devices, environmental data of a surgical tool within the environment using physical elements, fiducial elements, or geometric shapes of the surgical tool that is located at the desired location (e.g. as noted in claim 1, Johnson, collecting relative position and orientation of real world features, including surgical tool via one or more detectors); calculate an orientation of the surgical tool based on collecting the physical elements, fiducial markers, or geometric shapes of the surgical tool (Johnson, [0054] notes real-world features may further include fiducials, which may be attached to, e.g. surgical equipment (e.g., an operating table), a patient, a surgical tool, an implant, a robotic surgical system, or an augmented reality navigation system (e.g., on the head mounted display), where a fiducial may comprise a plurality of markers to assist in orienting the fiducial in the environment during navigation (e.g., tracking, e.g. a plurality of spatially separated markers attached to a rigid holding apparatus that is attachable to an object, wherein each of the markers is configured to be detected by a detector disposed on the head mounted display (e.g., emit or alter an electromagnetic field for an EMF detector or have a certain reflectivity for an optical detector)), where the real-world features are used to determine position and orientation of objects in a surgical environment); receive a desired three-dimensional insertion angle (e.g. as noted in claim 1, Johnson, modified with Dorman, receiving a desired three-dimensional insertion angle); determine a position of the desired three-dimensional insertion angle at the desired location (e.g. as noted in claim 1, Johnson, modified with Dorman, determining position and/or orientation at the desired three-dimensional insertion angle); generate at least one graphical element comprising visual indicia for orienting the surgical tool at the desired three-dimensional insertion angle at the desired location (e.g. 
as noted in claim 1, Johnson, generating augmentation graphics; modified with Dorman, generating visual indicia); and display, via the transparent or opaque display, the at least one graphical element superimposed within the environment (e.g. as noted in claim 1, Johnson, displaying augmentation graphics; modified with Dorman, displaying visual indicia). Claim 16 is similar in scope to claim 1. Please see the rejection and rationale of claim 1 above.
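As an illustrative aside, a minimal sketch of estimating a tool's orientation from detected fiducial marker positions using a Kabsch-style fit, which is one conventional way to realize the fiducial-based orientation determination cited from Johnson et al. [0054]; the marker layout and the choice of algorithm are assumptions, not the reference's disclosed method.

```python
import numpy as np

def estimate_orientation(model_markers, detected_markers):
    """Kabsch-style estimate of the rotation aligning a tool's known marker layout
    (model_markers) with marker positions detected in the environment; a simplified
    sketch of fiducial-based pose estimation."""
    P = np.asarray(model_markers, float) - np.mean(model_markers, axis=0)
    Q = np.asarray(detected_markers, float) - np.mean(detected_markers, axis=0)
    U, _, Vt = np.linalg.svd(P.T @ Q)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(U @ Vt))])  # guard against reflections
    return (U @ D @ Vt).T                                     # rotation mapping model -> detected

# Synthetic check: markers rotated 30 degrees about z should be recovered as that rotation.
model = [[0, 0, 0], [1, 0, 0], [0, 1, 0], [0, 0, 1]]
theta = np.radians(30)
Rz = np.array([[np.cos(theta), -np.sin(theta), 0], [np.sin(theta), np.cos(theta), 0], [0, 0, 1]])
detected = (np.asarray(model) @ Rz.T).tolist()
print(np.round(estimate_orientation(model, detected), 3))
```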
Claims 19 and 20 are similar in scope to claims 4 and 5, respectively, and are therefore rejected under similar rationale.
Claims 2, 3, 10, 11, 17, and 18 are rejected under 35 U.S.C. 103 as being unpatentable over Johnson et al. (US 2019/0254754) in view of Dorman (US 2022/0237817) as applied to claims 1, 9, and 16 above, and further in view of Jones et al. (US 10,650,594).
As to claim 2, Johnson et al. modified with Dorman disclose the visual indicia comprises a virtual tool for orienting the surgical tool at the desired location and the desired three-dimensional insertion angle (e.g. as noted in claim 1, Johnson, augmentation graphics may display a surgical tool (or a portion thereof) that is not physically present and/or display a position of the surgical tool, thus a “virtual tool” (not physical); modified with Dorman, visual indicia may be a virtual representation of the tool), and wherein the visual indicia further comprise a three-dimensional vector comprising a guideline indicating a trajectory of the virtual tool (e.g. as noted in claim 1, Johnson, augmentation graphics may further display a surgical tool’s trajectory over a period of time such that a surgeon may watch how a tool will move along a trajectory during at least a portion of the procedure, thus comprising a “guideline;” modified with Dorman, visual indicia may be virtual representation of the tool 799 or may be an arrow, a line, or any other suitable visual representation, thus comprising a “guideline”), but do not disclose, whereas Jones et al. disclose, wherein the method further comprises: generating, by the smart headset, interactive elements for interacting with the smart headset (column 24, lines 29-49 notes a plurality of visualization modes which a user can select among using head movement, hand gestures, voice commands, etc., the visualization modes control one or more of the following to be displayed: 1) bone; 2) skin; 3) muscle; 4) organ; 5) vessel; 6) virtual tool trajectory; and 7) cross sectional slice; visualizations may also cause the system to graphically render anatomical structure of the patient for display as wireframes, polygons, or smoothed surfaces, and which can be selectively displayed in monochrome or false colors); and displaying, by the smart headset, the interactive elements superimposed within the environment (see Figures 18-23, column 24, line 50 through column 26, line 29).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to further modify Johnson et al. as modified with Dorman, which generate and display graphical elements as visual indicia, with Jones et al.’s method of generating and displaying interactive elements so that a user may have more control as desired, thus enhancing the functionality of the system (see column 24, lines 29-49 of Jones et al.).
As to claim 3, Johnson et al. modified with Dorman and Jones et al. disclose receiving, by an input device of the smart headset, an instruction from an individual operating the smart headset (Johnson, Figure 9, [0119] thru [0121] notes HMD 600 may include a gesture sensor 602 to sense a gesture made by a user, e.g. as a gesture-based command, and detector 610 or another camera directed toward one of the user’s eyes to identify blinking or movements of the eye to generate a command from the user to control what is displayed, and [0122] and [0123] notes HMD 600 may include a microphone to receive voice commands from a user; modified with Jones, Figure 9, column 10, lines 10-67 notes HMD 600 may include a gesture sensor to sense a gesture made by a user, e.g. as a gesture-based command, and detector 610 or another camera directed toward one of the user’s eyes to identify blinking or movements of the eye to generate a command from the user to control what is displayed, and column 11, lines 1-8 notes HMD 600 may include a microphone to receive voice commands from a user); locking, by the smart headset, the virtual tool superimposed within the environment, wherein the virtual tool is stationary at the desired location and the desired three-dimensional insertion angle as the smart headset changes positions within the environment (further modified with Jones, e.g. Figures 18-21, e.g. column 23, lines 51-56 notes graphical images generated on HMD 100 that shows at least a virtual trajectory 1312 of a drill bit 1310 extending from a drill or other surgical tool (e.g. “virtual tool”) into a patient’s anatomical bone model 1302 and other selected intervening structure, and which is overlaid at a patient site 1300, where column 13, lines 28-63 notes display screen 608 may be controlled to display one or more virtual display panels at a time, where the user may input a command, e.g. “lock,” which causes whichever virtual display panel that is presently most closely spaced to the user’s line-of-sight to be displayed full screen and held statically in-place not responding to head movement, the virtual display panel may remain statically locked as-displayed until the user deactivates the command, via, e.g. another command, e.g. “unlock,” where it may be considered a “virtual display panel” may illustrate the graphical images as shown in Figures 18-21, comprising the virtual tool); and wherein the instruction from the individual is at least one of an eye movement, a gesture, an auditory pattern, a movement pattern, haptic feedback, a biometric input, intangible feedback, or a preconfigured interaction (Johnson, modified with Dorman, further modified with Jones, e.g. as noted above, at least eye movements, gestures, and voice (auditory) patterns).
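For illustration only, a toy model of the "lock"/"unlock" behavior cited from Jones et al., where a locked virtual display panel (or virtual tool overlay) stops following head movement until unlocked; the class structure and the yaw/pitch pose representation are assumptions made for illustration, not the reference's implementation.

```python
class VirtualPanel:
    """Toy model of the lock/unlock commands described above: while locked, the
    panel's displayed pose ignores head movement; only the command names come
    from the cited passage."""
    def __init__(self):
        self.locked = False
        self.pose = (0.0, 0.0)            # (yaw, pitch) at which the panel is drawn

    def on_command(self, command):
        if command == "lock":
            self.locked = True
        elif command == "unlock":
            self.locked = False

    def on_head_move(self, yaw, pitch):
        if not self.locked:               # a locked panel stays statically in place
            self.pose = (yaw, pitch)
        return self.pose

panel = VirtualPanel()
panel.on_head_move(10.0, 0.0)
panel.on_command("lock")
print(panel.on_head_move(25.0, 5.0))      # pose unchanged while locked -> (10.0, 0.0)
```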
Claims 10 and 17 are similar in scope to claim 2, and are therefore rejected under similar rationale, where Johnson et al. modified with Dorman and Jones et al. further disclose the one or more processors are enclosed within the smart headset (see claims 9 and 16).
Claims 11 and 18 are similar in scope to claim 3, and are therefore rejected under similar rationale.
Conclusion
The prior art made of record and not relied upon is considered pertinent to applicant's disclosure.
Amanatullah (US 2019/0231432) discloses a system and method for augmenting a surgical field with virtual guidance and tracking and adapting to deviation from a surgical plan.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to JACINTA M CRAWFORD whose telephone number is (571)270-1539. The examiner can normally be reached 8:30 a.m. to 4:30 p.m.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, King Y. Poon, can be reached at (571)272-7440. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/JACINTA M CRAWFORD/Primary Examiner, Art Unit 2617