DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Claim Rejections - 35 USC § 101
35 U.S.C. 101 reads as follows:
Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.
Claims 1, 2, 4-9, 11-14, and 16-20 are rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea (mental processes) without significantly more. Claim 1 recites:
An apparatus, comprising: (this falls within the statutory categories of invention)
at least one processor configured to: (generic computer components being invoked merely as tools to carry out the claimed tasks in a manner equivalent to mere instructions to apply an exception as per MPEP 2106.05(f))
obtain images of a medical environment; (insignificant extra-solution activity in the form of mere data gathering as per MPEP 2106.05(g). Note that no detail is provided on how the images are captured or what the process is for obtaining the images, rendering this step only tangentially related to the invention)
determine, based on all or a first subset of the images, a patient model that indicates a location and a shape of an anatomical structure of a patient in the medical environment; (a person can do this mentally by (1) observing the images, and (2) evaluating and judging them to imagine the relative positions of structures, tools, and the patient in the environment. Notably, this is how surgery has historically been accomplished prior to the availability of technical aids: the surgeon must mentally evaluate the surgical environment and the patient in order to formulate the next steps of the surgery)
determine, based on all or a second subset of the images, an environment model that indicates a three-dimensional (3D) spatial layout of the medical environment; and (as above, a surgeon accomplishes this mentally by observing the images and then using their imagination to evaluate the positions of tools. For example, a surgeon who has memorized the surgical setup they are operating in can grasp a tool from a tray while working on a patient without needing to shift their gaze from the patient, and the mental spatial recognition the surgeon performs to accomplish this falls within the scope of the claim here)
devise, based on the patient model and the environment model, a surgical plan associated with the patient, (a surgeon would do this by observing the images, evaluating them with their professional expertise, and mentally judging the optimal surgical plan to follow prior to carrying it out)
wherein the surgical plan indicates at least a movement path of a medical device towards the anatomical structure of the patient. (This would include, for example, a surgeon evaluating the best angle at which to use a scalpel when making an incision based on the position of the patient and the locations of the surrounding medical benches and tools. This is well within the capabilities of a person to conduct mentally)
This judicial exception is not integrated into a practical application. In particular, the claim only recites the following additional elements: 1) mere instructions to apply the exception using generic computer components (the processor), 2) generally linking the use of the exception to the technical field of surgery, and 3) insignificant extra-solution activity in the form of mere data gathering (obtaining images). The processor is recited at a high level of generality (i.e., as a generic processor performing the generic computer function of executing instructions and storing data) such that it amounts to no more than mere instructions to apply the exception using a generic computer component. Accordingly, this additional element does not integrate the abstract idea into a practical application because it does not impose any meaningful limits on practicing the abstract idea. Limitations that amount to merely indicating a field of use or technological environment in which to apply a judicial exception cannot integrate a judicial exception into a practical application. The data-gathering step (obtaining images) is only tangentially linked to the invention and does not meaningfully limit the claim. The claim is directed to an abstract idea.
The claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception. As discussed above with respect to integration of the abstract idea into a practical application, the additional element of using a processor to perform the claimed steps amounts to no more than mere instructions to apply the exception using a generic computer component. Mere instructions to apply an exception using a generic computer component cannot provide an inventive concept. Limitations that amount to merely indicating a field of use or technological environment in which to apply a judicial exception do not amount to significantly more than the exception itself. The addition of insignificant extra-solution activity does not amount to an inventive concept. The claim is not patent eligible.
Claim 2 recites that body shape and pose are modeled, and a surgeon can easily consider these mentally as outlined above.
Claim 3 is eligible, as a person cannot mentally manipulate a 3D mesh that conveys body pose and shape; this is too complex for a person to reasonably accomplish mentally or even with the aid of pencil and paper.
Claim 4 notes that the structure is an organ, which a person can easily consider mentally as outlined above.
Claim 5 requires that locations of objects and people are modeled, which a person can easily consider mentally as outlined above (contours are recited in the alternative, though they also could be considered mentally).
Claim 6 requires a patient and medical professional be modeled and distinguished, which a person can easily consider mentally as outlined above.
Claim 7 requires “respective 3D representations” of the objects be included in the model, which a person can easily consider mentally as outlined above (a surgeon imagining a scalpel’s position relative to a patient’s body is mentally manipulating at least two 3D representations of objects).
Claim 8 requires that the model be determined “based on a machine-learning model”. This amounts to mere instructions to apply an exception with a generic machine-learning model, as per MPEP 2106.05(f). No details are given as to how the machine-learning model operates or is used, meaning the claim fails to recite details of how a solution to a problem is accomplished.
Claim 9 recites presenting the plan on a display device, but no detail is given as to how this display is accomplished or what it looks like, meaning this amounts merely to insignificant extra-solution activity in the form of selecting a particular data source or type of data to be manipulated as per MPEP 2106.05(g), notably similar to example iii. “Selecting information, based on types of information and availability of information [[…]], for collection, analysis and display”. It further recites receiving feedback regarding the plan, which (assuming the feedback is from a user) is merely insignificant extra-solution activity in the form of mere data gathering in a manner similar to the previously recited step of obtaining images. Finally, it recites modifying the surgical plan based on the feedback, which a person can easily accomplish mentally by updating their plans after evaluating new information received as feedback.
Claim 10 is eligible as it provides details on how the display is accomplished.
Claim 11 recites further details related to the insignificant extra-solution activity of obtaining images (gathering more images), and that the models are updated based on the new images, which a person can easily accomplish mentally by observation and evaluation.
Claim 12 requires that the images include depth images that indicate distances, but this, as above, is merely further detail related to the insignificant extra-solution activity of obtaining images (gathering more images).
Claim 13 recites that the medical device includes a surgical robotic arm. This limitation merely labels which tool has its movement path indicated in the surgical plan, and a surgeon can easily plan the motion of a robotic arm mentally (even if the surgeon cannot execute that movement mentally).
Claims 14 and 16-20 are substantially similar to claims 1, 5, 6, 7, 9, and 11 respectively, and are rejected for the same reasons provided above for those claims.
Claim Rejections - 35 USC § 102
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:
A person shall be entitled to a patent unless –
(a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.
Claims 1-20 are rejected under 35 U.S.C. 102(a)(1) as being anticipated by Steines (US 20220287676 A1).
Regarding Claim 1, Steines teaches:
at least one processor configured to: (¶4 comprising one or more computer processors)
obtain images of a medical environment; (Figs. 6A and 6B; ¶50 imaging data acquisition associated with a patient comprising … obtain real-time tracking information of one or more components of the imaging system)
determine, based on all or a first subset of the images, a patient model that indicates a location and a shape of an anatomical structure of a patient in the medical environment; (¶207 3D models of the patient; ¶52 the imaging system is configured to acquire 2D, 3D, or 2D and 3D imaging data of the patient within the 3D representation of the surface, volume or combination thereof; ¶447 the 3D stereoscopic view or augmented view of the 3D representation 1700 of the surface or volume ... superimposed, by the HMD or other augmented reality display device, onto the patient (prior to the actual imaging data acquisition) can move in relation with the ... tracked one or more components of the imaging system 1660 1670, tracked patient table 1690, and/or, optionally, the tracked anatomic target structure, such as a spine 1710 (e.g. optionally with one or more attached markers or fiducials or fiducial arrays, not shown).)
determine, based on all or a second subset of the images, an environment model that indicates a three-dimensional (3D) spatial layout of the medical environment; and (¶207 3D models or 3D or 2D graphical representations of tools or instruments; ¶52 the imaging system is configured to acquire 2D, 3D, or 2D and 3D imaging data of the patient within the 3D representation of the surface, volume or combination thereof; Figs. 6A and 6B; ¶447 one or more of its components, and/or the patient table 1690 can be tracked, e.g. using any of the techniques described throughout the specification.)
devise, based on the patient model and the environment model, a surgical plan associated with the patient, (¶476 The pre-operative data 16 or live data 18 including intra-operative measurements or combinations thereof can be used to develop, generate or modify a virtual surgical plan 24. The virtual surgical plan 24 can be registered in the common coordinate system 15.)
wherein the surgical plan indicates at least a movement path of a medical device towards the anatomical structure of the patient. (¶476 e.g. to display the next predetermined bone cut, e.g. from a virtual surgical plan or an imaging study or intra-operative measurements, which can trigger the HMDs 11, 12, 13, 14 to project digital holograms of the next surgical step 34 superimposed onto and aligned with the surgical site in a predetermined position and/or orientation.; ¶495 The virtual drills or pins 344 in this example can be an outline or a projected path of the physical pins or drills that can be used to fixate a physical proximal tibial cut guide to the proximal tibia.; ¶501 A vector corresponding to the intended path of the surgical instrument(s); ¶503 A virtual path 996 can be displayed for guiding the placement of the one or more physical pedicle screws. A computer processor can be configured to move, place, size, and align virtual pedicle screws 1000 using, for example, gesture recognition or voice commands, and, optionally to display magnified views 1003, e.g. from a CT scan, demonstrating the pedicle 1006 including the medial wall of the pedicle 1009. A target placement location 1012 for the virtual pedicle screw 1000 can also be shown. The virtual screw can be adjusted to be placed in the center of the pedicle. The physical screw and/or awl or screw driver can be tracked, e.g. using a navigation system or video system (for example with navigation markers or optical markers or direct optical tracking). When the screw path, awl path or screw driver path extends beyond the medial wall of the pedicle, a computer processor can generate an alarm, e.g. via color coding or acoustic signals. Physical instruments, e.g. a physical awl 1015, can be aligned with and superimposed onto the virtual path 996 projected by an HMD.)
Regarding Claim 2, Steines teaches:
wherein the patient model further indicates a body shape and a pose of the patient. (¶187 current pose data, for example for detected markers, e.g. on a patient … The current pose data 1230 for tracked items, e.g. a patient, an anatomic structure; ¶478 When images of the patient are superimposed onto live data seen through the optical head mounted display, in many embodiments image segmentation can be desirable. Any known algorithm in the art can be used for this purpose, for example ... active shape models)
Regarding Claim 3, Steines teaches:
wherein the patient model includes a 3D human mesh that indicates the body shape and pose of the patient, and (¶260 spatial maps can consist of triangular meshes built from each HMD's depth sensor information … the different meshes can be combined into a combined, more accurate mesh using; ¶417 the 3D representation can comprise a 2D outline, 3D outline, a mesh, a group of surface points, or a combination thereof at least in part derived from or based on the information about the geometry of the one or more components of the imaging system; Figs. 6A and 6B; see citations of Steines provided in the mapping of claim 2 regarding body shape and pose)
wherein the patient model further includes a 3D representation of the anatomical structure that indicates the location and shape of the anatomical structure. (¶447 the 3D stereoscopic view or augmented view of the 3D representation 1700 of the surface or volume ... superimposed, by the HMD or other augmented reality display device, onto the patient (prior to the actual imaging data acquisition) can move in relation with the ... tracked one or more components of the imaging system 1660 1670, tracked patient table 1690, and/or, optionally, the tracked anatomic target structure, such as a spine 1710 (e.g. optionally with one or more attached markers or fiducials or fiducial arrays, not shown).; ¶478 When images of the patient are superimposed onto live data seen through the optical head mounted display, in many embodiments image segmentation can be desirable. Any known algorithm in the art can be used for this purpose, for example ... active shape models)
Regarding Claim 4, Steines teaches:
wherein the anatomical structure includes an organ of the patient. (¶447 superimposed onto the patient 1650 and/or a target organ or target anatomic structure, such as a portion of a spine 1710 in this non-limiting example)
Regarding Claim 5, Steines teaches:
wherein the 3D spatial layout of the medical environment indicated by the environment model includes respective locations or contours of one or more objects [[...]] in the medical environment, the one or more objects including the medical device. (¶333 for example by detecting a shape, contour and/or outline, optional identification of the stylus, tool, instrument or combination thereof used, and optional use of known shape data and/or dimensions of the stylus, tool, instrument or combination thereof.)
includes respective locations or contours of one or more people in the medical environment (¶447 the 3D stereoscopic view or augmented view of the 3D representation 1700 of the surface or volume ... superimposed, by the HMD or other augmented reality display device, onto the patient; ¶52 the imaging system is configured to acquire 2D, 3D, or 2D and 3D imaging data of the patient within the 3D representation of the surface, volume or combination thereof; Figs. 6A and 6B)
Regarding Claim 6, Steines teaches:
wherein the one or more people include the patient and at least one medical professional, and (¶52 the imaging system is configured to acquire 2D, 3D, or 2D and 3D imaging data of the patient within the 3D representation of the surface, volume or combination thereof; Figs. 6A and 6B; ¶480 one or more cameras integrated or attached to the HMD can capture the movement of the surgeon's finger(s) in relationship to the touch area; using gesture tracking software, the virtual object/virtual plane can then be moved by advancing the finger towards the touch area in a desired direction. The movement of the virtual object/virtual plane via the user interaction, e.g. with gesture recognition, gaze tracking, pointer tracking etc., can be used to generate a command by a computer processor. The command can trigger a corresponding movement of one or more components of a surgical robot and/or an imaging system.)
wherein the environment model includes information that distinguishes the patient from the at least one medical professional. (¶480 one or more cameras integrated or attached to the HMD can capture the movement of the surgeon's finger(s) in relationship to the touch area; using gesture tracking software, the virtual object/virtual plane can then be moved by advancing the finger towards the touch area in a desired direction. The movement of the virtual object/virtual plane via the user interaction, e.g. with gesture recognition, gaze tracking, pointer tracking etc., can be used to generate a command by a computer processor. The command can trigger a corresponding movement of one or more components of a surgical robot and/or an imaging system.; examiner notes that the system is distinguishing the surgeon from the patient by keying actions specifically to gesture recognition of the surgeon.)
Regarding Claim 7, Steines teaches:
wherein the environment model includes respective 3D representations of the one or more objects in the medical environment, and (¶333 for example by detecting a shape, contour and/or outline, optional identification of the stylus, tool, instrument or combination thereof used, and optional use of known shape data and/or dimensions of the stylus, tool, instrument or combination thereof.)
wherein the respective locations and shapes of the one or more objects are indicated by the 3D representations. (¶503 A virtual path 996 can be displayed for guiding the placement of the one or more physical pedicle screws. A computer processor can be configured to move, place, size, and align virtual pedicle screws 1000 using, for example, gesture recognition or voice commands, and, optionally to display magnified views 1003, e.g. from a CT scan, demonstrating the pedicle 1006 including the medial wall of the pedicle 1009. A target placement location 1012 for the virtual pedicle screw 1000 can also be shown. The virtual screw can be adjusted to be placed in the center of the pedicle. The physical screw and/or awl or screw driver can be tracked, e.g. using a navigation system or video system (for example with navigation markers or optical markers or direct optical tracking). When the screw path, awl path or screw driver path extends beyond the medial wall of the pedicle, a computer processor can generate an alarm, e.g. via color coding or acoustic signals. Physical instruments, e.g. a physical awl 1015, can be aligned with and superimposed onto the virtual path 996 projected by an HMD.)
Regarding Claim 8, Steines teaches:
wherein at least one of the patient model or the environment model is determined based on a machine-learning model. (¶265 using image processing and/or pattern recognition and/or an artificial neural network)
Regarding Claim 9, Steines teaches:
present the surgical plan on a display device; (¶446 head mounted display (HMD) 1610. The head mounted display or other augmented reality display device can generate a virtual display of one or more virtual objects within a field of view 1620 of the HMD 1610 or other augmented reality display device. A virtual display can comprise a 3D representation 1630 of a surface or volume; ¶476 The virtual surgical plan 24 can be registered in the common coordinate system 15. The HMDs 11, 12, 13, 14 or other augmented reality display systems can project digital holograms of the virtual data or virtual data into the view of the left eye using the view position and orientation of the left eye 26 and can project digital holograms of the virtual data or virtual data into the view of the right eye using the view position and orientation of the right eye 28 of each user, resulting in a shared digital holographic experience 30. Using a virtual or other interface, the surgeon wearing HMD 1 11 can execute commands 32, e.g. to display the next predetermined bone cut, e.g. from a virtual surgical plan or an imaging study or intra-operative measurements, which can trigger the HMDs 11, 12, 13, 14 to project digital holograms of the next surgical step 34 superimposed onto and aligned with the surgical site in a predetermined position and/or orientation.)
receive a feedback regarding the surgical plan; and modify the surgical plan based on the feedback. (¶476 The pre-operative data 16 or live data 18 including intra-operative measurements or combinations thereof can be used to develop, generate or modify a virtual surgical plan 24.; ¶482 Using the touch area or other virtual interface, the surgeon can move the virtual object, e.g. arbitrary virtual plane, into a desired position, orientation and/or alignment; see example 5 which begins on ¶486 and is titled "Use of One or More/Multiple HMDs or Other Augmented Reality Display Systems for Modifying Virtual Surgical Plans and/or for Operating a Surgical Robot or an Imaging System"; ¶493 The interaction can comprise moving the virtual object, e.g. the virtual surgical guide, for example using a tracked pointer, a gesture recognition, a finger tracking, an object tracking, etc. The interaction can be used to generate, by a computer processor, a command which can be configured, for example, to initiate, start, stop, activate, de-activate, move, adjust the position/orientation of the virtual object, e.g. virtual surgical guide, and, optionally correspondingly, one or more components, controllers, drivers, motors, sensors, relays, processors or combination thereof of a surgical robot and/or an imaging system.)
Regarding Claim 10, Steines teaches:
the at least one processor being configured to present a graphical representation of the anatomical structure of the patient and the movement path of the medical device on the display device. (¶495 The virtual drills or pins 344 in this example can be an outline or a projected path of the physical pins or drills that can be used to fixate a physical proximal tibial cut guide to the proximal tibia.; ¶501 A vector corresponding to the intended path of the surgical instrument(s); ¶503 A virtual path 996 can be displayed for guiding the placement of the one or more physical pedicle screws. A computer processor can be configured to move, place, size, and align virtual pedicle screws 1000 using, for example, gesture recognition or voice commands, and, optionally to display magnified views 1003, e.g. from a CT scan, demonstrating the pedicle 1006 including the medial wall of the pedicle 1009. A target placement location 1012 for the virtual pedicle screw 1000 can also be shown. The virtual screw can be adjusted to be placed in the center of the pedicle. The physical screw and/or awl or screw driver can be tracked, e.g. using a navigation system or video system (for example with navigation markers or optical markers or direct optical tracking). When the screw path, awl path or screw driver path extends beyond the medial wall of the pedicle, a computer processor can generate an alarm, e.g. via color coding or acoustic signals. Physical instruments, e.g. a physical awl 1015, can be aligned with and superimposed onto the virtual path 996 projected by an HMD.)
Regarding Claim 11, Steines teaches:
obtain additional images of the medical environment that indicate a change associated with the patient or the 3D layout of the medical environment, (¶50 obtain real-time tracking information of one or more components of the imaging system, wherein the at least one computer processor is configured to generate a 3D representation of a surface, a volume or combination thereof, wherein the 3D representation of the surface, volume or combination thereof is at least in part derived from information about a geometry of the one or more components)
the at least one processor further configured to update at least one of the patient model or the environment model based on the additional images of the medical environment. (¶50 wherein the position and orientation of the augmented view is updated based on the real time tracking information of the one or more components of the imaging system.)
Regarding Claim 12, Steines teaches:
wherein the images of the medical environment include at least one depth image that indicates respective distances of one or more objects in the medical environment from a view point towards the medical environment. (Figs. 6A and 6B; ¶46 a depth camera; ¶187 the system can comprise, for example, a tracking camera or tracking system 1170 (e.g. an optical tracking system, an electromagnetic tracking system, one or more visible light and/or infrared cameras, video systems, scanners, e.g. 3D scanners, laser scanners, LIDAR systems, depth sensors);)
Regarding Claim 13, Steines teaches:
wherein the medical device includes a surgical robotic arm. (¶122 robotic arm. In some embodiments, a robotic end effector can be configured to effect a tissue removal or tissue alteration in the patient.; ¶181 one or more markers can be attached to or integrated into a physical instrument, a physical tool, a physical trial implant, a physical implant, a physical device, one or more HMDs or other augmented reality display systems, a robot, a robotic arm; ¶257 tracking information of one or more HMDs or other augmented reality display systems, a patient, an anatomic structure of a patient, one or more physical surgical tools, one or more physical surgical instruments, one or more robot (e.g. a robot with a robotic arm)
Regarding Claims 14-20:
Claims 14-20 are substantially similar to claims 1, 3, 5, 6, 7, 9, and 11, respectively, and are rejected on the same grounds presented above for those claims.
Conclusion
The prior art made of record and not relied upon is considered pertinent to applicant's disclosure:
US 20190231432 A1 and US 20200405398 A1 disclose updating and adapting a surgical plan that provides virtual guidance based on new imagery and the progress of an active surgery (i.e., intra-operatively). US 20180008351 A1 discusses registering a surgical navigation system, which is used to guide a surgeon through a surgical plan, to a patient's anatomy based on imagery. US 20180325526 A1 discusses customizing a surgical plan using patient-specific data.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to BIJAN MAPAR whose telephone number is (571)270-3674. The examiner can normally be reached Monday - Thursday, 11:00-8:30.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Rehana Perveen can be reached at 571-272-3676. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/BIJAN MAPAR/ Primary Examiner, Art Unit 2189