DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Status of Claims
THIS ACTION IS MADE FINAL. Applicant is reminded of the extension of time policy as
set forth in 37 CFR 1.136(a).
Claims 1-20 are pending in this application. Claims 1-20 have been examined on the merits.
Claim Rejections - 35 USC § 112
The following is a quotation of 35 U.S.C. 112(b):
(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.
The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:
The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.
Claims 1-20 are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or for applications subject to pre-AIA 35 U.S.C. 112, the applicant), regards as the invention.
Claim 1 (and similarly claim 12) recites the limitation “placing a surgical instrument at a location”. It is unclear what the location is and how the surgical instrument is placed (i.e., whether it is simply put somewhere, inserted, etc.). For purposes of examination, the limitation will be construed as any surgical instrument located in the area of interest. However, further clarification is required.
Claim Rejections - 35 USC § 101
35 U.S.C. 101 reads as follows:
Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.
Claims 1-20 are rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more.
Claim 1 recites receiving a plurality of two-dimensional images, generating, from the plurality of two-dimensional images, a three-dimensional reconstructed model, generating a model boundary, receiving an intraoperative image, generating a live anatomy boundary, matching digital samples, and placing a surgical instrument at a location. The “receiving” limitations are processes that, under their broadest reasonable interpretation, cover performance of the limitation in the mind, i.e., visually “receiving” the plurality of two-dimensional images and the intraoperative image. Further (although it is not required), if these processes are performed by an “inherent” processor, the claimed steps could easily be performed by a generic computer component, as the claimed limitations do not require any specialized processor. Similarly, generating a three-dimensional reconstructed model, a model boundary, and a live anatomy boundary are processes that, under their broadest reasonable interpretation, cover performance of the limitation in the mind, i.e., “generating” the boundaries and the model using pen and paper. Similarly, “matching” the digital samples is a process that, under its broadest reasonable interpretation, covers performance of the limitation in the mind, i.e., “matching” or aligning images by visually observing them and mentally matching the digital samples together. Similarly, “placing” a surgical instrument is a process that, under its broadest reasonable interpretation, covers performance of the limitation in the mind, i.e., “placing” an instrument visually into a particular location based on an image. If a claim limitation, under its broadest reasonable interpretation, covers performance of the limitation in the mind, then it falls within the “Mental Processes” grouping of abstract ideas. Accordingly, the claim recites an abstract idea.
This judicial exception is not integrated into a practical application. In particular, the claim recites only one additional element: using one or more generic processors for execution. The processors are recited at a high level of generality (i.e., an augmented reality headset; at least one processor; and at least one non-transitory computer-readable storage medium including instructions) such that it amounts to no more than mere instructions to apply the exception using processors. Accordingly, this additional element does not integrate the abstract idea into a practical application because it does not impose any meaningful limits on practicing the abstract idea. The claim is directed to an abstract idea.
The claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception. As discussed above with respect to integration of the abstract idea into a practical application, the additional element of using processors to perform the identifying and determining steps amounts to no more than mere instructions to apply the exception using generic processors. Mere instructions to apply an exception using generic processors cannot provide an inventive concept. The claim is not patent eligible. Independent claim 12 recites limitations similar to those of claim 1 and is likewise not patent eligible, at least for the reasons noted above.
Dependent claims 2-11 and 13-20 are also directed to an abstract idea, as they do not include additional elements that are sufficient to amount to significantly more than the judicial exception. The elements in those claims do not integrate the abstract idea into a practical application because they do not impose any meaningful limits on practicing the abstract idea. Therefore, the dependent claims are also not patent eligible.
Claim Rejections - 35 USC § 102
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:
A person shall be entitled to a patent unless –
(a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.
(a)(2) the claimed invention was described in a patent issued under section 151, or in an application for patent published or deemed published under section 122(b), in which the patent or application, as the case may be, names another inventor and was effectively filed before the effective filing date of the claimed invention.
Claims 1-3 and 5-11 are rejected under 35 U.S.C. 102(a)(1) as being anticipated by Jagadeesan (WO 2020131880 A1).
Regarding Claim 1, Jagadeesan teaches a surgical navigation method (corresponding disclosure in at least [0003], where the system is a trackerless navigation system “The present disclosure relates generally to systems and methods for a trackerless navigation. The systems and methods provided herein have wide applicability, for example, including guided medical, such as surgical”) comprising: receiving a plurality of two-dimensional images of at least a portion of a body of a patient (Corresponding disclosure in at least [0253], where the system receives multiple 2D images “FIG. 22A shows samples of 2D input images taken using a stereoscopic camera during neurosurgery,”);
generating, from the plurality of two-dimensional images, a three-dimensional reconstructed model of the at least the portion of the body (corresponding disclosure in at least [00253] and Figure 22B, where the 2D images are used to reconstruct a tissue surface “FIG. 22B shows reconstruction results of tissue surface viewed from a first angle”);
Figure 22A and Figure 22B of Jagadeesan
generating a model boundary in the three-dimensional reconstructed model based on a section of interest (corresponding disclosure in at least [00254], where a section of interest (i.e., the tumor and the kidney) is reconstructed as a 3D model and its boundaries are clearly defined “The surface of the kidney and a tumor are shown in the images…FIG. 23B shows reconstruction results of kidney surface viewed from a first angle”);
receiving an intraoperative image of the at least the portion of the body (corresponding disclosure in at least [00251], where an intraoperative image of a part of the body is received “obtained intraoperative stereo microscope images during a neurosurgery case”);
generating a live anatomy boundary based on the intraoperative image (corresponding disclosure in at least [00192], where the images taken from the intraoperative imaging are used during surgery for a reconstruction, which would generate a boundary of the anatomy that is being viewed “The reconstruction results were generated using video frames including the 2D input images shown in FIG. 23A. FIG. 23C shows the reconstruction results of kidney surface viewed from a second angle … In this experiment, we simply set the pose threshold to determine key frames to a small number hence all five images were used as key frames. Such a high-resolution mosaicking of the neurosurgery cavity could conceivably be used to register the intraoperative or diagnostic MRI to the mosaicked stereo reconstruction of the surgical cavity to identify remnant brain tumor during surgery”);
and matching digital samples from within the model boundary with digital samples from within the live anatomy boundary to register the three-dimensional reconstructed model with the at least the portion of the body (corresponding disclosure in at least [00246], where the 3D model is registered with the digital sample (CT image) of the anatomy body “we used the CT imaging of tissues as the gold standard. In this experiment…To quantify accuracy, we registered the 3D reconstructed model with the CT segmentation results by first manually selecting landmarks, such as tissue tips, edge points and other recognizable points, and then refining the registration with the ICP algorithm.”);
and placing a surgical instrument at a location based on the registration of the three-dimensional reconstructed model with the at least the portion of the body (corresponding disclosure in at least [00283], where there is a surgical instrument that is placed in a location (there is deformation caused in the image, which is stated to be caused by an instrument) “Fig. 27C shows a set 558 of input video frames of Hamlyn in vivo data with deformation caused by instrument interaction along with a corresponding set 560 of deformed templates including dots representing control points”).
Regarding Claim 2, Jagadeesan further teaches an at least one non-transitory computer-readable storage medium including instructions that, when executed by at least one processor, configure the at least one processor to perform the method of claim 1 (corresponding disclosure in at least [00108], where a processor and memory (i.e. storage medium) are disclosed “The computational system 28 can include at least one processor and at least one memory” that execute the methods disclosed).
Regarding Claim 3, Jagadeesan further teaches a method wherein said matching of the digital samples from within the model boundary with the digital samples from within the live anatomy boundary occurs without a fiducial, a tracker, an optical code, a tag, or a combination thereof (corresponding disclosure in at least [0003], where the system is a trackerless navigation system “The present disclosure relates generally to systems and methods for a trackerless navigation”, and further [00266], where the template (i.e. digital sample) is matched with the input video (model boundary) “Hence, the ability to match the template and the input video when non-rigid deformation exists is essential for intraoperative use of deformation recovery methods”).
Regarding Claim 5, Jagadeesan further teaches a method wherein the model boundary is utilized by a medical provider during a pretreatment process, a preoperative process, an intraoperative process, a postoperative process, or a combination thereof of a medical procedure (corresponding disclosure in at least [00290], where the medical practitioner can view the visual representation, or the 3D model, which aids in the process “The visual representation can include a three-dimensional model… A medical practitioner may view the visual representation on the display”).
Regarding Claim 6, Jagadeesan further teaches a method wherein the matching of the digital samples from within the model boundary with the digital samples from within the live anatomy boundary is performed by utilizing: an iterative closest point algorithm; a machine-learned model for matching one or more patterns of the digital samples from within the model boundary to one or more patterns of the digital samples from within the live anatomy boundary; or a combination thereof (corresponding disclosure in at least [00264], where it is disclosed that an iterative closest point (ICP) method is used for the model boundary matching “one group proposed a monocular vision-based method that first generated the tissue template and then estimated the template deformation by matching the texture and boundaries with a non-rigid iterative closet points (ICP) method”).
Regarding Claim 7, Jagadeesan further teaches a method wherein the model boundary comprises a two-dimensional area, the two-dimensional area being defined by one or more geometric shapes, and the one or more geometric shapes comprising a line, a regular polygon, an irregular polygon, a circle, a partial circle, an ellipse, a parabola, a hyperbola, a logarithmic-function curve, an exponential-function curve, a convex curve, a polynomial-function curve, or a combination thereof (corresponding disclosure in at least [00192], where the method used identifies varying shapes and Figure 22B, where the model, when viewed in 2D, would translate into a combination of geometric shapes “Another class of effective methods is based on support window methods, such as PatchMatch, which uses varying shape of the matching window”).
Figure 22B of Jagadeesan
Regarding Claim 8, Jagadeesan further teaches wherein the model boundary comprises a three-dimensional volumetric region, the three-dimensional volumetric region being defined by a cuboid, a polyhedron, a cylinder, a sphere, a cone, a pyramid, a prism, a torus, or a combination thereof (corresponding disclosure in at least [00192], where the method used identifies varying shapes “Another class of effective methods is based on support window methods, such as PatchMatch, which uses varying shape of the matching window”, and Figure 22B, where the 3D region is a combination of volumetric shapes).
Regarding Claim 9, Jagadeesan further teaches wherein the model boundary comprises a surface with a relief (corresponding disclosure in at least [00266], where relief (i.e. deformations in the surface) is recovered in the model boundary “To date, most deformation recovery methods are based on the non-rigid ICP alignment to obtain matching information between the template and the current input, such as monocular/stereo videos or 3D point clouds from RGB-D sensors”).
Regarding Claim 10, Jagadeesan further teaches wherein the model boundary comprises a shape, the shape being drawn by a medical professional (corresponding disclosure in at least [00245], where software is used for segmenting tissue models, which would comprise a shape being drawn, “we used the 3D Sheer software to segment the tissue models from the CT images. ... To quantify accuracy, we registered the 3D reconstructed model with the CT segmentation results by first manually selecting landmarks, such as tissue tips, edge points and other recognizable points”).
Regarding Claim 11, Jagadeesan further teaches wherein the live anatomy boundary comprises approximately a same size, shape, form, location on the portion of the body, or a combination thereof as the model boundary (corresponding disclosure in at least [00250], where an intraoperative (live anatomy boundary) is obtained “To further evaluate the performance of the 3D reconstruction method 500 in real-world surgical scenarios, we obtained intraoperative videos from various stereo imaging modalities during surgeries performed in a hospital and online videos”, then [00252], where samples are taken in the region “Fig. 22A shows samples of 2D input images taken using a stereoscopic camera during neurosurgery, and only images from the left camera of the stereoscopic camera are shown. Fig. 22B shows reconstruction results of tissue surface viewed from a first angle”, and further [00254], where the sample and intraoperative data are registered, where both are in the same area “In this experiment, we simply set the pose threshold to determine key frames to a small number hence all five images were used as key frames. Such a high-resolution mosaicking of the neurosurgery cavity could conceivably be used to register the intraoperative or diagnostic MRI to the mosaicked stereo reconstruction of the surgical cavity to identify remnant brain tumor during surgery”; they would approximately have the same size and be in the same location for purposes of registration).
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 4 and 12-20 are rejected under 35 U.S.C. 103 as being unpatentable over Jagadeesan (WO 2020131880 A1) in view of Casas (US 20200336721 A1).
Regarding Claim 4, Jagadeesan teaches all of the limitations of Claim 1.
Jagadeesan does not teach wherein said receiving of the intraoperative image comprises obtaining the intraoperative image using an augmented reality device during a medical procedure.
Specifically, Casas, in a similar field of endeavor, teaches wherein said receiving of the intraoperative image comprises obtaining the intraoperative image using an augmented reality device during a medical procedure (corresponding disclosure in at least [0007], where real-time (intraoperative) images are viewed using augmented reality (AR) “Embodiments disclosed here describe a real-time surgery navigation method and apparatus for displaying an augmented view of the patient from the preferred static or dynamic viewpoint of the surgeon… Responsive to registering the images, a head mounted display may present to a surgeon an augmented view of the patient, wherein the augmented reality is presented via a head mounted display.”).
It would have been obvious to a person having ordinary skill in the art before the effective filing date to have modified Jagadeesan to use AR for receiving intraoperative images as taught by Casas. One of ordinary skill in the art would have been motivated to incorporate the AR device for obtaining intraoperative images during medical procedures because it provides a method of viewing the combined real-time preoperative images with virtual graphics associated with the preoperative images with the user being able to freely move around the operating room (corresponding disclosure in at least [0006] and [0012] of Casas).
Regarding Claim 12, Jagadeesan teaches a system for aiding a medical provider during a medical procedure (corresponding disclosure in at least [0003], where the system is intended for medical procedures “The systems and methods provided herein have wide applicability, for example, including guided medical, such as surgical, or industrial or other exploratory procedures”) comprising:
at least one processor (corresponding disclosure in at least [00108], where “the computational system 28 can include at least one processor”);
at least one non-transitory computer-readable storage medium including instructions that, when executed by the at least one processor (corresponding disclosure in at least [00108], where there is at least one processor for execution and memory (storage medium) “The computational system 28 can include at least one processor and at least one memory, and may be a controller, desktop computer, laptop, tablet computer, smartphone, or other suitable computational device”), cause the system to:
receive an indication of a live anatomy boundary for an intraoperative scene (corresponding disclosure in at least [00254], where the boundary can be determined via registration during the surgery “In this experiment, we simply set the pose threshold to determine key frames to a small number hence all five images were used as key frames. Such a high- resolution mosaicking of the neurosurgery cavity could conceivably be used to register the intraoperative or diagnostic MRI to the mosaicked stereo reconstruction of the surgical cavity to identify remnant brain tumor during surgery”);
receive an indication of an alignment of the live anatomy boundary with a section of interest of at least a portion of a body (corresponding disclosure in at least [00187], where alignment is completed of the model “To solve the contradiction between the accuracy of 3D reconstruction and registration, we propose to scan the tissue surface at a close distance and perform stereo matching on the acquired stereo images, then mosaick the 3D models at different time steps according to model alignment obtained by simultaneously localization and mapping (SLAM)”);
and match a section of a pretreatment image defined by a pretreatment boundary with a section of an intraoperative image associated with the live anatomy boundary to register the pretreatment image with the intraoperative scene (corresponding disclosure in at least [00250], where an intraoperative (live anatomy boundary) is obtained “To further evaluate the performance of the 3D reconstruction method 500 in real-world surgical scenarios, we obtained intraoperative videos from various stereo imaging modalities during surgeries performed in a hospital and online videos”, then [00252], where samples are taken in the region “Fig. 22A shows samples of 2D input images taken using a stereoscopic camera during neurosurgery, and only images from the left camera of the stereoscopic camera are shown. Fig. 22B shows reconstruction results of tissue surface viewed from a first angle”, and further [00254], where the sample and intraoperative data are registered, where both are in the same area “In this experiment, we simply set the pose threshold to determine key frames to a small number hence all five images were used as key frames. Such a high-resolution mosaicking of the neurosurgery cavity could conceivably be used to register the intraoperative or diagnostic MRI to the mosaicked stereo reconstruction of the surgical cavity to identify remnant brain tumor during surgery”); and
place a surgical instrument at a location based on the registration of the pretreatment image with the intraoperative scene (corresponding disclosure in at least [00283], where there is a surgical instrument that is placed in a location (there is deformation caused in the image, which is stated to be caused by an instrument) “Fig. 27C shows a set 558 of input video frames of Hamlyn in vivo data with deformation caused by instrument interaction along with a corresponding set 560 of deformed templates including dots representing control points”).
Jagadeesan does not teach an augmented reality headset and display, using the augmented reality headset, the live anatomy boundary overlaid on the intraoperative scene.
Casas, in a similar field of endeavor, teaches an augmented reality headset (corresponding disclosure in at least [0007], where real-time (intraoperative) images are viewed using augmented reality (AR) “Embodiments disclosed here describe a real-time surgery navigation method and apparatus for displaying an augmented view of the patient from the preferred static or dynamic viewpoint of the surgeon… Responsive to registering the images, a head mounted display may present to a surgeon an augmented view of the patient, wherein the augmented reality is presented via a head mounted display.”).
and display, using the augmented reality headset, the live anatomy boundary overlaid on the intraoperative scene (corresponding disclosure in at least [0085], where registration of the anatomy and the features (boundaries) are displayed for surgeons to view during surgery “The 3D/3D registration, such as the 3D volume-3D surface registration 120, as well as 2D/3D registration, is done by computer means 100. In embodiments, rigid registration methods are used, such as geometry-based, paired-point, surface-based, intensity-based, etc. In embodiments, nonrigid registration methods are used, e.g. feature-based, intensity-based, etc. In embodiments, biomechanical models, such as statistical shape models, are incorporated into the registration method” and further [0132], “These foreground objects can then be adjusted in transparency, color, etc. to permit the surgeon 128 to see the virtual graphics through them, e.g. the internal anatomy of the patient 118 in the registered 3D volume image, or the graphical representation of tracked instruments and devices 138”).
It would have been obvious to a person having ordinary skill in the art before the effective filing date to have modified Jagadeesan to incorporate the AR headset and use it for visualization of the live anatomy boundary overlaid on the intraoperative scene as taught by Casas. One of ordinary skill in the art would have been motivated to incorporate this because it provides a method of viewing the combined real-time preoperative images with virtual graphics associated with the preoperative images with the user being able to freely move around the operating room (corresponding disclosure in at least [0006] and [0012] of Casas).
Regarding Claim 13, the combined references of Jagadeesan and Casas teach the limitations of Claim 12, and Jagadeesan further teaches wherein the instructions, when executed by the at least one processor (corresponding disclosure in at least [00108], where the system includes a processor and memory “the computational system 28 can include at least one processor and at least one memory”), further cause the system to match the live anatomy boundary with digital samples from within a model boundary associated with the pretreatment image of the portion of the body (corresponding disclosure in at least [00246], where the 3D model is registered with the digital sample (CT image) of the anatomy “we used the CT imaging of tissues as the gold standard. In this experiment…To quantify accuracy, we registered the 3D reconstructed model with the CT segmentation results by first manually selecting landmarks, such as tissue tips, edge points and other recognizable points, and then refining the registration with the ICP algorithm.”).
Regarding Claim 14, the combination noted above teaches all of the limitations of Claim 12.
Specifically, Jagadeesan teaches wherein the model boundary is based on a three-dimensional reconstructed model of the portion of the body (corresponding disclosure in at least [00254] and Figure 23B, where 3D reconstruction results are viewed of a part of the body (the kidney), and the model boundary can be determined from the image “Fig. 23B shows reconstruction results of kidney surface viewed from a first angle”).
Figure 23B of Jagadeesan
Regarding Claim 15, the combined references of Jagadeesan and Casas teach the limitations of Claim 12, and Jagadeesan further teaches wherein the matching of the digital samples aid the system to register the three-dimensional reconstructed model with the at least the portion of the body (corresponding disclosure in at least [00246], where the 3D model is registered with the digital sample (CT image) of the anatomy body “we used the CT imaging of tissues as the gold standard. In this experiment…To quantify accuracy, we registered the 3D reconstructed model with the CT segmentation results by first manually selecting landmarks, such as tissue tips, edge points and other recognizable points, and then refining the registration with the ICP algorithm. ” , and further in [00264] and [00266] respectively, where the ICP method is used to match the sample and 3D model “For example, one group proposed a monocular vision-based method that first generated the tissue template and then estimated the template deformation by matching the texture and boundaries with a non-rigid iterative closet points (ICP) method”; “Hence, the ability to match the template and the input video when non-rigid deformation exists is essential for intraoperative use of deformation recovery methods”).
Regarding Claim 16, the combined references of Jagadeesan and Casas teach the limitations of Claim 12, and Jagadeesan further teaches wherein the system comprises a markerless surgical navigation system (corresponding disclosure in at least [0003], where the system is a trackerless navigation system “The present disclosure relates generally to systems and methods for a trackerless navigation”).
Regarding Claim 17, the combined references of Jagadeesan and Casas teach the limitations of Claim 12, and Casas further teaches wherein the instructions, when executed by the at least one processor (corresponding disclosure in at least [0181], where there are one or more processors for executing instructions “Logic subsystem 602 may include one or more processors that are configured to execute software instructions”), further cause the system to establish communication between the augmented reality headset and one or more of a pretreatment computing device, a surgical navigation computing device, and a registration computing device (corresponding disclosure in at least [0158], where the AR headset is in connection or combination with a system of a motion capture device (navigation computing device) and a virtual anatomical model software (pretreatment computing device) “virtual reality headsets are used in combination with a 3D model of the surgeon 128 and/or the other users (e.g. obtained with a 3D scanner), and/or a motion capture device (e.g. time-of-flight camera) pointing at each user interacting with the system, and appropriate software (that e.g. uses a 3D virtual anatomical model of a person with articulated and moveable joints, blending with it the available 3D surface models of the users) to show the users within the field of view of the surgeon”).
Regarding Claim 18, the combined references of Jagadeesan and Casas teach the limitations of Claim 17, and Jagadeesan further teaches wherein the live anatomy boundary comprises a virtual object (corresponding disclosure in at least [00126], where there is a virtual object of the real object (live anatomy), which has a boundary, as shown in Figure 2 “As shown in Fig. 2, points pt are attached with a virtual object 112 obtained by rotating and scaling the real object around the control point”).
Figure 2 of Jagadeesan
Regarding Claim 19, the combined references of Jagadeesan and Casas teach the limitations of Claim 17, and Casas further teaches wherein the instructions, when executed by the at least one processor (corresponding disclosure in at least [0181], where the system includes one or more processors for executing instructions “Logic subsystem 602 may include one or more processors that are configured to execute software instructions”), further cause the system to: generate a model boundary from a first input of a first medical professional during a pretreatment process of the medical procedure, the first input comprises the first medical professional utilizing the pretreatment computing device (corresponding disclosure in at least [0071], where pretreatment image information is inputted by the user utilizing one of the imaging modalities “Logic subsystem 602 may include one or more processors that are configured to execute software instructions”, and further [0078], where the images are used to be converted into a 3D reconstruction, which would have a boundary “by comparing it with point clouds used to represent volumetric data from preoperative 104 or intraoperative imaging 106. In embodiments, computer means 100 make use of techniques of surface reconstruction 112 (e.g. Delaunay triangulation, alpha shapes, ball pivoting, etc.) for converting the point cloud to a 3D surface model (e.g. polygon mesh models, surface models, or solid computer-aided design models).
The surface reconstruction 112 is fully automated by computer means 100, and can be assisted 314 by the surgeon 128 or other users through the available user interface means 130, 132.”); and generate the live anatomy boundary from a second input of a second medical professional during an intraoperative process of the medical procedure, the second input comprises the second medical professional (corresponding disclosure in at least [0162], where the boundaries can be drawn in the intraoperative step by the surgeon (medical professional) “preoperative planning done by software means is shown in real time blended with the preoperative 102 and intraoperative images 106 or their graphical representation, e.g. blended with the 3D volume, with the proposed measurements, angles, incisions, osteotomies, etc. drawn and marked… appropriate step in surgical technique or guide (from the manufacturer of equipment, or self-made by the surgeon 128 or other users) is shown each time a predefined previous step is completed or skipped (as interpreted automatically by computer means 100, or indicated by the surgeon 128 through real-time interface means 132) … This intraoperative help includes e.g. any aspect of the surgery or technique, such as plates, screws, sutures, or any other instruments or devices 138, their different sizes, shapes, materials, or a combination of them, available as virtual graphical representations”) utilizing the augmented reality device to:
indicate the live anatomy boundary of the intraoperative image; indicate the alignment of the live anatomy boundary with the section of interest of the at least a portion of the body; or a combination thereof (corresponding disclosure in at least [0162], where the boundary can be drawn in by the surgeon from the intraoperative image (real-time means) “The planning is done and displayed as virtual graphics, either as a 2D drawing or design, or as 3D representation in any of the available file (e.g. STL) or video formats, in stereoscopic manner or not. For example, the appropriate step in surgical technique or guide (from the manufacturer of equipment, or self-made by the surgeon 128 or other users) is shown each time a predefined previous step is completed or skipped (as interpreted automatically by computer means 100, or indicated by the surgeon 128 through real-time interface means 132)” and further in [0163], “augmented reality helps predict the outcome of a procedure. As one example, a virtual model of a flap drawn and displayed stereoscopically blended with the patient's 118 3D surface or 3D volume can be manipulated virtually by the surgeon 128 within his or her field of view, for demonstrating potential outcomes of microsurgery, by adding shape-mapped, scale-mapped, and texture-mapped images blended with the target donor or acceptor portion of the patient 118, or both.”).
Regarding Claim 20, the combined references of Jagadeesan and Casas teach the limitations of Claim 17, and Jagadeesan further teaches wherein the instructions further cause the system to provide guidance for a surgical procedure (corresponding disclosure in at least [0003], where “The systems and methods provided herein have wide applicability, for example, including guided medical, such as surgical, or industrial or other exploratory procedures”) based on a registration of the pretreatment image with the intraoperative scene (corresponding disclosure in at least [00246], where registration of the surgical model is completed using a diagnostic (pretreatment) CT “To quantify accuracy, we registered the 3D reconstructed model with the CT segmentation results by first manually selecting landmarks, such as tissue tips, edge points and other recognizable points, and then refining the registration with the ICP algorithm.”).
Response to Arguments
Applicant's arguments filed 10/30/25 regarding the Drawing objections have been fully considered, and the objections are withdrawn in light of the amendments.
Applicant's arguments filed 10/30/25 regarding the 35 U.S.C. 112(b) rejections have been fully considered, but the rejection is maintained in view of the newly made amendments, as set forth above.
Applicant's arguments filed 10/30/25 regarding the 35 U.S.C. 102(a)(1) rejections have been fully considered but they are not persuasive.
Regarding Claim 1, applicant argues that Jagadeesan does not teach “generating a model boundary in the three-dimensional reconstructed model based on a section of interest”. However, as outlined in the office action above and in Figure 23B of Jagadeesan, a 3D model is reconstructed (“three-dimensional reconstructed model”), and as viewed in the figure, there is a boundary (outline) of the 3D reconstructed model that is shown. Further, [0253] recites “Fig. 23B shows reconstruction results of kidney surface”, where the kidney is the section of interest.
Figure 23B of Jagadeesan
Regarding Claim 10, applicant argues Jagadeesan does not teach “wherein the model boundary comprises a shape, the shape being drawn by a medical professional”. However, under broadest reasonable interpretation, the manual selection of points is interpreted as drawing the shape, as the boundary outline of the model is being “drawn” through these selections, to provide a more refined surface model. [00245] of Jagadeesan recites steps of accurately reconstructing the model, which includes determining an accurate shape (“Surfaces of livers and kidneys are very smooth and have low textures, but the proposed method was still able to reconstruct the 3D models… To quantify accuracy, we registered the 3D reconstructed model with the CT segmentation results by first manually selecting landmarks”).
Further, applicant argues Jagadeesan does not teach “matching digital samples from within the model boundary with digital samples from within the live anatomy boundary to register the three-dimensional reconstructed model with the at least the portion of the body”. It is noted that this limitation is not recited in claim 10. Nevertheless, as detailed in the rejection of claim 1 in the office action above and further in [00255], the limitation is taught wherein a model boundary and live anatomy boundary are registered (“the surgeon scanned the exposed kidney surface using a stereo laparoscope. The 3D reconstructed model of the kidney surface and the tumor is shown in Fig. 23. This model could further be registered to the diagnostic CT or MRI to plan the extent of surgical resection intraoperatively”).
Applicant's arguments filed 10/30/25 regarding the 35 U.S.C. 103 rejections have been fully considered but they are not persuasive.
Regarding Claim 12, applicant argues Jagadeesan does not teach “match a section of a pretreatment image defined by a pretreatment boundary with a section of an intraoperative image associated with the live anatomy boundary to register the pretreatment image with the intraoperative scene”. However, the pretreatment image is construed to be the MRI image (diagnostic image, which is pretreatment), which is further used for registration to an intraoperative image (the high-resolution mosaicking of the neurosurgery cavity) as recited in [00254] “Such a high-resolution mosaicking of the neurosurgery cavity could conceivably be used to register the intraoperative or diagnostic MRI to the mosaicked stereo reconstruction of the surgical cavity to identify remnant brain tumor during surgery”, the aforementioned mosaicking coming from intraoperative images ([00251] “we obtained intraoperative stereo microscope images during a neurosurgery case”). Thus, the limitations are taught by Jagadeesan.
Applicant's arguments filed 10/30/25 regarding the 35 U.S.C. 101 rejections have been fully considered but they are not persuasive. The amendments to independent claims 1 and 12 do not direct the claims toward a practical application. The amended limitations further cover performance of the limitation in the mind (see the rejection under 35 U.S.C. 101 in the office action above). Although the surgical instrument may be considered an additional element, it amounts to no more than mere instructions to apply the exception.
All remaining dependent claims are rejected due to their dependency on the rejected independent claims.
Conclusion
THIS ACTION IS MADE FINAL. Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to KAITLYN KIM whose telephone number is (571)272-1821. The examiner can normally be reached Monday-Friday 6-2 PST.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Anne Kozak can be reached on (571) 270-0552. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/K.E.K./Examiner, Art Unit 3797
/SERKAN AKAR/Primary Examiner, Art Unit 3797