DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Claim Rejections - 35 USC § 102
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:
A person shall be entitled to a patent unless –
(a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.
(a)(2) the claimed invention was described in a patent issued under section 151, or in an application for patent published or deemed published under section 122(b), in which the patent or application, as the case may be, names another inventor and was effectively filed before the effective filing date of the claimed invention.
Claims 1-20 are rejected under 35 U.S.C. 102(a)(1)/(a)(2) as being anticipated by Landon et al. (US 20230019873 A1).
Regarding claim 1, Landon discloses a system for computer assisted navigation during a surgery, comprising a computer platform operative ([0039] surgical navigation systems can aid surgeons in locating patient anatomical structures, guiding surgical instruments, and implanting medical devices with a high degree of accuracy.) to:
obtain a computed tomography (CT) image of a pelvic region of a patient captured prior to the surgery ([0171] creation of one or more 3D models from 2D image data. 2D image data can be acquired with less cost than volumetric image data such as MRI or CT images), wherein the CT image of the pelvic region includes a target surgical area and a non-target surgical area ([0067] the CASS 100 can include one or more powered reamers connected or attached to a robotic arm 105A or end effector 105B that prepares the pelvic bone to receive an acetabular implant according to a surgical plan),
generate from the CT image a three-dimensional (3D) CT volume of the target surgical area which excludes the non-target surgical area ([0067] the CASS 100 can power off the reamer or instruct the surgeon to power off the reamer. 2D to 3D modeling methods can provide greater confidence with respect to bone volume predictions, such as for areas of the bone that are inaccessible to a probe);
obtain a fluoroscopy image of the pelvic region of the patient captured during the surgery ([0059] systems that may be employed for tissue navigation include fluorescent imaging systems and ultrasound systems.),
merge the 3D CT volume with the fluoroscopy image ([0060] Display 125 overlays image information collected from various modalities (e.g., CT, MRI, X-ray, fluorescent, ultrasound, etc.) collected pre-operatively or intra-operatively to give the surgeon various views of the patient's anatomy as well as real-time conditions.), and
register the target surgical area based on the merged 3D CT volume and fluoroscopy image ([0176] the key points may be obtained by intersecting projected rays of 2D image landmarks in 3D space relative to a 3D candidate bone model.).
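For orientation, the limitations of claim 1 amount to a CT-to-fluoroscopy fusion pipeline, mapped above to Landon's multi-modality overlay display ([0060]). As a minimal sketch of what the claimed "merge" step could look like, assuming a simple parallel projection of the CT volume and alpha blending (neither is specified by the claim, and none of the names or values below come from Landon):

    import numpy as np

    def normalize(img):
        """Scale an image to [0, 1] so the two modalities blend sensibly."""
        return (img - img.min()) / (img.max() - img.min() + 1e-8)

    # Placeholders for the claimed inputs: a preoperative CT volume restricted
    # to the target surgical area, and an intraoperative fluoroscopy frame.
    ct_target_volume = np.random.rand(64, 64, 64)
    fluoro = np.random.rand(64, 64)

    # Project the 3D CT volume to 2D, then overlay it on the fluoroscopy image.
    ct_projection = normalize(ct_target_volume.sum(axis=0))
    alpha = 0.4  # illustrative CT weight in the blend
    merged = alpha * ct_projection + (1 - alpha) * normalize(fluoro)
    print(merged.shape, float(merged.min()), float(merged.max()))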
Regarding claim 2, Landon discloses wherein the surgery includes a total hip arthroplasty ([0222] custom three-dimensional model of a hip joint may be desired for planning a total hip arthroplasty or a revision total hip arthroplasty).
Regarding claim 3, Landon discloses wherein the target surgical area includes the acetabulum ([0064] in the context of hip surgeries, the CASS 100 may include a powered, robotically controlled end effector to ream the acetabulum to accommodate an acetabular cup implant.).
Regarding claim 4, Landon discloses wherein the fluoroscopy image includes a plurality of fluoroscopy images of the pelvic region that are orthogonal to each other ([0172] a system may receive 801 a plurality of 2D images (e.g., projection radiography, plain film X-ray, cone-beam X-ray, fluoroscopy, tomography, echocardiography, ultrasound, or any known or future 2D image format) that capture at least a portion of a patient's bony anatomy (e.g., one or more bones, or bone segments, forming a joint or region of interest).).
Regarding claim 5, Landon discloses wherein the fluoroscopy image includes the non-target surgical area ([0104] surgeon 111 can manipulate the image display to provide different anatomical perspectives of the target area and can have the option to alter or revise the planned bone cuts based on intraoperative evaluation of the patient).
Regarding claim 6, Landon discloses wherein the non-target surgical area includes at least a portion of a femur of the patient ([0106] the display 125 can provide information about the gaps (e.g., gap balancing) between the femur and tibia and how such gaps will change if the planned surgical plan is carried out.).
Regarding claim 7, Landon discloses wherein the CT image contains an image of the patient in a supine position ([0205] the pre-determined view may correspond to a specific pose of the patient which is common in clinical settings, e.g. supine position, standing position, or seated position).
Regarding claim 8, Landon discloses wherein applying the set of image merge rules includes applying at least one rule identifying obstructing anatomy in the CT image ([0178] due to the consistency and accuracy in the landmarking 803, the system may calculate one or more properties of the bones of the patient. In other embodiments, various dimensions and/or deformities of the bones may be identified and/or calculated.).
Regarding claim 9, Landon discloses wherein the merge operation includes:
segmenting the CT image using a first artificial neural network (ANN) to identify a volume of a plurality of bones in the CT image ([0178] computing device may identify (i.e., auto-landmark) the one or more key points (e.g., based on machine learning, artificial intelligence, artificial neural networks, or the like).), and
generating a 3D bone model for the pelvis which excludes the femur ([0189] each bone of interest could be modeled through the process 900 in a separate step.).
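Claim 9's two-step limitation (ANN segmentation of the CT, then a pelvis-only bone model) can be illustrated concretely. The sketch below is not Landon's implementation: the thresholding stands in for a trained voxel classifier, the label convention is hypothetical, and marching cubes (scikit-image) is simply one conventional way to turn a binary mask into a surface model.

    import numpy as np
    from skimage import measure

    # Stand-in for the "first ANN": any voxel classifier returning a label
    # volume would fit here (hypothetical labels: 0 = background, 1 = pelvis,
    # 2 = femur).
    def segment_ct(ct):
        labels = np.zeros(ct.shape, dtype=np.uint8)
        labels[ct > 0.6] = 1  # toy thresholds in place of a trained model
        labels[ct > 0.9] = 2
        return labels

    ct = np.random.rand(64, 64, 64)  # placeholder preoperative CT
    labels = segment_ct(ct)

    # Keep the pelvis and exclude the femur (the claimed non-target anatomy).
    pelvis_mask = (labels == 1).astype(np.float32)

    # Surface extraction: one standard route from a mask to a 3D bone model.
    verts, faces, normals, values = measure.marching_cubes(pelvis_mask, level=0.5)
    print(f"pelvis model: {len(verts)} vertices, {len(faces)} faces")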
Regarding claim 10, Landon discloses wherein the fluoroscopy image includes at least two fluoroscopy images of the pelvic region ([0195] the 2D images may comprise fluoroscopy images, projectional radiographs, 2D computed tomography images, 2D echocardiography images, ultrasound images).
Regarding claim 11, Landon discloses merging the 3D bone model with the at least two fluoroscopy images of the pelvic region using a second artificial neural network (ANN) ([0193] the computing device may be able to co-register the plurality of 2D images by automatically recognizing common features and aligning the 2D images accordingly).
Regarding claim 12, Landon discloses wherein merging the 3D bone model includes:
generating synthetic fluoroscopy images at various projected angles as digitally reconstructed radiographs (DRRs) and comparing the DRRs to the at least two fluoroscopy images ([0195] various forms of 2D images are contemplated. In further embodiments, the 2D images may comprise fluoroscopy images, projectional radiographs, 2D computed tomography images, 2D echocardiography images, ultrasound images, and the like.),
identifying a best match between the DRRs and the fluoroscopy images ([0196] additional views of the plurality of bones may be provided. In some embodiments, for example, images of a femur and/or a tibia may be provided from an anterior-posterior (AP) view and/or a medial-lateral (ML) view), and
registering the location of the target surgical area based on the best match ([0196] a corresponding 2D image from a second view for each of the 2D images shown. In other embodiments, only some of the 2D images from a first view may have a corresponding 2D image from a second view.).
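The loop recited in claim 12 is a standard DRR-based 2D/3D registration strategy. A minimal sketch follows, assuming a parallel-ray projection as the DRR and normalized cross-correlation as the similarity metric (the claim and Landon specify neither), with a one-parameter angle sweep standing in for a full 6-DOF pose search:

    import numpy as np
    from scipy import ndimage

    def make_drr(volume, angle_deg):
        """Synthetic fluoroscopy image: rotate the CT volume and integrate
        along one axis (a parallel-ray approximation of a DRR)."""
        rotated = ndimage.rotate(volume, angle_deg, axes=(0, 2),
                                 reshape=False, order=1)
        return rotated.sum(axis=0)

    def ncc(a, b):
        """Normalized cross-correlation between two images."""
        a = (a - a.mean()) / (a.std() + 1e-8)
        b = (b - b.mean()) / (b.std() + 1e-8)
        return float((a * b).mean())

    ct_volume = np.random.rand(48, 48, 48)  # placeholder preoperative CT
    fluoro = make_drr(ct_volume, 12.0)      # stand-in intraoperative image

    # Generate DRRs at candidate projection angles and keep the best match.
    angles = np.arange(0.0, 30.0, 2.0)
    scores = [ncc(make_drr(ct_volume, a), fluoro) for a in angles]
    best = angles[int(np.argmax(scores))]
    print(f"best match at {best:.1f} degrees")  # registers the target pose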
Regarding claim 13, Landon discloses wherein the first ANN includes a trained machine learning algorithm that is trained by providing a CT image dataset including annotated target surgical areas and non-target surgical areas ([0153] Once the dataset has been established, it may be used to train a machine learning model (e.g., RNN) to make predictions of how surgery will proceed based on the current state of the CASS).
Regarding claim 14, Landon discloses wherein the second ANN includes a trained machine learning algorithm that is trained by providing a fluoroscopy image data set including annotated target surgical areas and non-target surgical areas ([0154] the machine learning model may be trained not only with the state of the CASS 100, but also with patient data (e.g., captured from an EMR) and an identification of members of the surgical staff.).
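Claims 13-14 recite supervised training on annotated target and non-target areas. As a hedged illustration only (the cited [0153]-[0154] describe training a model on CASS state and patient data, not this exact setup), a training loop for such a segmentation network might look like the following, with random tensors standing in for the annotated CT dataset:

    import torch
    from torch import nn

    # Tiny 3D segmentation network standing in for the claimed "first ANN".
    model = nn.Sequential(
        nn.Conv3d(1, 8, kernel_size=3, padding=1), nn.ReLU(),
        nn.Conv3d(8, 1, kernel_size=1),  # one logit per voxel
    )
    optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
    loss_fn = nn.BCEWithLogitsLoss()

    # Placeholder "annotated dataset": CT patches plus target/non-target masks.
    cts = torch.rand(4, 1, 16, 16, 16)
    masks = (torch.rand(4, 1, 16, 16, 16) > 0.5).float()

    for epoch in range(3):
        optimizer.zero_grad()
        logits = model(cts)
        loss = loss_fn(logits, masks)  # penalize mislabeled voxels
        loss.backward()
        optimizer.step()
        print(f"epoch {epoch}: loss {loss.item():.4f}")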
Regarding claim 15, Landon discloses wherein registering the target surgical area includes generating a registration matrix of the target surgical area ([0055] specific objects can be manually registered by a surgeon with the system preoperatively or intraoperatively. For example, by interacting with a user interface, a surgeon may identify the starting location for a tool or a bone structure.).
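A registration matrix in this context is conventionally a 4x4 homogeneous transform mapping preoperative image coordinates into the intraoperative frame; the convention below is generic and not taken from Landon, and the rotation and translation values are purely illustrative:

    import numpy as np

    theta = np.deg2rad(12.0)                  # illustrative recovered rotation
    translation = np.array([5.0, -2.0, 0.0])  # illustrative offset in mm

    # 4x4 homogeneous registration matrix: rotation about z plus translation.
    T = np.eye(4)
    T[:3, :3] = [[np.cos(theta), -np.sin(theta), 0.0],
                 [np.sin(theta),  np.cos(theta), 0.0],
                 [0.0,            0.0,           1.0]]
    T[:3, 3] = translation

    # Map a point on the CT-derived target model into fluoroscopy space.
    point_ct = np.array([10.0, 4.0, 7.0, 1.0])  # homogeneous coordinates
    point_fluoro = T @ point_ct
    print(point_fluoro[:3])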
Regarding claim 16, Landon discloses wherein the computer platform further displays the registration matrix on a user interface in a surgical area during the surgery ([0060] Display 125 provides graphical user interfaces (GUIs) that display images collected by the Tissue Navigation System 120 as well other information relevant to the surgery.).
Regarding claim 17, Landon discloses wherein the computer platform is further operative to:
generate a model of the target surgical area based on the registered location ([0061] Surgical Computer 150 provides control instructions to various components of the CASS 100, collects data from those components, and provides general processing for various data needed during surgery.).
Regarding claim 18, Landon discloses a computer program product comprising a non-transitory computer readable medium storing instructions executable by at least one processor to perform operations for computer assisted navigation during surgery ([0229] The data processing system 1700 can be a symmetric multiprocessor (SMP) system that can include a plurality of processors in the processing unit 1703. Alternatively, a single processor system may be employed.) to:
obtain a computed tomography (CT) image of a pelvic region of a patient captured prior to the surgery ([0171] creation of one or more 3D models from 2D image data. 2D image data can be acquired with less cost than volumetric image data such as MRI or CT images), wherein the CT image of the pelvic region includes a target surgical area and a non-target surgical area ([0067] the CASS 100 can include one or more powered reamers connected or attached to a robotic arm 105A or end effector 105B that prepares the pelvic bone to receive an acetabular implant according to a surgical plan),
generate from the CT image a three-dimensional (3D) CT volume of the target surgical area which excludes the non-target surgical area ([0067] the CASS 100 can power off the reamer or instruct the surgeon to power off the reamer. 2D to 3D modeling methods can provide greater confidence with respect to bone volume predictions, such as for areas of the bone that are inaccessible to a probe);
obtain a fluoroscopy image of the pelvic region of the patient captured during the surgery ([0059] systems that may be employed for tissue navigation include fluorescent imaging systems and ultrasound systems.),
merge the 3D CT volume and the fluoroscopy image ([0060] Display 125 overlays image information collected from various modalities (e.g., CT, MRI, X-ray, fluorescent, ultrasound, etc.) collected pre-operatively or intra-operatively to give the surgeon various views of the patient's anatomy as well as real-time conditions.), and
register the target surgical area based on the merged 3D CT volume and fluoroscopy image ([0176] the key points may be obtained by intersecting projected rays of 2D image landmarks in 3D space relative to a 3D candidate bone model.).
Regarding claim 19, Landon discloses segmenting the CT image using a first artificial neural network (ANN) to identify a volume of a bone in the CT image ([0178] computing device may identify (i.e., auto-landmark) the one or more key points (e.g., based on machine learning, artificial intelligence, artificial neural networks, or the like).), and
generating the 3D volume for the identified volume ([0189] each bone of interest could be modeled through the process 900 in a separate step.),
wherein the fluoroscopy image includes at least two fluoroscopy images of the pelvic region ([0195] the 2D images may comprise fluoroscopy images, projectional radiographs, 2D computed tomography images, 2D echocardiography images, ultrasound images).
Regarding claim 20, Landon discloses wherein the target surgical area includes the acetabulum ([0064] in the context of hip surgeries, the CASS 100 may include a powered, robotically controlled end effector to ream the acetabulum to accommodate an acetabular cup implant.),
wherein the fluoroscopy image includes a plurality of fluoroscopy images of the pelvic region of the patient captured during the surgery and including the non-target surgical area ([0172] a system may receive 801 a plurality of 2D images (e.g., projection radiography, plain film X-ray, cone-beam X-ray, fluoroscopy, tomography, echocardiography, ultrasound, or any known or future 2D image format) that capture at least a portion of a patient's bony anatomy (e.g., one or more bones, or bone segments, forming a joint or region of interest).),
wherein the non-target surgical area includes at least a portion of a femur of the patient ([0104] surgeon 111 can manipulate the image display to provide different anatomical perspectives of the target area and can have the option to alter or revise the planned bone cuts based on intraoperative evaluation of the patient), and
wherein the CT image contains an image of the patient in a supine position ([0205] the pre-determined view may correspond to a specific pose of the patient which is common in clinical settings, e.g. supine position, standing position, or seated position).
Conclusion
Any inquiry concerning this communication or earlier communications from the examiner should be directed to SHIVANG I PATEL whose telephone number is (571)272-8964. The examiner can normally be reached Monday-Friday, 9:00 am to 5:00 pm.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Alicia Harrington, can be reached on (571) 272-2330. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of an application may be obtained from the Patent Application Information Retrieval (PAIR) system. Status information for published applications may be obtained from either Private PAIR or Public PAIR. Status information for unpublished applications is available through Private PAIR only. For more information about the PAIR system, see https://ppair-my.uspto.gov/pair/PrivatePair. Should you have questions on access to the Private PAIR system, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative or access to the automated information system, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/SHIVANG I PATEL/Primary Examiner, Art Unit 2615