DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Status of Claims
Claims 1-19 are pending in this application. Claims 15-19 are withdrawn, and Claims 1-14 have
been examined on the merits.
Election/Restrictions
Applicant’s election without traverse of claims 10-20 drawn to Invention III and species B from Group II in the reply filed on 05/16/25 is acknowledged.
Claims 15-19 are withdrawn from further consideration pursuant to 37 CFR 1.142(b) as being drawn to nonelected inventions, there being no allowable generic or linking claim: Invention II (claim 15), drawn to a computing device; Invention III (claims 16-17), drawn to a system for assisting positioning of a tool; and Invention IV (claims 18-19), drawn to a computer program product.
Claim Objections
Claims 2-3, 5, and 9 are objected to because of the following informalities:
In claim 2, line 3, “an anatomical 3D shape” should be “the anatomical 3D shape”;
In claim 3, line 3, “a multitude of annotated imaging data sets” should be “the multitude of annotated imaging data sets”;
In claim 3, line 6, “capturing body parts” should be “capturing the body parts”;
In claim 5, line 10, “the segmented intraoperative imaging data” should be “segmented intraoperative imaging data”; and
In claim 9, line 3, “positioning guidance data” should be “the positioning guidance data”.
Appropriate correction is required.
Claim Rejections - 35 USC § 112
The following is a quotation of 35 U.S.C. 112(b):
(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.
The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:
The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.
Claims 2-3, 6-8, 12, and 14 are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or for applications subject to pre-AIA 35 U.S.C. 112, the applicant) regards as the invention.
Claim 2 recites the limitation “capturing body parts corresponding to the specific body part”. It is unclear whether multiple body parts are being captured, or whether a single body part is the focus and is imaged from multiple perspectives alongside nearby structures. For purposes of examination, the limitation will be construed as one particular body part being imaged from multiple perspectives alongside nearby structures. However, further clarification is required.
Claim 6 recites the limitation “estimating, by the computing device (10), the perspectives”. It is unclear what is meant by estimating and what specifically is being estimated. For purposes of examination, the limitation will be construed as estimating, or localizing, the direction or projection from which each perspective originates. However, further clarification is required.
Claim 7 recites the limitation of “a tool geometric model indicative of a geometry of the tool”. It is unclear whether this references a tool/medical instrument used during the imaging for a medical procedure, or a specialized tool provided by the applicant. For purposes of examination, the limitation will be construed as any medical instrument for use in medical procedures, which is captured in the imaging data. However, further clarification is required.
Claim 8 (and similarly claim 12) recites the limitation of “a tool geometrical model”. It is unclear whether this is the same tool recited in Claim 7, which first recites the tool geometrical model. For purposes of examination, Claim 8 will be construed to depend from Claim 7, which first recites the tool geometrical model. However, further clarification is required.
Claim 14 recites the limitation of “repeatedly or continuously for a period of time in preparation of/preceding a surgical treatment of the patient”. It is unclear what the bounds of the period of time are and how long, or how many times, the steps are repeated, i.e., whether data is continuously acquired until a certain amount of time before the procedure. For purposes of examination, the limitation will be construed as the method/device being capable of repeating the steps. However, further clarification is required.
All remaining claims are rejected due to their dependency from the claims rejected above.
Claim Rejections - 35 USC § 102
The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:
A person shall be entitled to a patent unless –
(a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.
Claims 1, 2, 4, 6-9, and 12-14 are rejected under 35 U.S.C. 102(a)(1) as being anticipated by Blau (US20220044440A1, cited in Applicant IDS).
Regarding Claim 1,
Blau teaches a computer-implemented method of assisting positioning of a tool (5) with respect to a specific body part (202) of a patient (200) (corresponding disclosure in at least [0008], where a method for assisting positioning of a tool is disclosed; the surgical tool is visible in the image with a coordinate system, assisting in the positioning: “object of the invention to provide a device and/or a method enabling a 3D representation and localization of at least one object at least partially visible in an X-ray projection image with respect to a coordinate system”), the method comprising:
a) receiving, by a computing device (10), intraoperative imaging data (ID) from an imaging device (20) arranged in a proximity of the patient (200) (corresponding disclosure in at least [0080], where intraoperative image data is used “use preoperative imaging to generate information about the 3D shape of the patient's specific bone, instead of working with general statistical models that describe the variability of bones”),
the intraoperative imaging data (ID) comprising 2D images, a plurality of the 2D images capturing the specific body part (202) of the patient (200) from a plurality of different perspectives with respect to the specific body part (202) of the patient (200) (corresponding disclosure in at least [0083], where multiple 2D images make up the total image, at different parts of the body part “Deep Morphing may be used to detect the contour of Object F and label points on its contour in each 2D X-ray image.”) and one or more of the plurality of the 2D images capturing at least a part of the tool (5) from at least one perspective (corresponding disclosure in at least [0084], where the images are taken from different perspectives (angles) “taking into account the 3D angle between the imaging directions, which may be determined using two different procedures. The more precisely this angle can be determined, the more precise the 3D registration may be”);
b) reconstructing, by the computing device (10), an anatomical 3D shape (AS) of the specific body part (202) using an artificial-intelligence based algorithm corresponding to the specific body part (202) based on the intraoperative imaging data (ID) and data indicative of perspectives corresponding to the plurality of the 2D images (corresponding disclosure in at least [0040], where reconstruction is performed using artificial intelligence (ML) at different perspectives “If multiple X-ray images are acquired, the information from them may be fused (or registered) to increase the accuracy of 3D reconstruction and/or determination of spatial positions or orientations. It is preferable if these X-ray images are acquired from different imaging directions because this may help resolve ambiguities. The more different the imaging directions are (e.g., AP and ML images), the more helpful the images may be in terms of a determination of 3D information” and further in [0028], where artificial intelligence (a DNN) is used corresponding to a body part “a deep neural net (DNN) may be utilized for a classification of an object in an X-ray projection image”);
c) estimating, by the computing device (10), a current position (5c) of the tool (5) with respect to the anatomical 3D shape (AS) of the specific body part (202) based on the intraoperative imaging data (ID) (corresponding disclosure in at least [0135], where steps are outlined to determine a position of the tool (nail) based on the image “A deep-morphing approach detects the outline of the nail assembly (nail, aiming device, and interface) in the X-ray image, taking into account the viewing direction (AP or ML).”); and
d) generating, by the computing device (10), positioning guidance data (GD) comprising a visual representation of the estimated current position (5c) of the tool (5) with respect to the anatomical 3D shape (AS) of the specific body part (202) (corresponding disclosure in at least [0142], where a visualization is generated in a 3D model “A visualization of the 3D model of the nail assembly and/or the proximal femur may be shown in the X-ray projection image. This could be the outline of their 2D projection or a rendering of the 3D model”).
Regarding Claim 2,
Blau further teaches wherein the step of reconstructing, by the computing device (10), an anatomical 3D shape (AS) of the specific body part (202) comprises training the artificial-intelligence based algorithm using a multitude of annotated imaging data sets capturing body parts corresponding to the specific body part (202) of the patient (200), wherein the annotations comprise data identifying and/or describing properties of the body part (corresponding disclosure in at least [0032], where there are multiple 2D images of the body part (bone of interest) “A neural net may be trained based on a multiplicity of data that is comparable to the data on which it will be applied. In case of an assessment of bone structures in images, a neural net should be trained on the basis of a multiplicity of X-ray images of bones of interest. It will be understood that the neural net may also be trained on the basis of simulated X-ray images”, further in [0033], where the algorithm determines the structure “a first neural net may be trained to evaluate X-ray image data so as to classify an anatomical structure in the 2D projection image”).
Regarding Claim 4,
Blau further teaches wherein the step of reconstructing the anatomical 3D shape (AS) comprises: a) segmenting the intraoperative imaging data (ID) in order to identify the specific body part (202) of the patient (200) (corresponding disclosure in at least [0029], where there is a first step of segmenting the data to determine the body part (object) “The technique consists of a two-stage approach (called deep morphing), where in the first stage a neural segmentation network detects the contour (outline) of the bone or other object”); and
b) reconstructing the anatomical 3D shape (AS) further using the segmented intraoperative imaging data (ID) (corresponding disclosure in at least [0029], where there is a further step of fitting the model to a shape to then be used for reconstruction “then in the second stage a statistical shape model is fit to this contour using a variant of an Active Shape Model algorithm (but other algorithms can be used as well for the second stage)” and further in [0076] “Given a 3D statistical shape or appearance model of the same anatomical structure, this model can then be deformed in a way that its virtual projection matches the actual projection in the X-ray image, hence leading to a 3D reconstruction of the anatomical structure and allowing a localization of the object and determination of the imaging direction”).
Regarding Claim 6,
Blau further teaches estimating, by the computing device (10), the perspectives corresponding to the intraoperative imaging data (ID) (corresponding disclosure in at least [0076], where the perspectives for the imaging data are estimated (the views/orientation are localized to determine the direction of the projection) “Given a 3D statistical shape or appearance model of the same anatomical structure, this model can then be deformed in a way that its virtual projection matches the actual projection in the X-ray image, hence leading to a 3D reconstruction of the anatomical structure and allowing a localization of the object and determination of the imaging direction” and further in [0077], where each individual perspective (2D image) is used to estimate (localize) “the present invention is also able to localize an implant or instrument relative to anatomy based on one X-ray image”).
Regarding Claim 7,
Blau further teaches wherein estimating perspectives corresponding to the intraoperative imaging data (ID) is performed using a tool geometrical model indicative of a geometry of the tool (5) and comprises:
a) computing a plurality of projections of the tool geometrical model from a plurality of candidate perspectives (corresponding disclosure in at least [0077], where the tool (instrument) has multiple projections “Being able to provide a 3D reconstruction of anatomy based on one X-ray image only is an advantage over the state-of-the-art, which requires the acquisition of at least two images from different viewing directions (typically an AP and an ML image). Moreover, the present invention is also able to localize an implant or instrument relative to anatomy based on one X-ray image” and further in [0147], where it is taught that multiple perspectives (imaging directions) are computed “As mentioned before, the present invention and using the invention disclosed in Blau 917 allow a determination of the imaging direction based on independent approaches utilizing different information”); and
b) identifying the perspectives corresponding to the intraoperative imaging data (ID) by comparing the at least part of the tool (5) as captured by the respective 2D image of the intraoperative imaging data (ID) with the plurality of projections from the plurality of candidate perspectives (corresponding disclosure in at least [0039], where the plurality of projections (the imaging direction) is compared with the projections (the imaging direction of the object) for determining projection direction “Based on one image of an anatomical object, the model is deformed in such a way that its virtual projection matches the actual projection of the object in the X-ray image. Doing so allows a computation of an imaging direction (which describes the direction in which the X-ray beam passes through the object). As an additional plausibility check, the computed imaging direction may then be compared with the imaging direction for the same object that is determined”).
Regarding Claim 8,
Blau further teaches wherein estimating the current position (5c) of the tool (5) is performed using a tool geometrical model indicative of a geometry of the tool (5), the step of estimating the current position (5c) of the tool (5) comprising at least one of the following steps: a) comparing a projection of the tool geometrical model - onto the plane of one or more of the 2D images of the intraoperative imaging data (ID) - with the at least part of the tool (5) as captured by the respective 2D image of the intraoperative imaging data (ID) (corresponding disclosure in at least [0085], where the tool (the nail) is captured in the image and compared to a plane of the 2D image (projection) “One way of determining this angle would be to determine the imaging directions as disclosed in Blau 917 for each X-ray image and to compute their difference. Another way may be to utilize another object in the X-ray image (called “Object G”) whose model is deterministic (e.g., a nail connected to an aiming device). By matching the virtual projection of Object G to its actual projection in each X-ray image, the imaging directions for Object G may be determined”);
and b) determining a position of the tool geometrical model that produces a projection onto the planes of the 2D images of the intraoperative imaging data (ID) that matches the at least part of the tool (5) as captured by the respective 2D image of the intraoperative imaging data (ID).
Regarding Claim 9,
Blau further teaches wherein generating positioning guidance data (GD) comprises overlaying the visual representation of the estimated current position (5c) of the tool (5) onto a visual representation of the reconstructed anatomical 3D shape (AS) (corresponding disclosure in at least [0036], where the tool (imaged object) is overlaid onto the image “When displaying the X-ray projection image, geometrical aspects and/or dimensions may be shown as an overlay in the projection image. Alternatively and/or additionally, at least a portion of the model may be shown in the X-ray image, for example as a transparent visualization or 3D rendering, which may facilitate an identification of structural aspects of the model and thus of the imaged object by a user”).
Regarding Claim 12,
Blau further teaches at least one of the following steps: a) providing a tool (5) in accordance with a tool geometrical model; and b) capturing, using an imaging device (20), intraoperative imaging data (ID) capturing at least a body part of the patient (200) and at least a part of the tool (5); and c) controlling, by the computing device (10), a display device (30) to display at least part of the guidance data (GD) (corresponding disclosure in at least [0041], where there is a tool that is being provided (in accordance with step a)) “Another way may be to utilize another object (e.g. “Object D”), also shown in both images, whose model is deterministic (e.g., a nail connected to an aiming device). By matching the virtual projection of Object D to its actual projection in each X-ray image”).
Regarding Claim 13,
Blau further teaches wherein the intraoperative imaging data (ID) comprises one or more of: a) radiation-based images, in particular X-ray image(s) of the specific body part (202) of the patient (200) respectively a part of the tool (5); b) ultrasound image(s) of the specific body part (202) of the patient (200) respectively a part of the tool (5); c) arthroscopic image(s) of the specific body part (202) of the patient (200) respectively a part of the tool (5); d) optical imagery of the specific body part (202) of the patient (200) respectively a part of the tool (5); and e) cross-sectional imaging of the patient (200) respectively a part of the tool (5) (corresponding disclosure in at least [0041], where there is an X-ray image of a specific body part with a part of the tool (object) “for each X-ray image and to compute their difference. Another way may be to utilize another object (e.g. “Object D”), also shown in both images, whose model is deterministic (e.g., a nail connected to an aiming device). By matching the virtual projection of Object D to its actual projection in each X-ray image, the imaging directions for Object D may be determined”).
Regarding Claim 14,
Blau further teaches wherein the steps of: receiving intraoperative imaging data (ID); estimating a current position (5c) of the tool (5); and generating guidance data (GD) are carried out repeatedly or continuously for a period of time in preparation of/preceding a surgical treatment of the patient (200) (corresponding disclosure in at least [0143], where steps including estimating a position of the tool (object) and generating guidance data are carried out repeatedly “Step 11: Steps 2 through 10 may be repeated. When processing a new image, the system considers any information (in particular, about image and object characteristics) it has gathered from previously processed images” and further in [0132]-[0142], where the steps are described).
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
This application currently names joint inventors. In considering patentability of the claims the examiner presumes that the subject matter of the various claims was commonly owned as of the effective filing date of the claimed invention(s) absent any evidence to the contrary. Applicant is advised of the obligation under 37 CFR 1.56 to point out the inventor and effective filing dates of each claim that was not commonly owned as of the effective filing date of the later invention in order for the examiner to consider the applicability of 35 U.S.C. 102(b)(2)(C) for any potential 35 U.S.C. 102(a)(2) prior art against the later invention.
Claims 3, 5, and 10-11 are rejected under 35 U.S.C. 103 as being unpatentable over Blau (US20220044440A1 cited in Applicant IDS) in view of Siemionow (US20190142519A1, cited in Applicant IDS).
Regarding Claim 3, Blau teaches the limitations of Claim 2, and further teaches generating a multitude of annotated imaging data sets comprising 2D image(s) capturing body parts corresponding to the specific body part (202) of the patient (200) from a plurality of different perspectives (corresponding disclosure in at least [ ]), but does not teach annotated 3D imaging data, in particular computed tomography CT scans, capturing body parts corresponding to the specific body part (202) of the patient (200).
Siemionow, in a similar field of endeavor, teaches a similar concept (surgical planning) of annotated 3D imaging data, in particular computed tomography CT scans, capturing body parts corresponding to the specific body part (202) of the patient (200) (corresponding disclosure in at least [0161], where CT scans are captured in regards to a specific body part of the patient “a set of samples are generated first, wherein LDCT images and HDCT images of the same object (such as an artificial phantom or a lumbar spine) are captured using the computed tomography device” and further in [0029], where the annotated images (labeled) are used “receives segmentation learning data comprising a plurality of batches of labeled anatomical image sets, each image set comprising image data representative of a series of slices of a three-dimensional bony structure of the anatomy”).
It would have been obvious to a person having ordinary skill in the art before the effective filing date to have incorporated the CT scans capturing body parts corresponding to the specific body part as taught by Siemionow. One of ordinary skill in the art would have been motivated to incorporate this because CT scans provide high resolution anatomical data, which is advantageous for use when training neural networks.
Regarding Claim 5, Blau teaches the limitations of Claim 4 and further teaches wherein the step of segmenting the intraoperative imaging data (ID) comprises: a) identifying region(s) of interest within the intraoperative imaging data (ID) containing the specific body part (202) of the patient (200) using at least one of an artificial-intelligence based detection and segmentation model and a convolutional neural network based detection and segmentation model (corresponding disclosure in at least [0083], where a region of interest (Object F, which refers to the specific body part) is identified in each image through the detection network “by a statistical shape or appearance model and called “Object F” in this section) based on two or more X-ray images, the procedure outlined above for one image may be extended to two or more images. That is, Deep Morphing may be used to detect the contour of Object F and label points on its contour in each 2D X-ray image. Given a 3D statistical shape model of Object F, this model can then be deformed in a way that its virtual projections simultaneously match the actual projections of Object F in two or more X-ray images as closely as possible”); and
b) segmenting the region(s) of interest using the artificial-intelligence based detection and segmentation model to thereby generate the segmented intraoperative imaging data (ID) (corresponding disclosure in at least [0076], where AI-based detection is used for segmentation of the image, specifically the region of interest (femur) “the outline/contour of a bone and label points on the contour. For instance, in the segmentation of a femur, the technique is able to determine which points on the contour in the 2D X-ray projection image correspond to the lesser trochanter, and which points correspond to the femoral neck, etc”), but does not teach semantically segmenting the regions of interest.
Siemionow, in a similar field of endeavor, teaches a similar concept of semantically segmenting the regions of interest (corresponding disclosure in at least [0172], where the regions of interest (part of anatomy) are semantically segmented using AI-based detection “The final layer for binary segmentation recognizes two classes (bone and no-bone). The semantic segmentation is capable of recognizing multiple classes, each representing a part of the anatomy”).
It would have been obvious to a person having ordinary skill in the art before the effective filing date to have incorporated semantically segmenting the regions of interest as taught by Siemionow. One of ordinary skill in the art would have been motivated to incorporate this because semantic segmentation provides high precision segmentation, as each pixel is classified, which is beneficial for training data.
Regarding Claim 10, Blau teaches the limitations of Claim 1, but does not teach: a) identifying, by the computing device (10), a prescribed position (5p) of the tool (5) with respect to the anatomical 3D shape (AS) of the specific body part (202); and b) overlaying a visual representation of the prescribed position (5p) of the tool (5) onto a visual representation of the estimated current position (5c) of the tool (5).
Siemionow, in a similar field of endeavor, teaches a similar concept of a) identifying, by the computing device (10), a prescribed position (5p) of the tool (5) with respect to the anatomical 3D shape (AS) of the specific body part (202); and b) overlaying a visual representation of the prescribed position (5p) of the tool (5) onto a visual representation of the estimated current position (5c) of the tool (5) (corresponding disclosure in at least [0125], where there is a prescribed position of the tool (suggested position of the instrument according to the preoperative plan) as well as an overlaid representation including where the real instrument is located “may demonstrate a mismatch between a supposed/suggested position of the instrument according to the pre-operative plan 161, displayed as a first virtual image of the instrument 164A located at its supposed/suggested position, and an actual position of the instrument, visible either as the real instrument via the see-through display and/or a second virtual image of the instrument 164B overlaid on the current position of the instrument” and Figure 3E further highlighting the overlaid image).
Figure 3E of Siemionow
It would have been obvious to a person having ordinary skill in the art before the effective filing date to have incorporated identifying a prescribed position with respect to the anatomical shape and overlaying the prescribed position over the estimated current position as taught by Siemionow. One of ordinary skill in the art would have been motivated to incorporate this because determining the proper trajectory of a device is essential to ensure accuracy and safety during a procedure.
Regarding Claim 11, Blau and Siemionow teach the limitations of Claim 10, and Siemionow further teaches at least one of the following steps: a) retrieving, by the computing device (10), the prescribed position (5p) of the tool (5) from a datastore comprised by or communicatively connected to the computing device (10); and b) computing the prescribed position (5p) of the tool (5) by the computing device (10), the prescribed position (5p) of the tool (5) being determined by an optimization function based on the anatomical 3D shape (AS) of the body part as well as data indicative of a surgical procedure (corresponding disclosure in at least [0107], where the prescribed position (the surgical navigation image contains the prescribed position) is stored in a database “It generates a surgical navigation image 142A comprising data of at least one of: the pre-operative plan 161 (which are generated and stored in a database before the operation), data of the intra-operative plan 162 (which can be generated live during the operation), data of the patient anatomy scan 163 (which can be generated before the operation or live during the operation) and virtual images 164 of surgical instruments used during the operation (which are stored as 3D models in a database)” and further in [0113], where the surgical navigation image comprises the prescribed position (suggested placement) “the surgical navigation image may further comprise a 3D image 171 representing at least one of: the virtual image of the instrument 164 or surgical guidance indicating suggested (ideal) trajectory and placement of surgical instruments”).
Conclusion
The prior art made of record and not relied upon is considered pertinent to applicant’s disclosure. The prior art includes US20170084036A1 (imaging and positioning of an instrument) and US20200167438A1 (prediction and image segmentation).
Any inquiry concerning this communication or earlier communications from the examiner should be directed to KAITLYN KIM whose telephone number is (571)272-1821. The examiner can normally be reached Monday-Friday 6-2 PST.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Anne Kozak can be reached at (571) 270-0552. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/K.E.K./Examiner, Art Unit 3797
/SERKAN AKAR/Primary Examiner, Art Unit 3797