DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Drawings
The drawings are objected to because paragraph [0033] of the Specification recites “FIG. 56”; however, FIG. 56 does not exist in the drawing figures.
Corrected drawing sheets in compliance with 37 CFR 1.121(d) are required in reply to the Office action to avoid abandonment of the application. Any amended replacement drawing sheet should include all of the figures appearing on the immediate prior version of the sheet, even if only one figure is being amended. The figure or figure number of an amended drawing should not be labeled as “amended.” If a drawing figure is to be canceled, the appropriate figure must be removed from the replacement sheet, and where necessary, the remaining figures must be renumbered and appropriate changes made to the brief description of the several views of the drawings for consistency. Additional replacement sheets may be necessary to show the renumbering of the remaining figures. Each drawing sheet submitted after the filing date of an application must be labeled in the top margin as either “Replacement Sheet” or “New Sheet” pursuant to 37 CFR 1.121(d). If the changes are not accepted by the examiner, the applicant will be notified and informed of any required corrective action in the next Office action. The objection to the drawings will not be held in abeyance.
Claim Objections
Claims 1-14 are objected to because of the following informalities:
Claim 1 recites the limitation "the patient registration model" in line 7. There is insufficient antecedent basis for this limitation in the claim.
The corresponding dependent claims do not cure the deficiency of Claim 1 and are therefore objected to for the same rationale.
Claim 10 recites the limitation “the two-dimensional bar code” in line 2. There is insufficient antecedent basis for this limitation in the claim. Further, claim 10 recites the limitation “a position of an EM tracker member” in line 3. However, an EM tracker member is already recited in claim 7. It appears to the examiner that “an EM tracker member” in claim 10 is a second EM tracker member. If it is referring to the EM tracker member of claim 7, applicant is advised to change it to “the EM tracker member.”
Appropriate correction is required.
Double Patenting
The nonstatutory double patenting rejection is based on a judicially created doctrine grounded in public policy (a policy reflected in the statute) so as to prevent the unjustified or improper timewise extension of the “right to exclude” granted by a patent and to prevent possible harassment by multiple assignees. A nonstatutory double patenting rejection is appropriate where the conflicting claims are not identical, but at least one examined application claim is not patentably distinct from the reference claim(s) because the examined application claim is either anticipated by, or would have been obvious over, the reference claim(s). See, e.g., In re Berg, 140 F.3d 1428, 46 USPQ2d 1226 (Fed. Cir. 1998); In re Goodman, 11 F.3d 1046, 29 USPQ2d 2010 (Fed. Cir. 1993); In re Longi, 759 F.2d 887, 225 USPQ 645 (Fed. Cir. 1985); In re Van Ornum, 686 F.2d 937, 214 USPQ 761 (CCPA 1982); In re Vogel, 422 F.2d 438, 164 USPQ 619 (CCPA 1970); In re Thorington, 418 F.2d 528, 163 USPQ 644 (CCPA 1969).
A timely filed terminal disclaimer in compliance with 37 CFR 1.321(c) or 1.321(d) may be used to overcome an actual or provisional rejection based on nonstatutory double patenting provided the reference application or patent either is shown to be commonly owned with the examined application, or claims an invention made as a result of activities undertaken within the scope of a joint research agreement. See MPEP § 717.02 for applications subject to examination under the first inventor to file provisions of the AIA as explained in MPEP § 2159. See MPEP § 2146 et seq. for applications not subject to examination under the first inventor to file provisions of the AIA. A terminal disclaimer must be signed in compliance with 37 CFR 1.321(b).
The filing of a terminal disclaimer by itself is not a complete reply to a nonstatutory double patenting (NSDP) rejection. A complete reply requires that the terminal disclaimer be accompanied by a reply requesting reconsideration of the prior Office action. Even where the NSDP rejection is provisional, the reply must be complete. See MPEP § 804, subsection I.B.1. For a reply to a non-final Office action, see 37 CFR 1.111(a). For a reply to a final Office action, see 37 CFR 1.113(c). A request for reconsideration, while not provided for in 37 CFR 1.113(c), may be filed after final for consideration. See MPEP §§ 706.07(e) and 714.13.
The USPTO Internet website contains terminal disclaimer forms which may be used. Please visit www.uspto.gov/patent/patents-forms. The actual filing date of the application in which the form is filed determines what form (e.g., PTO/SB/25, PTO/SB/26, PTO/AIA/25, or PTO/AIA/26) should be used. A web-based eTerminal Disclaimer may be filled out completely online using web-screens. An eTerminal Disclaimer that meets all requirements is auto-processed and approved immediately upon submission. For more information about eTerminal Disclaimers, refer to www.uspto.gov/patents/apply/applying-online/eterminal-disclaimer.
Claims 1, 11 and 12 are rejected on the ground of nonstatutory double patenting as being unpatentable over claims 1-3 of U.S. Patent No. 12,033,295 B2 (reference patent). Although the claims at issue are not identical, they are not patentably distinct from each other because claim 1 of the reference patent defines a method of touchless registration for a surgical procedure, comprising: scanning a region of interest (ROI) of a patient and a reference frame using a 3-D scanning device to capture a collection of spatial data points; constructing an ROI digital mesh model from the collection of spatial data points; detecting the reference frame in the collection of spatial data points; detecting an anatomical feature of the ROI digital mesh model and a corresponding anatomical feature of a patient registration model utilizing a facial detection algorithm; weighting the anatomical feature of the ROI digital mesh model and the corresponding anatomical feature of the patient registration model; and registering the ROI digital mesh model with the patient registration model utilizing the weighted anatomical features of the ROI digital mesh model and the patient registration model to generate a navigation space. The claimed scope of the instant application is broader than that of the reference patent, whose claims include the additional limitation of weighted anatomical features. Therefore, the claims of the reference patent anticipate the claims of the instant application.
The following table illustrates the conflicting claim pairs:
Instant Application    Reference Patent
1                      1
11                     2
12                     3
Claims of the instant application are compared to claims of Reference Patent in the following tables.
Instant Application
Reference Patent
1. A system for of non-contact patient registration for a surgical procedure, comprising:
a hand-held 3-D scanning device configured to obtain a 3-D scan of a region of interest (ROI) of a patient and a reference frame; and
a processor configured to execute instructions to:
construct a digital mesh model of the ROI and the reference frame from the 3-D scan;
determine a pose of the reference frame within the digital mesh model; and
register the ROI of the digital mesh model with the patient registration model, wherein anatomical features of the digital mesh model are aligned with anatomical features of the patient registration model.
1. A method of touchless registration for a surgical procedure, comprising:
scanning a region of interest (ROI) of a patient and a reference frame using a 3-D scanning device to capture a collection of spatial data points;
constructing an ROI digital mesh model from the collection of spatial data points;
detecting the reference frame in the collection of spatial data points;
constructing a reference frame digital mesh from the spatial data points;
registering a reference frame registration model with the reference frame mesh model;
detecting an anatomical feature of the ROI digital mesh model and a corresponding anatomical feature of a patient registration model utilizing a facial detection algorithm;
weighting the anatomical feature of the ROI digital mesh model and the corresponding anatomical feature of the patient registration model; and
registering the ROI digital mesh model with the patient registration model utilizing the weighted anatomical features of the ROI digital mesh model and the patient registration model to generate a navigation space;
wherein the weighting of the anatomical feature is based on a level of repeatability of positions of the anatomical features relative to the ROI.
Instant Application
Reference Patent
11. The system of claim 1, wherein the anatomical features comprise any one of a region of a nose, a region of an eye, a region of an ear, a region of a mouth, a region of a cheek, a region of an eyebrow, a region of a jaw, and any combination thereof.
2. The method of claim 1, wherein the anatomical feature comprises any one of a region of a nose, a region of an eye, a region of an ear, a region of a mouth, region of a cheek, a region of an eyebrow, a region of a jaw, and any combination thereof.
Instant Application
Reference Patent
12. The system of claim 1, wherein the processor is further configured to create the patient registration model from any one of computed tomography (CT), magnetic resonance image (MRI), computer tomography angiography (CTA), magnetic resonance angiography (MRA), and intraoperative CT images.
3. The method of claim 1, further comprising creating the patient registration model from any one of computed tomography (CT), magnetic resonance image (MRI), computer tomography angiography (CTA), magnetic resonance angiography (MRA), intraoperative CT images.
Claims 1-20 are rejected on the ground of nonstatutory double patenting as being unpatentable over claims 1-13 and 15-20 of U.S. Patent No. 12,080,005 B2 (reference patent). Although the claims at issue are not identical, they are not patentably distinct from each other because the claimed scope of the instant application is broader than that of the reference patent, whose claims include the additional limitation of weighted anatomical features. Therefore, the claims of the reference patent anticipate the claims of the instant application.
The following table illustrates the conflicting claim pairs:
Instant Application    Reference Patent
1                      1
2                      1
3                      2
4                      3
5                      4
6                      5
7                      6
8                      7
9                      8
10                     9
11                     10
12                     11
13                     12
14                     13
15                     15
16                     16
17                     17
18                     18
19                     19
20                     20
Claims of the instant application are compared to claims of Reference Patent in the following tables.
Instant Application
Reference Patent
1. A system for of non-contact patient registration for a surgical procedure, comprising:
a hand-held 3-D scanning device configured to obtain a 3-D scan of a region of interest (ROI) of a patient and a reference frame; and
a processor configured to execute instructions to:
construct a digital mesh model of the ROI and the reference frame from the 3-D scan;
determine a pose of the reference frame within the digital mesh model; and
register the ROI of the digital mesh model with the patient registration model, wherein anatomical features of the digital mesh model are aligned with anatomical features of the patient registration model.
1. A method of non-contact patient registration for a surgical procedure, comprising:
scanning a 3-D region of interest (ROI) of a patient and a reference frame using a hand-held 3-D scanning device to obtain a 3-D scan;
constructing a digital mesh model of the ROI and the reference frame from the 3-D scan;
determining a pose of the reference frame within the digital mesh model;
register a reference frame registration model with the digital mesh model of the reference frame; and
registering the ROI of the digital mesh model with a patient registration model of pre-acquired image data by detecting and weighting anatomical features of the ROI of the digital mesh model and detecting and weighting anatomical features of the ROI of the patient registration model;
wherein anatomical features of the digital mesh model are aligned with anatomical features of the patient registration model;
wherein the anatomical features include a first set of low weighted anatomical features and a second set of high weighted anatomical features.
Instant Application
Reference Patent
2. The system of claim 1, wherein the processor is further configured to detect the anatomical features of the ROI of the digital mesh model and the patient registration model.
1. A method of non-contact patient registration for a surgical procedure, comprising:
scanning a 3-D region of interest (ROI) of a patient and a reference frame using a hand-held 3-D scanning device to obtain a 3-D scan;
constructing a digital mesh model of the ROI and the reference frame from the 3-D scan;
determining a pose of the reference frame within the digital mesh model;
register a reference frame registration model with the digital mesh model of the reference frame; and
registering the ROI of the digital mesh model with a patient registration model of pre-acquired image data by detecting and weighting anatomical features of the ROI of the digital mesh model and detecting and weighting anatomical features of the ROI of the patient registration model;
wherein anatomical features of the digital mesh model are aligned with anatomical features of the patient registration model;
wherein the anatomical features include a first set of low weighted anatomical features and a second set of high weighted anatomical features.
Instant Application
Reference Patent
3. The system of claim 1, further comprising an instrument configured to be tracked relative to the reference frame via an optical or electromagnetic device.
2. The method of claim 1, further comprising tracking a pose of an instrument relative to the reference frame via an optical or electromagnetic device.
Instant Application
Reference Patent
4. The system of claim 1, wherein the digital mesh model comprises: an ROI digital mesh model; and a reference frame digital mesh model.
3. The method of claim 1, wherein the digital mesh model comprises: an ROI digital mesh model; and the reference frame digital mesh model.
Instant Application
Reference Patent
5. The system of claim 4, wherein the processor is further configured to: determine a position of the reference frame digital mesh model within the digital mesh model; and register a reference frame registration model with the reference frame mesh model.
4. The method of claim 3, further comprising: determining a position of the reference frame digital mesh model within the digital mesh model; registering the reference frame registration model with the reference frame mesh model; and display the reference frame registration model on the digital mesh model.
Instant Application
Reference Patent
6. The system of claim 1, further comprising the reference frame comprising an electromagnetic (EM) reference frame coupled to the patient within the ROI.
5. The method of claim 1, wherein the reference frame comprises an electromagnetic (EM) reference frame coupled to the patient within the ROI.
Instant Application
Reference Patent
7. The system of claim 6, wherein the EM reference frame comprises: an EM tracker member; an attachment selectively coupled to the EM tracker member; and an identifying label printed on a surface of the attachment.
6. The method of claim 5, wherein the EM reference frame comprises: an EM tracker member; an attachment selectively coupled to the EM tracker member; and an identifying label printed on a surface of the attachment.
Instant Application
Reference Patent
8. The system of claim 7, wherein the identifying label is a two-dimensional bar code.
7. The method of claim 6, wherein the identifying label is a two-dimensional bar code.
Instant Application
Reference Patent
9. The system of claim 7 wherein the attachment is color-coded.
8. The method of claim 6 wherein the attachment is color-coded.
Instant Application
Reference Patent
10. The system of claim 7, wherein the identifying label comprises a quick response code configured to provide coordinates of the two-dimensional bar code within the digital mesh model to determine a position of an EM tracker member within the digital mesh model.
9. The method of claim 6, wherein the identifying label comprises a quick response code configured to provide coordinates of two-dimensional bar code within the digital mesh model to determine a position of the EM tracker member within the digital mesh model.
Instant Application
Reference Patent
11. The system of claim 1, wherein the anatomical features comprise any one of a region of a nose, a region of an eye, a region of an ear, a region of a mouth, a region of a cheek, a region of an eyebrow, a region of a jaw, and any combination thereof.
10. The method of claim 1, wherein the anatomical features comprise any one of a region of a nose, a region of an eye, a region of an ear, a region of a mouth, a region of a cheek, a region of an eyebrow, a region of a jaw, and any combination thereof.
Instant Application
Reference Patent
12. The system of claim 1, wherein the processor is further configured to create the patient registration model from any one of computed tomography (CT), magnetic resonance image (MRI), computer tomography angiography (CTA), magnetic resonance angiography (MRA), and intraoperative CT images.
11. The method of claim 1, further comprising creating the patient registration model from any one of computed tomography (CT), magnetic resonance image (MRI), computer tomography angiography (CTA), magnetic resonance angiography (MRA), and intraoperative CT images.
Instant Application
Reference Patent
13. The system of claim 1, further comprising a workstation housing the processor and configured via a wireless communication to communicate with the 3-D scanning device.
12. The method of claim 1, further comprising transferring the digital mesh model from the 3-D scanning device to a workstation via a wireless communication technique.
Instant Application
Reference Patent
14. The system of claim 1, wherein the 3-D scanning device is handheld and whose pose is not tracked.
13. The method of claim 1, wherein the 3-D scanning device is handheld and whose pose is not tracked during scanning the ROI.
Instant Application
Reference Patent
15. A method of non-contact patient registration for a surgical procedure, comprising: spatially scanning a region of interest (ROI) of a patient and a reference frame structure using a handheld 3D-scanning device to capture a collection of spatial data points; constructing a digital mesh model from the collection of spatial data points, wherein the digital mesh model comprises: an ROI mesh model; and a reference frame mesh model; detecting the reference frame mesh model within the digital mesh model; registering the reference frame mesh model with a registration reference frame model; detecting anatomical features of the ROI mesh model and a patient registration model; and registering the ROI mesh model with the patient registration model utilizing the detected anatomical features, wherein the detected anatomical features of the ROI mesh model are aligned with the detected anatomical features of the patient registration model.
15. A method of non-contact patient registration for a surgical procedure, comprising: spatially scanning a region of interest (ROI) of a patient and a reference frame structure using a handheld 3D-scanning device to capture a collection of spatial data points; constructing a digital mesh model from the collection of spatial data points, wherein the digital mesh model comprises: an ROI mesh model; and a reference frame mesh model; detecting the reference frame mesh model within the digital mesh model; registering the reference frame mesh model with a registration reference frame model; displaying the registration reference frame model on the digital mesh model; detecting anatomical features of the ROI mesh model and anatomical features of a patient registration model pre-acquired image data; and registering the ROI mesh model with the patient registration model utilizing the detected anatomical features, wherein the detected anatomical features of the ROI mesh model are aligned with the detected anatomical features of the patient registration model; wherein the anatomical features as include a first set of low weighted anatomical features and second set of high weighted anatomical features.
Instant Application
Reference Patent
16. The method of claim 15, wherein the reference frame comprises an optical reference frame structure adjacent to the patient within the ROI, wherein the optical reference frame structure comprises: a body; a plurality of arms extending radially outward from the body; and a reflector coupled to each of the plurality of arms.
16. The method of claim 15, wherein the reference frame comprises an optical reference frame structure adjacent to the patient within the ROI, wherein the optical reference frame structure comprises: a body; a plurality of arms extending radially outward from the body; and a reflector coupled to each of the plurality of arms.
Instant Application
Reference Patent
17. The method of claim 16, wherein the optical reference frame structure further comprises an identifying label printed on a surface of an attachment.
17. The method of claim 16, wherein the optical reference frame structure further comprises an identifying label printed on a surface of an attachment.
Instant Application
Reference Patent
18. A method of non-contact patient registration for a surgical procedure, comprising: spatially scanning a region of interest (ROI) of a patient and an electromagnetic (EM) reference frame using a 3-D scanning device to capture a collection of spatial data points; constructing a digital mesh model from the collection of spatial data points, wherein the digital mesh model comprises an ROI mesh model; determining a position of the EM reference frame within the digital mesh model using the spatial data points; detecting anatomical features of the ROI mesh model and a patient registration model; and registering the ROI mesh model with the patient registration model, wherein the detected anatomical features of the digital mesh model are aligned with the detected anatomical features of the patient registration model.
18. A method of non-contact patient registration for a surgical procedure, comprising: spatially scanning a region of interest (ROI) of a patient and an electromagnetic (EM) reference frame using a 3-D scanning device to capture a collection of spatial data points; constructing a digital mesh model from the collection of spatial data points, wherein the digital mesh model comprises an ROI mesh model; determining a position of the EM reference frame within the digital mesh model using the spatial data points; detecting anatomical features of the ROI mesh model and anatomical features of a patient registration model of pre-acquired image data; and registering the ROI mesh model with the patient registration model; wherein the detected anatomical features of the digital mesh model are aligned with the detected anatomical features of the patient registration model; wherein the anatomical features of both the digital mesh model and the patient registration model are weighted based upon the type of anatomical feature; wherein the anatomical features include a first set of low weighted anatomical features and a second set of high weighted anatomical features.
Instant Application
Reference Patent
19. The method of claim 18, wherein the EM reference frame is attached to the patient within the ROI.
19. The method of claim 18, wherein the EM reference frame is attached to the patient within the ROI.
Instant Application
Reference Patent
20. The method of claim 18, wherein the position of the EM reference frame within the digital mesh model is determined utilizing position coordinates of a two-dimensional bar code coupled to the EM reference frame.
20. The method of claim 18, wherein the position of the EM reference frame within the digital mesh model is determined utilizing position coordinates of a two-dimensional bar code coupled to the EM reference frame.
Claim Rejections - 35 USC § 102
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:
A person shall be entitled to a patent unless –
(a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.
Claims 1-6, 12-15 and 18 are rejected under 35 U.S.C. 102(a)(1) as being anticipated by Casas (US 20180262743 A1), hereinafter referred to as Casas.
Regarding Claim 1, Casas teaches a system for non-contact patient registration for a surgical procedure, comprising (Casas Abstract: a real-time surgery method and apparatus for displaying a stereoscopic augmented view of a patient from a static or dynamic viewpoint of the surgeon…The computing device is coupled to the navigation system and comprises one or more processors and a non-transitory storage medium having stored thereon a computer-aided design (CAD) program. When executed by the one or more processors, the CAD program):
a hand-held 3-D scanning device configured to obtain a 3-D scan of a region of interest (ROI) of a patient and a reference frame; and a processor configured to execute instructions to (Casas [0108] tracking means 136 may include a tracking camera that works in conjunction with active or passive optical markers that are placed in the scene. In embodiments, the tracking camera may be part of the 3D scanner system 110; [0080] The 3D scanning and surface reconstruction 112 of the target portion of the patient 118 is made when the surgeon 128 desires the 3D scanning and surface reconstruction; [0081] a handheld 3D scanner is used, e.g. a portable 3D scanner forming part of the 3D scanner system 110 is held in the hand and moved around the target portion of the patient);
construct a digital mesh model of the ROI and the reference frame from the 3-D scan (Casas [0079] computer means 100 make use of techniques of surface reconstruction 112 (e.g. Delaunay triangulation, alpha shapes, ball pivoting, etc.) for converting the point cloud to a 3D surface model (e.g. polygon mesh models, surface models, or solid computer-aided design models));
determine a pose of the reference frame within the digital mesh model; and register the ROI of the digital mesh model with the patient registration model (Casas [0086] registration is done with the help of optical markers, e.g. color or reflective markers in certain predefined anatomical landmarks of the patient 118 for the 3D scanner system 110; [0103] 3D scanner system 110 is used in combination with multiple optical markers placed on the stereoscopic camera system 114, for the precise real-time tracking of its location and orientation. Alternatively, or in addition to the 3D scanner system 110, optical tracking means 136 are used for an accurate relative positioning of cameras 114),
wherein anatomical features of the digital mesh model are aligned with anatomical features of the patient registration model (Casas [0088] With the matching results, aligning the two coordinate systems is computed with software (e.g. Procrustes analysis). For a more precise registration, the preoperative images 102 are adjusted to the software needs (e.g. through segmentation by a histogram-based threshold). It will be understood that these registration methods are presented for the purpose of example, and are not intended to be limiting of the actual 3D volume-3D surface registration 120 method used in any manner).
Regarding Claim 2, Casas teaches the system of claim 1, and further teaches wherein the processor is further configured to detect the anatomical features of the ROI of the digital mesh model and the patient registration model (Casas [0192] make use of anatomical models and volume rendering 104 of individualized parts allowing for a precise registration of pose changes of the patient 118 to the 3D volume 320, by determining pose changes in real time 318 and translating them into a predefined 3D anatomical model with moveable inner structures).
Regarding Claim 3, Casas teaches the system of claim 1, and further teaches further comprising an instrument configured to be tracked relative to the reference frame via an optical or electromagnetic device (Casas [0108] Different kinds of tracking systems may be employed, either alone or combined, such as magnetic tracking, inertial tracking, ultrasonic tracking, electromagnetic tracking, etc.; [0192] The surgeon's 128 viewpoint (the stereoscopic camera system 114) in turn, is independent from the 3D scanner system 112, and can be fixed or dynamic).
Regarding Claim 4, Casas teaches the system of claim 1, and further teaches wherein the digital mesh model comprises:
an ROI digital mesh model (Casas [0075] For example, 2D slice images are acquired by a CT scan, a virtual camera is defined in space relative to the volume, iso-surfaces are extracted from the volume and rendered e.g. as polygonal meshes, or directly as a block of data; [0077] with the 3D volume of the CT scan of the healthy contralateral hemipelvis, or a combination of them, with interaction of the surgeon 128 and other users in real time, to obtain a precise 3D graphical representation of the fracture reduction obtained during surgery, before proceeding with the definitive fixation); and
a reference frame digital mesh model (Casas [0082] The two-dimensional images taken by the cameras are converted to point clouds or mesh surfaces, or both, using known methods for surface reconstruction, for example range imaging methods (e.g. structure-from-motion)).
Regarding Claim 5, Casas teaches the system of claim 4, and further teaches wherein the processor is further configured to:
determine a position of the reference frame digital mesh model within the digital mesh model (Casas [0081] multiple 3D scanners are also used for optical tracking 136 of instruments and devices 138, and for determining their location and orientation); and
register a reference frame registration model with the reference frame mesh model (Casas [0086] The 3D/3D registration, such as the 3D volume-3D surface registration 120, as well as 2D/3D registration, is done by computer means 100).
Regarding Claim 6, Casas teaches the system of claim 1, and further teaches further comprising the reference frame comprising an electromagnetic (EM) reference frame coupled to the patient within the ROI (Casas [0108] Different kinds of tracking systems may be employed, either alone or combined, such as magnetic tracking, inertial tracking, ultrasonic tracking, electromagnetic tracking, etc.; [0034] Tracking means 136, such as optical markers, may be attached to the patient 118 providing anatomic landmarks of the patient during the preoperative 102 or intraoperative images 106).
Regarding Claim 12, Casas teaches the system of claim 1, and further teaches wherein the processor is further configured to create the patient registration model from any one of computed tomography (CT), magnetic resonance image (MRI), computer tomography angiography (CTA), magnetic resonance angiography (MRA), and intraoperative CT images (Casas [0073] e.g. a group of 2D slice images acquired by a CT or MRI scan).
Regarding Claim 13, Casas teaches the system of claim 1, and further teaches further comprising a workstation housing the processor and configured via a wireless communication to communicate with the 3-D scanning device (Casas [0190] When included, communication subsystem 608 may be configured to communicatively couple computing system 600 with one or more other computing devices. Communication subsystem 608 may include wired and/or wireless communication devices compatible with one or more different communication protocols).
Regarding Claim 14, Casas teaches the system of claim 1, and further teaches wherein the 3-D scanning device is handheld and whose pose is not tracked (Casas [0077] a 3D scanner system 110 may be composed of one or more devices capable of capturing shapes of objects (and sometimes its appearance, e.g. color), usually outputting a point cloud as a data file, allowing for the construction of three-dimensional surface models of the object scanned). Casas does not explicitly disclose that the pose of the 3D scanning device is tracked; it is therefore taken that the pose of the scanning device is not tracked. Further, the purpose of the 3D scanner system is to collect a point cloud and generate a data file, not to determine the viewing position of the ROI; the device therefore need not be tracked.
Regarding Claim 15, Casas teaches a method of non-contact patient registration for a surgical procedure, comprising (Casas Abstract: a real-time surgery method and apparatus for displaying a stereoscopic augmented view of a patient from a static or dynamic viewpoint of the surgeon):
The metes and bounds of the rest of the limitations correspond to the claims as set forth in Claims 1 and 2; thus they are rejected on similar grounds and rationale as their corresponding limitations.
Regarding Claim 18, Casas teaches a method of non-contact patient registration for a surgical procedure, comprising (Casas Abstract: a real-time surgery method and apparatus for displaying a stereoscopic augmented view of a patient from a static or dynamic viewpoint of the surgeon):
The metes and bounds of the rest of the limitations correspond to the claims as set forth in Claims 1, 2 and 6; thus they are rejected on similar grounds and rationale as their corresponding limitations.
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102 of this title, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claim(s) 7-11, 19 and 20 is/are rejected under 35 U.S.C. 103 as being unpatentable over Casas (US 20180262743 A1), referred to herein as Casas, in view of THORANAGHATTE (WO 2014122301 A1), referred to herein as THORAN.
Regarding Claim 7, Casas teaches the system of claim 6, but does not teach every claimed limitation herein. However, Casas in view of THORAN teaches wherein the EM reference frame comprises:
an EM tracker member (THORAN [0061] The surface segment tracker 208 is configured to track the plurality of points of the body and the plurality of points of the object in the surface-mesh received from the 3D surface-mesh generator 122, 123; [0089] The marker 201 with topographically distinct feature is placed on the forehead with a head band 202 to secure it. The three arms of the marker are of different length for unique surface registration possibility);
an attachment selectively coupled to the EM tracker member (Casas [0034] The optical markers, alone or in combination with other tracking means, such as inertial measurement units (IMU), may be attached to the 3D scanner system 110, stereoscopic camera system 114, the surgeon 128 (e.g. in the head-mounted stereoscopic display 126)); and
an identifying label printed on a surface of the attachment (THORAN [0085] Fig. 11 shows a method wherein automatic identification of the respective Computer Aided Design (CAD) model of a given tool. The tool can also be fixed with a square tag or a barcode for identification).
THORAN discloses a method and a system for tracking an object with respect to a body for image guided surgery, which is analogous to the present patent application.
It would have been obvious for a person of ordinary skill in the art before the effective filing date of the claimed invention to have modified Casas to incorporate the teachings of THORAN, and to apply the structure of the surface segment tracker, as taught by THORAN, to the optical tracking instrument of the real-time surgery method and apparatus for displaying a stereoscopic augmented view of a patient from a static or dynamic viewpoint of the surgeon.
Doing so would provide an improved system and method for mapping navigation space to patient space in a medical procedure for the method and system for non-contact patient registration in image-guided surgery.
Regarding Claim 8, Casas in view of THORAN teaches the system of claim 7, and further teaches wherein the identifying label is a two-dimensional bar code (THORAN [0086] Fig. 12 shows a tool 302 with a square marker 301 with a binary code. The topographical T at the end of the tool facilitates to detect the exact position of the tool in the 3D surface-mesh).
Regarding Claim 9, Casas in view of THORAN teaches the system of claim 7, and further teaches wherein the attachment is color-coded (Casas [0086] registration is done with the help of optical markers, e.g. color or reflective markers in certain predefined anatomical landmarks of the patient 118 for the 3D scanner system 110).
Regarding Claim 10, Casas in view of THORAN teaches the system of claim 7, and further teaches wherein the identifying label comprises a quick response code configured to provide coordinates of the two-dimensional bar code within the digital mesh model to determine a position of an EM tracker member within the digital mesh model (THORAN [0085] The tool can also be fixed with a square tag or a barcode for identification; [0087] These markers are identified and tracked by the video camera 124. Initial estimation of the square markers 6D position, i.e. 3D position and orientation, is done by processing the video image).
Regarding Claim 11, Casas teaches the system of claim 1, but does not teach the claimed anatomical features. However, Casas in view of THORAN teaches wherein the anatomical features comprise any one of a region of a nose, a region of an eye, a region of an ear, a region of a mouth, a region of a cheek, a region of an eyebrow, a region of a jaw, and any combination thereof (THORAN [0048] The body is in one embodiment a human body. The term body shall not only include the complete body, but also individual sub-parts of the body, like the head, the nose, the knee, the shoulder, etc.). The same motivation as claim 7 applies here.
Regarding Claims 19 and 20, Casas teaches the method of claim 18. The metes and bounds of the claims substantially correspond to the claims set forth in Claims 7 and 10; thus they are rejected on similar grounds and rationale as their corresponding limitations.
Claim(s) 16 is/are rejected under 35 U.S.C. 103 as being unpatentable over Casas (US 20180262743 A1), referred to herein as Casas, in view of SRIMOHANARAJAH et al. (US 20170238998 A1), referred to herein as SRIM.
Regarding Claim 16, Casas teaches the method of claim 15, but does not teach every claimed limitation herein. However, Casas in view of SRIM teaches wherein the reference frame comprises an optical reference frame structure adjacent to the patient within the ROI (Casas [0034] Tracking means 136, such as optical markers, may be attached to the patient 118 providing anatomic landmarks of the patient during the preoperative 102 or intraoperative images 106), wherein the optical reference frame structure comprises:
a body (SRIM [0071] The wearable apparatus includes a rigid member 602 and a plurality of markers 604 attached to the rigid member 602; FIG. 6);
a plurality of arms extending radially outward from the body (SRIM [0071] Each of the plurality of markers 604 includes a reflective surface portion 606 visible by the camera 307 and a distinct identifiable portion 608 visible by the 3D scanner 309); and
a reflector coupled to each of the plurality of arms (SRIM [0071] Each of the plurality of markers 604 includes a reflective surface portion 606 visible by the camera 307).
SRIM discloses a method and a system for tracking an object with respect to a body for image guided surgery, which is analogous to the present patent application.
It would have been obvious for a person of ordinary skill in the art before the effective filing date of the claimed invention to have modified Casas to incorporate the teachings of SRIM, and apply the structure of the wearable tracking device, as taught by SRIM to the optical tracking instrument of the real-time surgery method and apparatus for displaying a stereoscopic augmented view of a patient from a static or dynamic viewpoint of the surgeon.
Doing so would provide an improved system and method for mapping navigation space to patient space in a medical procedure for the method and system for non-contact patient registration in image-guided surgery.
Claim(s) 17 is/are rejected under 35 U.S.C. 103 as being unpatentable over Casas (US 20180262743 A1), referred to herein as Casas, in view of SRIMOHANARAJAH et al. (US 20170238998 A1), referred to herein as SRIM, further in view of THORANAGHATTE (WO 2014122301 A1), referred to herein as THORAN.
Regarding Claim 17, Casas in view of SRIM teaches the method of claim 16, but does not teach every claimed limitation herein. However, THORAN teaches wherein the optical reference frame structure further comprises an identifying label printed on a surface of an attachment (THORAN [0085] Fig. 11 shows a method wherein automatic identification of the respective Computer Aided Design (CAD) model of a given tool. The tool can also be fixed with a square tag or a barcode for identification). The same motivation as claim 7 applies here.
Conclusion
Any inquiry concerning this communication or earlier communications from the examiner should be directed to Samantha (Yuehan) Wang whose telephone number is (571)270-5011. The examiner can normally be reached Monday-Friday, 8am-5pm.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, King Poon, can be reached at (571)272-7440. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/Samantha (YUEHAN) WANG/
Primary Examiner
Art Unit 2617