DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Drawings
The drawings are objected to because:
Paragraph [0033] of the Specification recites "As illustrated in FIG. 56"; however, FIG. 56 does not exist in the drawing figures.
Paragraph [0035] of the Specification recites "As illustrated in FIG. 78"; however, FIG. 78 does not exist in the drawing figures either.
Corrected drawing sheets in compliance with 37 CFR 1.121(d) are required in reply to the Office action to avoid abandonment of the application. Any amended replacement drawing sheet should include all of the figures appearing on the immediate prior version of the sheet, even if only one figure is being amended. The figure or figure number of an amended drawing should not be labeled as “amended.” If a drawing figure is to be canceled, the appropriate figure must be removed from the replacement sheet, and where necessary, the remaining figures must be renumbered and appropriate changes made to the brief description of the several views of the drawings for consistency. Additional replacement sheets may be necessary to show the renumbering of the remaining figures. Each drawing sheet submitted after the filing date of an application must be labeled in the top margin as either “Replacement Sheet” or “New Sheet” pursuant to 37 CFR 1.121(d). If the changes are not accepted by the examiner, the applicant will be notified and informed of any required corrective action in the next Office action. The objection to the drawings will not be held in abeyance.
Double Patenting
The nonstatutory double patenting rejection is based on a judicially created doctrine grounded in public policy (a policy reflected in the statute) so as to prevent the unjustified or improper timewise extension of the “right to exclude” granted by a patent and to prevent possible harassment by multiple assignees. A nonstatutory double patenting rejection is appropriate where the conflicting claims are not identical, but at least one examined application claim is not patentably distinct from the reference claim(s) because the examined application claim is either anticipated by, or would have been obvious over, the reference claim(s). See, e.g., In re Berg, 140 F.3d 1428, 46 USPQ2d 1226 (Fed. Cir. 1998); In re Goodman, 11 F.3d 1046, 29 USPQ2d 2010 (Fed. Cir. 1993); In re Longi, 759 F.2d 887, 225 USPQ 645 (Fed. Cir. 1985); In re Van Ornum, 686 F.2d 937, 214 USPQ 761 (CCPA 1982); In re Vogel, 422 F.2d 438, 164 USPQ 619 (CCPA 1970); In re Thorington, 418 F.2d 528, 163 USPQ 644 (CCPA 1969).
A timely filed terminal disclaimer in compliance with 37 CFR 1.321(c) or 1.321(d) may be used to overcome an actual or provisional rejection based on nonstatutory double patenting provided the reference application or patent either is shown to be commonly owned with the examined application, or claims an invention made as a result of activities undertaken within the scope of a joint research agreement. See MPEP § 717.02 for applications subject to examination under the first inventor to file provisions of the AIA as explained in MPEP § 2159. See MPEP § 2146 et seq. for applications not subject to examination under the first inventor to file provisions of the AIA. A terminal disclaimer must be signed in compliance with 37 CFR 1.321(b).
The USPTO Internet website contains terminal disclaimer forms which may be used. Please visit www.uspto.gov/patent/patents-forms. The filing date of the application in which the form is filed determines what form (e.g., PTO/SB/25, PTO/SB/26, PTO/AIA/25, or PTO/AIA/26) should be used. A web-based eTerminal Disclaimer may be filled out completely online using web-screens. An eTerminal Disclaimer that meets all requirements is auto-processed and approved immediately upon submission. For more information about eTerminal Disclaimers, refer to www.uspto.gov/patents/process/file/efs/guidance/eTD-info-I.jsp.
Claims 1-20 are rejected on the ground of nonstatutory double patenting as being unpatentable over claims 1-22 of U.S. Patent No. 12033295 B2 (reference patent). Although the claims at issue are not identical, they are not patentably distinct from each other because the conflicting claims recite essentially the same structure and perform essentially the same function; the claims are therefore unpatentable under obviousness-type double patenting.
The following table illustrates the conflicting claim pairs:
Instant Appl.     Reference Patent
1, 15             1
2, 16             1
3, 17             10
4, 18             11
5, 19             12
6, 20             13
7                 14
8                 2
9                 3
10, 15            1, 4
11, 15            1
12, 15            6
13                7
14                8
Claims of the instant application are compared to claims of the reference patent in the following tables.
Instant Application
reference patent
1. A system for touchless registration for a surgical procedure, comprising: a reference frame; and a processer configured to execute instructions to, construct an ROI digital mesh model from a collection of spatial data points from a scan of a region of interest (ROI) of a patient and the reference frame; detect the reference frame in the collection of spatial data points; detect an anatomical feature of the ROI digital mesh model and a corresponding anatomical feature of a patient registration model utilizing a facial detection algorithm; weight the anatomical feature of the ROI digital mesh model and the corresponding anatomical feature of the patient registration model; and register the ROI digital mesh model with the patient registration model utilizing the weighted anatomical features of the ROI digital mesh model and the patient registration model to generate a navigation space.
1. A method of touchless registration for a surgical procedure, comprising: scanning a region of interest (ROI) of a patient and a reference frame using a 3-D scanning device to capture a collection of spatial data points; constructing an ROI digital mesh model from the collection of spatial data points; detecting the reference frame in the collection of spatial data points; constructing a reference frame digital mesh from the spatial data points; registering a reference frame registration model with the reference frame mesh model; detecting an anatomical feature of the ROI digital mesh model and a corresponding anatomical feature of a patient registration model utilizing a facial detection algorithm; weighting the anatomical feature of the ROI digital mesh model and the corresponding anatomical feature of the patient registration model; and registering the ROI digital mesh model with the patient registration model utilizing the weighted anatomical features of the ROI digital mesh model and the patient registration model to generate a navigation space; wherein the weighting of the anatomical feature is based on a level of repeatability of positions of the anatomical features relative to the ROI.
Instant Application
reference patent
2. The system of claim 1, wherein the weighting of the anatomical feature is based on a level of repeatability of positions of the anatomical features relative to the ROI.
1. A method of touchless registration for a surgical procedure, comprising: scanning a region of interest (ROI) of a patient and a reference frame using a 3-D scanning device to capture a collection of spatial data points; constructing an ROI digital mesh model from the collection of spatial data points; detecting the reference frame in the collection of spatial data points; constructing a reference frame digital mesh from the spatial data points; registering a reference frame registration model with the reference frame mesh model; detecting an anatomical feature of the ROI digital mesh model and a corresponding anatomical feature of a patient registration model utilizing a facial detection algorithm; weighting the anatomical feature of the ROI digital mesh model and the corresponding anatomical feature of the patient registration model; and registering the ROI digital mesh model with the patient registration model utilizing the weighted anatomical features of the ROI digital mesh model and the patient registration model to generate a navigation space; wherein the weighting of the anatomical feature is based on a level of repeatability of positions of the anatomical features relative to the ROI.
Instant Application
reference patent
3. The system of claim 2, wherein the weighting is high when the position of the anatomical feature is repeatable relative to the ROI.
10. The method of claim 9, wherein the weighting is high when the position of the anatomical feature is repeatable relative to the ROI.
Instant Application
reference patent
4. The system of claim 3, wherein in the high weighted anatomical feature comprises any one of the bony contours around an eye, an eyebrow, a nose, a forehead region and any combination thereof.
11. The method of claim 10, wherein in the high weighted anatomical feature comprises any one of the bony contours around an eye, an eyebrow, a nose, a forehead region and any combination thereof.
Instant Application
reference patent
5. The system of claim 2, wherein the weighting is low when the position of the anatomical feature is variable relative to the ROI.
12. The method of claim 9, wherein the weighting is low when the position of the anatomical feature is variable relative to the ROI.
Instant Application
reference patent
6. The system of claim 5, wherein the low weighted anatomical feature is removed from the facial detection algorithm.
13. The method of claim 12, wherein the low weighted anatomical feature is removed from the facial detection algorithm.
Instant Application
reference patent
7. The system of claim 5, wherein the low weighted anatomical feature comprises any one a cheek region, a jaw region, a back of the head region, a region of an ear and any combination thereof.
14. The method of claim 12, wherein the low weighted anatomical feature comprises any one a cheek region, a jaw region, a back of the head region, a region of an ear and any combination thereof.
Instant Application
reference patent
8. The system of claim 1, wherein the anatomical feature comprises any one of a region of a nose, a region of an eye, a region of an ear, a region of a mouth, region of a cheek, a region of an eyebrow, a region of a jaw, and any combination thereof.
2. The method of claim 1, wherein the anatomical feature comprises any one of a region of a nose, a region of an eye, a region of an ear, a region of a mouth, region of a cheek, a region of an eyebrow, a region of a jaw, and any combination thereof.
Instant Application
reference patent
9. The system of claim 1, further comprising the processer creating the patient registration model from any one of computed tomography (CT), magnetic resonance image (MRI), computer tomography angiography (CTA), magnetic resonance angiography (MRA), intraoperative CT images.
3. The method of claim 1, further comprising creating the patient registration model from any one of computed tomography (CT), magnetic resonance image (MRI), computer tomography angiography (CTA), magnetic resonance angiography (MRA), intraoperative CT images.
Instant Application
reference patent
10. The system of claim 1, further comprising the processor: constructing a reference frame digital mesh model from the spatial data points; and detecting a location and position of the reference frame digital mesh model within a digital mesh model.
1. A method of touchless registration for a surgical procedure, comprising: scanning a region of interest (ROI) of a patient and a reference frame using a 3-D scanning device to capture a collection of spatial data points; constructing an ROI digital mesh model from the collection of spatial data points; detecting the reference frame in the collection of spatial data points; constructing a reference frame digital mesh from the spatial data points; registering a reference frame registration model with the reference frame mesh model; detecting an anatomical feature of the ROI digital mesh model and a corresponding anatomical feature of a patient registration model utilizing a facial detection algorithm; weighting the anatomical feature of the ROI digital mesh model and the corresponding anatomical feature of the patient registration model; and registering the ROI digital mesh model with the patient registration model utilizing the weighted anatomical features of the ROI digital mesh model and the patient registration model to generate a navigation space; wherein the weighting of the anatomical feature is based on a level of repeatability of positions of the anatomical features relative to the ROI.
4. The method of claim 1, further comprising: detecting a location and position of the reference frame digital mesh model within a digital mesh model.
Instant Application
reference patent
11. The system of claim 10, further comprising the processer registering a reference frame registration model with the reference frame mesh model.
1. A method of touchless registration for a surgical procedure, comprising: scanning a region of interest (ROI) of a patient and a reference frame using a 3-D scanning device to capture a collection of spatial data points; constructing an ROI digital mesh model from the collection of spatial data points; detecting the reference frame in the collection of spatial data points; constructing a reference frame digital mesh from the spatial data points; registering a reference frame registration model with the reference frame mesh model; detecting an anatomical feature of the ROI digital mesh model and a corresponding anatomical feature of a patient registration model utilizing a facial detection algorithm; weighting the anatomical feature of the ROI digital mesh model and the corresponding anatomical feature of the patient registration model; and registering the ROI digital mesh model with the patient registration model utilizing the weighted anatomical features of the ROI digital mesh model and the patient registration model to generate a navigation space; wherein the weighting of the anatomical feature is based on a level of repeatability of positions of the anatomical features relative to the ROI.
Instant Application
reference patent
12. The system of claim 10, wherein the digital mesh model comprises: the ROI digital mesh model; and the reference frame digital mesh model.
6. The method of claim 4, wherein the digital mesh model comprises: the ROI digital mesh model; and the reference frame digital mesh model.
Instant Application
reference patent
13. The system of claim 1, wherein the reference frame comprises a structure disposed adjacent the ROI.
7. The method of claim 1, wherein the reference frame comprises a structure disposed adjacent the ROI.
Instant Application
reference patent
14. The system of claim 1, wherein the reference frame comprises an electromagnetic (EM) reference frame or an optical reference frame coupled to the patient within the ROI.
8. The method of claim 1, wherein the reference frame comprises an electromagnetic (EM) reference frame or an optical reference frame coupled to the patient within the ROI.
Claims 15-20 are rejected on the ground of nonstatutory double patenting for the same reasons as set forth above for claims 1-6 and 10-12.
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102 of this title, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 1, 8, 9, 13, and 14 are rejected under 35 U.S.C. 103 as being unpatentable over Casas (US 20180262743 A1), referred to herein as Casas, in view of VARADY et al. (US 20210088811 A1), referred to herein as VARADY.
Regarding Claim 1, Casas in view of VARADY teaches a system for touchless registration for a surgical procedure, comprising (Casas Abstract: a real-time surgery method and apparatus for displaying a stereoscopic augmented view of a patient from a static or dynamic viewpoint of the surgeon):
a reference frame; and a processer configured to execute instructions to (Casas [0129] In real-time background subtraction methods the foreground objects in the video or 3D surface images are detected from the different frames (by using reference frames), and thus a background image or model is obtained; FIG. 1: 100):
construct an ROI digital mesh model from a collection of spatial data points from a scan of a region of interest (ROI) of a patient and the reference frame (Casas [0078] The 3D scanner 110 may be moved around the target portion of the patient 118 to obtain a precise 3D surface image of it through surface reconstruction 112. For example, computer means 100 may receive a dense 3D point cloud provided by the 3D scanning process, that represents the surface of the target portion of the patient 118 by a point cloud construction algorithm; [0079] converting the point cloud to a 3D surface model (e.g. polygon mesh models, surface models);
detect the reference frame in the collection of spatial data points (Casas [0081] multiple 3D scanners are also used for optical tracking 136 of instruments and devices 138, and for determining their location and orientation);
Casas does not teach, but VARADY teaches,
detect an anatomical feature of the ROI digital mesh model and a corresponding anatomical feature of a patient registration model utilizing a facial detection algorithm (VARADY [0041] a computer system (e.g., assessment platform 101) may analyze received digital input/image data to iteratively perform a sequence of feature detection, pose estimation, alignment, and model parameter adjustment. A face detection and pose estimation algorithm may be used);
weight the anatomical feature of the ROI digital mesh model and the corresponding anatomical feature of the patient registration model (VARADY [0038] Anatomic model 200 may be comprised of a mesh 201. The resolution of the mesh 201 may be altered based on curvature, location, and/or features on the user's face, etc. For example, mesh 201 around the eyes and nose may be higher resolution than mesh 201 at the top of the head); and
Casas in view of VARADY further teaches
register the ROI digital mesh model with the patient registration model utilizing the weighted anatomical features of the ROI digital mesh model and the patient registration model to generate a navigation space (Casas [0030] Registration of the 3D volume and the 3D surface 120 is performed by computer means 100, as is the registration of the stereoscopic video with the 3D surface 122. In embodiments, registration of 3D volume 104 and stereoscopic video 116 is completed through an intermediate registration of both images with the 3D surface image 112 into a common coordinate system).
VARADY discloses systems and methods for generating a 3D computer model of an eyewear product, which is analogous art to the present application.
It would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to modify Casas to incorporate the teachings of VARADY, applying the human face modeling method taught by VARADY to the optical tracking instrument of Casas's real-time surgery method and apparatus for displaying a stereoscopic augmented view of a patient from a static or dynamic viewpoint of the surgeon.
Doing so would better ensure the accuracy of the measurements in the method and system for non-contact patient registration in image-guided surgery.
Regarding Claim 8, Casas in view of VARADY teaches the system of claim 1, and further teaches wherein the anatomical feature comprises any one of a region of a nose, a region of an eye, a region of an ear, a region of a mouth, region of a cheek, a region of an eyebrow, a region of a jaw, and any combination thereof (VARADY [0038] the anatomic model 200 may include the front and side face area, though in other embodiments, the anatomic model 200 may model the entire head, while including more detail at the modeled eyes and nose).
Regarding Claim 9, Casas in view of VARADY teaches the system of claim 1, and further teaches the processer creating the patient registration model from any one of computed tomography (CT), magnetic resonance image (MRI), computer tomography angiography (CTA), magnetic resonance angiography (MRA), and intraoperative CT images (Casas [0015] an intermediate 3D surface may be obtained by surface reconstruction via 3D scanners. The intermediate 3D surface may be used for registration with a 3D volume obtained by volume rendering via image data from a CT or MR scan).
Regarding Claim 13, Casas in view of VARADY teaches the system of claim 1, and further teaches wherein the reference frame comprises a structure disposed adjacent the ROI (Casas [0034] Tracking means 136, such as optical markers, may be attached to the patient 118 providing anatomic landmarks of the patient during the preoperative 102 or intraoperative images 106).
Regarding Claim 14, Casas in view of VARADY teaches the system of claim 1, and further teaches wherein the reference frame comprises an electromagnetic (EM) reference frame or an optical reference frame coupled to the patient within the ROI (Casas [0034] Tracking means 136, such as optical markers, may be attached to the patient 118 providing anatomic landmarks of the patient during the preoperative 102 or intraoperative images 106).
Claims 10-12 and 15 are rejected under 35 U.S.C. 103 as being unpatentable over Casas (US 20180262743 A1), referred to herein as Casas, in view of VARADY et al. (US 20210088811 A1), referred to herein as VARADY, further in view of THORANAGHATTE (WO 2014122301 A1), referred to herein as THORAN.
Regarding Claim 10, Casas in view of VARADY teaches the system of claim 1, but does not teach the limitations recited in claim 10.
THORAN discloses a method and a system for tracking an object with respect to a body for image-guided surgery, which is analogous art to the present application. THORAN teaches
constructing a reference frame digital mesh model from the spatial data points (THORAN [0084] Fig. 10 shows a possibility of using coloured strips on the body (patient anatomy) to segment and register the surface meshes. 41 1 and 412 are the coloured marker strips that can be pasted on the patient's skin); and
detecting a location and position of the reference frame digital mesh model within a digital mesh model (THORAN [0084] Fig. 12 shows a tool 302 with a square marker 301 with a binary code. The topographical T at the end of the tool facilitates to detect the exact position of the tool in the 3D surface-mesh).
It would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to modify Casas to incorporate the teachings of THORAN, applying the mesh model of the surface segment tracker, as taught by THORAN, to the optical tracking instrument of Casas's real-time surgery method and apparatus for displaying a stereoscopic augmented view of a patient from a static or dynamic viewpoint of the surgeon.
Doing so would provide an improved system and method for mapping navigation space to patient space in a medical procedure, for the method and system for non-contact patient registration in image-guided surgery.
Regarding Claim 11, Casas in view of VARADY further in view of THORAN teaches the system of claim 10, and further teaches registering a reference frame registration model with the reference frame mesh model (THORAN [0074] Fig 4 shows how the 3D surface-mesh can be registered relative to the 3D model of the body, i.e. how the coordinates of the 3D surface-mesh of the body in the coordinate system of the 3D surface-mesh generator 122, 123 can be transformed to the coordinates of the 3D model of the body).
Regarding Claim 12, Casas in view of VARADY further in view of THORAN teaches the system of claim 10, and further teaches wherein the digital mesh model comprises:
the ROI digital mesh model (Casas [0080] The 3D scanning and surface reconstruction 112 of the target portion of the patient 118 is made when the surgeon 128 desires the 3D scanning and surface reconstruction; [0082] The two-dimensional images taken by the cameras are converted to point clouds or mesh surfaces); and
the reference frame digital mesh model (THORAN [0077] The steps 816, 817 and 818 could be either automated or approximate manual selection followed by pair-point based registration could be done. Once manually initialised these steps can be automated in next cycle by continuously tracking the surfaces using a priori positional information of these meshes in previous cycles).
Regarding Claim 15, Casas in view of VARADY further in view of THORAN teaches a method of non-contact registration for an image guided surgical procedure, comprising (Casas Abstract: a real-time surgery method and apparatus for displaying a stereoscopic augmented view of a patient from a static or dynamic viewpoint of the surgeon):
The metes and bounds of the remaining limitations correspond to those set forth in claims 1 and 10-12; thus, they are rejected on similar grounds and rationale as their corresponding limitations.
Allowable Subject Matter
Claims 2-7 and 16-20 are objected to as being dependent upon a rejected base claim, but would be allowable if rewritten in independent form including all of the limitations of the base claim and any intervening claims.
The following is a statement of reasons for the indication of allowable subject matter:
Regarding Claim 2, Casas in view of VARADY teaches the system of claim 1. However, in the context of claims 1 and 2 as a whole, the prior art does not teach wherein the weighting of the anatomical feature is based on a level of repeatability of positions of the anatomical features relative to the ROI. Therefore, claim 2 in the context of claim 1 as a whole is allowable if rewritten in independent form.
Regarding Claim 16, Casas in view of VARADY teaches the method of claim 15. However, in the context of claims 15 and 16 as a whole, the prior art does not teach wherein the weighting of the anatomical feature is based on a level of repeatability of a position of the anatomical feature relative to the ROI. Therefore, claim 16 in the context of claim 15 as a whole is allowable.
The corresponding dependent claims are objected to by virtue of their dependency.
Conclusion
Any inquiry concerning this communication or earlier communications from the examiner should be directed to Samantha (Yuehan) Wang whose telephone number is (571)270-5011. The examiner can normally be reached Monday-Friday, 8am-5pm.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, King Poon can be reached at (571)272-7440. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/Samantha (YUEHAN) WANG/
Primary Examiner
Art Unit 2617