Prosecution Insights
Last updated: April 18, 2026
Application No. 19/037,703

METHOD AND SYSTEM FOR GENERATION OF A MODEL FOR USE IN A VIRTUAL EXTRACTION PROCEDURE OF A TARGETED EXTRACTION OBJECT IN A PATIENT

Non-Final Office Action (§DP: nonstatutory double patenting)

Filed: Jan 27, 2025
Examiner: MUSHAMBO, MARTIN
Art Unit: 2615
Tech Center: 2600 (Communications)
Assignee: Institut Straumann AG
OA Round: 1 (Non-Final)
Grant Probability: 85% (Favorable)
Expected OA Rounds: 1-2
Time to Grant: 2y 5m
Grant Probability With Interview: 99%

Examiner Intelligence

Career Allow Rate: 85% (above average): 690 granted / 816 resolved, +22.6% vs TC avg
Interview Lift: +14.1% (moderate lift), based on resolved cases with interview
Typical Timeline: 2y 5m avg prosecution; 15 applications currently pending
Career History: 831 total applications across all art units
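The headline figures above are simple ratios, and the deltas appear to be percentage-point differences against the Tech Center average. A quick sanity check of that reading (variable names are ours, not the report's):

```python
# Recompute the examiner dashboard figures from the raw counts above.
granted = 690
resolved = 816

career_allow_rate = granted / resolved        # fraction of resolved cases granted
print(f"{career_allow_rate:.1%}")             # ~84.6%, displayed as 85% above

# If "+22.6% vs TC avg" is a percentage-point delta (our assumption),
# the implied Tech Center average allowance rate is roughly:
tc_avg = career_allow_rate - 0.226
print(f"{tc_avg:.1%}")                        # ~62.0%
```

This matches the rounded 85% shown in the report, consistent with the percentage-point interpretation of the "+22.6%" delta.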

Statute-Specific Performance

§101: 12.7% (-27.3% vs TC avg)
§103: 48.5% (+8.5% vs TC avg)
§102: 23.7% (-16.3% vs TC avg)
§112: 8.6% (-31.4% vs TC avg)

Tech Center averages are estimates. Based on career data from 816 resolved cases.

Office Action (§DP)
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Information Disclosure Statement

The information disclosure statements (IDS) submitted on 04/03/2025 and 01/30/2025 are in compliance with the provisions of 37 CFR 1.97. Accordingly, the information disclosure statements are being considered by the examiner.

Double Patenting

The nonstatutory double patenting rejection is based on a judicially created doctrine grounded in public policy (a policy reflected in the statute) so as to prevent the unjustified or improper timewise extension of the “right to exclude” granted by a patent and to prevent possible harassment by multiple assignees. A nonstatutory double patenting rejection is appropriate where the conflicting claims are not identical, but at least one examined application claim is not patentably distinct from the reference claim(s) because the examined application claim is either anticipated by, or would have been obvious over, the reference claim(s). See, e.g., In re Berg, 140 F.3d 1428, 46 USPQ2d 1226 (Fed. Cir. 1998); In re Goodman, 11 F.3d 1046, 29 USPQ2d 2010 (Fed. Cir. 1993); In re Longi, 759 F.2d 887, 225 USPQ 645 (Fed. Cir. 1985); In re Van Ornum, 686 F.2d 937, 214 USPQ 761 (CCPA 1982); In re Vogel, 422 F.2d 438, 164 USPQ 619 (CCPA 1970); In re Thorington, 418 F.2d 528, 163 USPQ 644 (CCPA 1969).

A timely filed terminal disclaimer in compliance with 37 CFR 1.321(c) or 1.321(d) may be used to overcome an actual or provisional rejection based on nonstatutory double patenting provided the reference application or patent either is shown to be commonly owned with the examined application, or claims an invention made as a result of activities undertaken within the scope of a joint research agreement. 
See MPEP § 717.02 for applications subject to examination under the first inventor to file provisions of the AIA as explained in MPEP § 2159. See MPEP § 2146 et seq. for applications not subject to examination under the first inventor to file provisions of the AIA. A terminal disclaimer must be signed in compliance with 37 CFR 1.321(b).

The filing of a terminal disclaimer by itself is not a complete reply to a nonstatutory double patenting (NSDP) rejection. A complete reply requires that the terminal disclaimer be accompanied by a reply requesting reconsideration of the prior Office action. Even where the NSDP rejection is provisional, the reply must be complete. See MPEP § 804, subsection I.B.1. For a reply to a non-final Office action, see 37 CFR 1.111(a). For a reply to a final Office action, see 37 CFR 1.113(c). A request for reconsideration, while not provided for in 37 CFR 1.113(c), may be filed after final for consideration. See MPEP §§ 706.07(e) and 714.13.

The USPTO Internet website contains terminal disclaimer forms which may be used. Please visit www.uspto.gov/patent/patents-forms. The actual filing date of the application in which the form is filed determines what form (e.g., PTO/SB/25, PTO/SB/26, PTO/AIA/25, or PTO/AIA/26) should be used. A web-based eTerminal Disclaimer may be filled out completely online using web-screens. An eTerminal Disclaimer that meets all requirements is auto-processed and approved immediately upon submission. For more information about eTerminal Disclaimers, refer to www.uspto.gov/patents/apply/applying-online/eterminal-disclaimer.

Claims 1-7 are rejected on the ground of nonstatutory double patenting as being unpatentable over claims 1-3 and 7 of U.S. Patent No. 12,288,291 as described in the table below. Although the claims at issue are not identical, they are not patentably distinct from each other because claims 1-7 of the current application are obvious variations of claims 1-3 and 7 of U.S. Patent No. 12,288,291. 
Current application 19037703 U.S. Patent No. 12288291 1. A computer implemented method for automatically generating, by one or more computer processors, a 3-dimensional model for use in a virtual extraction of a targeted extraction object of a patient, the targeted extraction object comprising an object in a patient’s anatomy, the method based on each of a surface scan of an anatomical region of the patient’s oral cavity and a volumetric density scan of the anatomical region, the anatomical region including at least the targeted extraction object, the method comprising: receiving a first dataset of labeled surface scan segments, each surface scan segment comprising a 3-dimensional (3D) surface model of a corresponding object recognized and segmented from surface scan data of the surface scan, and each surface scan segment having an associated label identifying the surface scan segment; receiving a second dataset of labeled volumetric density scan segments, each volumetric density scan segment comprising a 3-dimensional (3D) volumetric density model of a boundary surface of a corresponding object recognized and segmented from volumetric density scan data of the volumetric density scan, and each volumetric density scan segment having an associated label identifying the volumetric density scan segment; cross-mounting in a common 3D coordinate system, labeled surface scan segments from the first dataset to labeled volumetric density scan segments from the second dataset; receiving identification of the targeted extraction object; identifying the 3D volumetric density model associated with the volumetric density scan segment label which corresponds to the identified targeted extraction object; identifying the 3D surface model associated with the surface scan segment label which corresponds to the identified targeted extraction object; generating a third dataset comprising a model comprising a 3D model of a socket being an equivalent of the identified 3D volumetric 
density model less portions of the identified 3D volumetric density model co-represented in the identified 3D surface model. 1. (Previously Presented) A computer implemented method for automatically generating, by one or more computer processors, a 3-dimensional model for use in a virtual extraction of a targeted extraction object of a patient, the targeted extraction object comprising an object in a patient’s anatomy, the method based on each of a surface scan of an anatomical region of the patient’s oral cavity and a volumetric density scan of the anatomical region, the anatomical region including at least the targeted extraction object, the method comprising: receiving a first dataset of labeled surface scan segments, each surface scan segment comprising a 3-dimensional (3D) surface model of a corresponding object recognized and segmented from surface scan data of the surface scan, and each surface scan segment having an associated label identifying the surface scan segment; receiving a second dataset of labeled volumetric density scan segments, each volumetric density scan segment comprising a 3-dimensional (3D) volumetric density model of a boundary surface of a corresponding object recognized and segmented from volumetric density scan data of the volumetric density scan, and each volumetric density scan segment having an associated label identifying the volumetric density scan segment; cross-mounting in a common 3D coordinate system, labeled surface scan segments from the first dataset to labeled volumetric density scan segments from the second dataset; receiving identification of the targeted extraction object; identifying the 3D volumetric density model associated with the volumetric density scan segment label which corresponds to the identified targeted extraction object; identifying the 3D surface model associated with the surface scan segment label which corresponds to the identified targeted extraction object; generating a third dataset comprising a 
model comprising: a 3D model of a socket being an equivalent of the identified 3D volumetric density model less portions of the identified 3D volumetric density model co-represented in the identified 3D surface model; at least a portion of the identified 3D volumetric density model co-represented in the identified 3D surface model and/or at least a portion of the identified 3D surface model co-represented in the identified 3D volumetric density model; at least a portion of the identified 3D volumetric density model less portions of the identified 3D volumetric density model co-represented in the identified 3D surface model; and/or at least a subset of the first dataset less portions of the identified 3D surface model co-represented in the identified 3D volumetric density model. 2. A computer implemented method for automatically generating, by one or more computer processors, a 3-dimensional model for use in a virtual extraction of a targeted extraction object of a patient, the targeted extraction object comprising an object in a patient’s anatomy, the method based on each of a surface scan of an anatomical region of the patient’s oral cavity and a volumetric density scan of the anatomical region, the anatomical region including at least the targeted extraction object, the method comprising: receiving a first dataset of labeled surface scan segments, each surface scan segment comprising a 3-dimensional (3D) surface model of a corresponding object recognized and segmented from surface scan data of the surface scan, and each surface scan segment having an associated label identifying the surface scan segment; receiving a second dataset of labeled volumetric density scan segments, each volumetric density scan segment comprising a 3-dimensional (3D) volumetric density model of a boundary surface of a corresponding object recognized and segmented from volumetric density scan data of the volumetric density scan, and each volumetric density scan segment having an 
associated label identifying the volumetric density scan segment; cross-mounting in a common 3D coordinate system, labeled surface scan segments from the first dataset to labeled volumetric density scan segments from the second dataset; receiving identification of the targeted extraction object; identifying the 3D volumetric density model associated with the volumetric density scan segment label which corresponds to the identified targeted extraction object;identifying the 3D surface model associated with the surface scan segment label which corresponds to the identified targeted extraction object; generating a third dataset comprising a model comprising at least a portion of the identified 3D volumetric density model co-represented in the identified 3D surface model and/or at least a portion of the identified 3D surface model co-represented in the identified 3D volumetric density model. 1. (Previously Presented) A computer implemented method for automatically generating, by one or more computer processors, a 3-dimensional model for use in a virtual extraction of a targeted extraction object of a patient, the targeted extraction object comprising an object in a patient’s anatomy, the method based on each of a surface scan of an anatomical region of the patient’s oral cavity and a volumetric density scan of the anatomical region, the anatomical region including at least the targeted extraction object, the method comprising: receiving a first dataset of labeled surface scan segments, each surface scan segment comprising a 3-dimensional (3D) surface model of a corresponding object recognized and segmented from surface scan data of the surface scan, and each surface scan segment having an associated label identifying the surface scan segment; receiving a second dataset of labeled volumetric density scan segments, each volumetric density scan segment comprising a 3-dimensional (3D) volumetric density model of a boundary surface of a corresponding object recognized and 
segmented from volumetric density scan data of the volumetric density scan, and each volumetric density scan segment having an associated label identifying the volumetric density scan segment; cross-mounting in a common 3D coordinate system, labeled surface scan segments from the first dataset to labeled volumetric density scan segments from the second dataset; receiving identification of the targeted extraction object; identifying the 3D volumetric density model associated with the volumetric density scan segment label which corresponds to the identified targeted extraction object; identifying the 3D surface model associated with the surface scan segment label which corresponds to the identified targeted extraction object; generating a third dataset comprising a model comprising: a 3D model of a socket being an equivalent of the identified 3D volumetric density model less portions of the identified 3D volumetric density model co-represented in the identified 3D surface model; at least a portion of the identified 3D volumetric density model co-represented in the identified 3D surface model and/or at least a portion of the identified 3D surface model co-represented in the identified 3D volumetric density model; at least a portion of the identified 3D volumetric density model less portions of the identified 3D volumetric density model co-represented in the identified 3D surface model; and/or at least a subset of the first dataset less portions of the identified 3D surface model co-represented in the identified 3D volumetric density model. 3. 
A computer implemented method for automatically generating, by one or more computer processors, a 3-dimensional model for use in a virtual extraction of a targeted extraction object of a patient, the targeted extraction object comprising an object in a patient’s anatomy, the method based on each of a surface scan of an anatomical region of the patient’s oral cavity and a volumetric density scan of the anatomical region, the anatomical region including at least the targeted extraction object, the method comprising: receiving a first dataset of labeled surface scan segments, each surface scan segment comprising a 3-dimensional (3D) surface model of a corresponding object recognized and segmented from surface scan data of the surface scan, and each surface scan segment having an associated label identifying the surface scan segment; receiving a second dataset of labeled volumetric density scan segments, each volumetric density scan segment comprising a 3-dimensional (3D) volumetric density model of a boundary surface of a corresponding object recognized and segmented from volumetric density scan data of the volumetric density scan, and each volumetric density scan segment having an associated label identifying the volumetric density scan segment; cross-mounting in a common 3D coordinate system, labeled surface scan segments from the first dataset to labeled volumetric density scan segments from the second dataset; receiving identification of the targeted extraction object; identifying the 3D volumetric density model associated with the volumetric density scan segment label which corresponds to the identified targeted extraction object; identifying the 3D surface model associated with the surface scan segment label which corresponds to the identified targeted extraction object; generating a third dataset comprising a model comprising at least a portion of the identified 3D volumetric density model less portions of the identified 3D volumetric density model 
co-represented in the identified 3D surface model. 1. (Previously Presented) A computer implemented method for automatically generating, by one or more computer processors, a 3-dimensional model for use in a virtual extraction of a targeted extraction object of a patient, the targeted extraction object comprising an object in a patient’s anatomy, the method based on each of a surface scan of an anatomical region of the patient’s oral cavity and a volumetric density scan of the anatomical region, the anatomical region including at least the targeted extraction object, the method comprising: receiving a first dataset of labeled surface scan segments, each surface scan segment comprising a 3-dimensional (3D) surface model of a corresponding object recognized and segmented from surface scan data of the surface scan, and each surface scan segment having an associated label identifying the surface scan segment; receiving a second dataset of labeled volumetric density scan segments, each volumetric density scan segment comprising a 3-dimensional (3D) volumetric density model of a boundary surface of a corresponding object recognized and segmented from volumetric density scan data of the volumetric density scan, and each volumetric density scan segment having an associated label identifying the volumetric density scan segment; cross-mounting in a common 3D coordinate system, labeled surface scan segments from the first dataset to labeled volumetric density scan segments from the second dataset; receiving identification of the targeted extraction object; identifying the 3D volumetric density model associated with the volumetric density scan segment label which corresponds to the identified targeted extraction object; identifying the 3D surface model associated with the surface scan segment label which corresponds to the identified targeted extraction object; generating a third dataset comprising a model comprising: a 3D model of a socket being an equivalent of the 
identified 3D volumetric density model less portions of the identified 3D volumetric density model co-represented in the identified 3D surface model; at least a portion of the identified 3D volumetric density model co-represented in the identified 3D surface model and/or at least a portion of the identified 3D surface model co-represented in the identified 3D volumetric density model; at least a portion of the identified 3D volumetric density model less portions of the identified 3D volumetric density model co-represented in the identified 3D surface model; and/or at least a subset of the first dataset less portions of the identified 3D surface model co-represented in the identified 3D volumetric density model. 4. A computer implemented method for automatically generating, by one or more computer processors, a 3-dimensional model for use in a virtual extraction of a targeted extraction object of a patient, the targeted extraction object comprising an object in a patient’s anatomy, the method based on each of a surface scan of an anatomical region of the patient’s oral cavity and a volumetric density scan of the anatomical region, the anatomical region including at least the targeted extraction object, the method comprising: receiving a first dataset of labeled surface scan segments, each surface scan segment comprising a 3-dimensional (3D) surface model of a corresponding object recognized and segmented from surface scan data of the surface scan, and each surface scan segment having an associated label identifying the surface scan segment; receiving a second dataset of labeled volumetric density scan segments, each volumetric density scan segment comprising a 3-dimensional (3D) volumetric density model of a boundary surface of a corresponding object recognized and segmented from volumetric density scan data of the volumetric density scan, and each volumetric density scan segment having an associated label identifying the volumetric density scan segment; 
cross-mounting in a common 3D coordinate system, labeled surface scan segments from the first dataset to labeled volumetric density scan segments from the second dataset; receiving identification of the targeted extraction object; identifying the 3D volumetric density model associated with the volumetric density scan segment label which corresponds to the identified targeted extraction object; identifying the 3D surface model associated with the surface scan segment label which corresponds to the identified targeted extraction object; generating a third dataset comprising a model comprising at least a subset of the first dataset less portions of the identified 3D surface model co-represented in the identified 3D volumetric density model. 1. (Previously Presented) A computer implemented method for automatically generating, by one or more computer processors, a 3-dimensional model for use in a virtual extraction of a targeted extraction object of a patient, the targeted extraction object comprising an object in a patient’s anatomy, the method based on each of a surface scan of an anatomical region of the patient’s oral cavity and a volumetric density scan of the anatomical region, the anatomical region including at least the targeted extraction object, the method comprising: receiving a first dataset of labeled surface scan segments, each surface scan segment comprising a 3-dimensional (3D) surface model of a corresponding object recognized and segmented from surface scan data of the surface scan, and each surface scan segment having an associated label identifying the surface scan segment; receiving a second dataset of labeled volumetric density scan segments, each volumetric density scan segment comprising a 3-dimensional (3D) volumetric density model of a boundary surface of a corresponding object recognized and segmented from volumetric density scan data of the volumetric density scan, and each volumetric density scan segment having an associated label 
identifying the volumetric density scan segment; cross-mounting in a common 3D coordinate system, labeled surface scan segments from the first dataset to labeled volumetric density scan segments from the second dataset; receiving identification of the targeted extraction object; identifying the 3D volumetric density model associated with the volumetric density scan segment label which corresponds to the identified targeted extraction object; identifying the 3D surface model associated with the surface scan segment label which corresponds to the identified targeted extraction object; generating a third dataset comprising a model comprising: a 3D model of a socket being an equivalent of the identified 3D volumetric density model less portions of the identified 3D volumetric density model co-represented in the identified 3D surface model; at least a portion of the identified 3D volumetric density model co-represented in the identified 3D surface model and/or at least a portion of the identified 3D surface model co-represented in the identified 3D volumetric density model; at least a portion of the identified 3D volumetric density model less portions of the identified 3D volumetric density model co-represented in the identified 3D surface model; and/or at least a subset of the first dataset less portions of the identified 3D surface model co-represented in the identified 3D volumetric density model. 5. The method of any one of claims 1 to 4, the generation step comprising: determining portions of the identified 3D volumetric density model not co- represented in the identified 3D surface model; and, optionally, storing, in a computer-readable data storage component, the determined portions of the identified 3D volumetric density model as the 3D model of the socket. 2. 
(Currently Amended) The method of claim 1, the generation step comprising: determining portions of the identified 3D volumetric density model not co-represented in the identified 3D surface model; and storing, in a computer-readable data storage component, the determined portions of the identified 3D volumetric density-model as the 3D model of the socket. 6. The method of any one of claims 1 to 4, the generation step comprising: determining portions of the identified 3D volumetric density model co- represented in the identified 3D surface model; determining a difference between the determined co-represented portions and the identified 3D volumetric density model by removing the determined co-represented portions from the identified 3D volumetric density model; and, optionally, storing, in a computer-readable data storage component the determined difference as the 3D model of the socket. 3. (Currently Amended) The method of claim 1, the generation step comprising: determining portions of the identified 3D volumetric density model co-represented in the identified 3D surface model; determining a difference between the determined co-represented portions and the identified 3D volumetric density-model by removing the determined co-represented portions from the identified 3D volumetric density model; and storing, in a computer-readable data storage component the determined difference as the 3D model of the socket. 7. The method of any one of claims 1 to 4, comprising: determining a boundary of the identified 3D surface model and generating a cut- line therefrom; and projecting the cut-line onto the identified 3D volumetric density model. 7. (Previously Presented) The method of claim 1, comprising: determining a boundary of the identified 3D surface model and generating a cut-line therefrom; and projecting the cut-line onto the identified 3D volumetric density model. 
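Stripped of claim language, independent claim 1 in both documents recites the same pipeline: segment and label a surface scan and a volumetric density scan, register ("cross-mount") both segmentations into a common 3D coordinate system, identify the targeted tooth, and subtract the portions co-represented in the surface model from the volumetric model, leaving the socket. A minimal sketch of that final generating step, representing each cross-mounted model as a set of voxel coordinates (the set-based representation and all names are our simplification for illustration, not the applicant's implementation):

```python
# Toy sketch of the claimed "socket" generation. Models are sets of voxel
# coordinates assumed to be already cross-mounted in a common 3D grid.
# This voxel-set simplification is ours, not the applicant's method.

def generate_socket(volumetric_model: set, surface_model: set) -> set:
    """Socket = the identified 3D volumetric density model less the portions
    of it co-represented in the identified 3D surface model (claim 1)."""
    co_represented = volumetric_model & surface_model
    return volumetric_model - co_represented

# A tooth as seen in the volumetric scan (crown + root, z = 0..5) versus the
# intraoral surface scan (exposed crown only, z = 4..5): the difference is
# the root-shaped socket left behind after virtual extraction.
cbct_tooth = {(x, y, z) for x in range(3) for y in range(3) for z in range(6)}
surface_tooth = {(x, y, z) for x in range(3) for y in range(3) for z in range(4, 6)}

socket = generate_socket(cbct_tooth, surface_tooth)
print(len(cbct_tooth), len(surface_tooth), len(socket))  # 54 18 36
```

In a real system the models would be meshes or volumetric masks and the subtraction a boolean mesh operation, but the set difference captures the claim's "less portions ... co-represented" logic.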
Allowable Subject Matter

Claims 1-7 would be allowable if rewritten or amended to overcome the rejection(s) under Double Patenting, set forth in this Office action. The following is a statement of reasons for the indication of allowable subject matter: no prior art teaches the italicized and bolded features.

Claim 1. A computer implemented method for automatically generating, by one or more computer processors, a 3-dimensional model for use in a virtual extraction of a targeted extraction object of a patient, the targeted extraction object comprising an object in a patient’s anatomy, the method based on each of a surface scan of an anatomical region of the patient’s oral cavity and a volumetric density scan of the anatomical region, the anatomical region including at least the targeted extraction object, the method comprising: receiving a first dataset of labeled surface scan segments, each surface scan segment comprising a 3-dimensional (3D) surface model of a corresponding object recognized and segmented from surface scan data of the surface scan, and each surface scan segment having an associated label identifying the surface scan segment; receiving a second dataset of labeled volumetric density scan segments, each volumetric density scan segment comprising a 3-dimensional (3D) volumetric density model of a boundary surface of a corresponding object recognized and segmented from volumetric density scan data of the volumetric density scan, and each volumetric density scan segment having an associated label identifying the volumetric density scan segment; cross-mounting in a common 3D coordinate system, labeled surface scan segments from the first dataset to labeled volumetric density scan segments from the second dataset; receiving identification of the targeted extraction object; identifying the 3D volumetric density model associated with the volumetric density scan segment label which corresponds to the identified targeted extraction object; identifying the 3D 
surface model associated with the surface scan segment label which corresponds to the identified targeted extraction object; generating a third dataset comprising a model comprising a 3D model of a socket being an equivalent of the identified 3D volumetric density model less portions of the identified 3D volumetric density model co-represented in the identified 3D surface model.

Claim 2. A computer implemented method for automatically generating, by one or more computer processors, a 3-dimensional model for use in a virtual extraction of a targeted extraction object of a patient, the targeted extraction object comprising an object in a patient’s anatomy, the method based on each of a surface scan of an anatomical region of the patient’s oral cavity and a volumetric density scan of the anatomical region, the anatomical region including at least the targeted extraction object, the method comprising: receiving a first dataset of labeled surface scan segments, each surface scan segment comprising a 3-dimensional (3D) surface model of a corresponding object recognized and segmented from surface scan data of the surface scan, and each surface scan segment having an associated label identifying the surface scan segment; receiving a second dataset of labeled volumetric density scan segments, each volumetric density scan segment comprising a 3-dimensional (3D) volumetric density model of a boundary surface of a corresponding object recognized and segmented from volumetric density scan data of the volumetric density scan, and each volumetric density scan segment having an associated label identifying the volumetric density scan segment; cross-mounting in a common 3D coordinate system, labeled surface scan segments from the first dataset to labeled volumetric density scan segments from the second dataset; receiving identification of the targeted extraction object; identifying the 3D volumetric density model associated with the volumetric density scan segment label which 
corresponds to the identified targeted extraction object; identifying the 3D surface model associated with the surface scan segment label which corresponds to the identified targeted extraction object; generating a third dataset comprising a model comprising at least a portion of the identified 3D volumetric density model co-represented in the identified 3D surface model and/or at least a portion of the identified 3D surface model co-represented in the identified 3D volumetric density model.

Claim 3. A computer implemented method for automatically generating, by one or more computer processors, a 3-dimensional model for use in a virtual extraction of a targeted extraction object of a patient, the targeted extraction object comprising an object in a patient’s anatomy, the method based on each of a surface scan of an anatomical region of the patient’s oral cavity and a volumetric density scan of the anatomical region, the anatomical region including at least the targeted extraction object, the method comprising: receiving a first dataset of labeled surface scan segments, each surface scan segment comprising a 3-dimensional (3D) surface model of a corresponding object recognized and segmented from surface scan data of the surface scan, and each surface scan segment having an associated label identifying the surface scan segment; receiving a second dataset of labeled volumetric density scan segments, each volumetric density scan segment comprising a 3-dimensional (3D) volumetric density model of a boundary surface of a corresponding object recognized and segmented from volumetric density scan data of the volumetric density scan, and each volumetric density scan segment having an associated label identifying the volumetric density scan segment; cross-mounting in a common 3D coordinate system, labeled surface scan segments from the first dataset to labeled volumetric density scan segments from the second dataset; receiving identification of the targeted extraction object; 
identifying the 3D volumetric density model associated with the volumetric density scan segment label which corresponds to the identified targeted extraction object; identifying the 3D surface model associated with the surface scan segment label which corresponds to the identified targeted extraction object; generating a third dataset comprising a model comprising at least a portion of the identified 3D volumetric density model less portions of the identified 3D volumetric density model co-represented in the identified 3D surface model.Claim 4. A computer implemented method for automatically generating, by one or more computer processors, a 3-dimensional model for use in a virtual extraction of a targeted extraction object of a patient, the targeted extraction object comprising an object in a patient’s anatomy, the method based on each of a surface scan of an anatomical region of the patient’s oral cavity and a volumetric density scan of the anatomical region, the anatomical region including at least the targeted extraction object, the method comprising: receiving a first dataset of labeled surface scan segments, each surface scan segment comprising a 3-dimensional (3D) surface model of a corresponding object recognized and segmented from surface scan data of the surface scan, and each surface scan segment having an associated label identifying the surface scan segment; receiving a second dataset of labeled volumetric density scan segments, each volumetric density scan segment comprising a 3-dimensional (3D) volumetric density model of a boundary surface of a corresponding object recognized and segmented from volumetric density scan data of the volumetric density scan, and each volumetric density scan segment having an associated label identifying the volumetric density scan segment; cross-mounting in a common 3D coordinate system, labeled surface scan segments from the first dataset to labeled volumetric density scan segments from the second dataset; receiving 
identification of the targeted extraction object; identifying the 3D volumetric density model associated with the volumetric density scan segment label which corresponds to the identified targeted extraction object; identifying the 3D surface model associated with the surface scan segment label which corresponds to the identified targeted extraction object; generating a third dataset comprising a model comprising at least a subset of the first dataset less portions of the identified 3D surface model co-represented in the identified 3D volumetric density model.Claims 3-7 depend on anyone of allowable claims 1-4 and are therefore allowable for the same reasons as anyone of allowable claims 1-4. Claim 8 is objected to as being dependent upon a rejected base claim, but would be allowable if rewritten in independent form including all of the limitations of the base claim and any intervening claims. The following is a statement of reasons for the indication of allowable subject matter: Claim 8 depend on anyone of allowable claims 1-4 and are therefore allowable for the same reasons as anyone of allowable claims 1-4. Relevant prior art:US 20180028065 Al Methods and apparatuses for generating a model of a subject's teeth. Described herein are intraoral scanning methods and apparatuses for generating a three - dimensional model of a subject's intraoral region (e.g., teeth) including both surface features and internal features. These methods and apparatuses may be used for identifying and evaluating lesions, caries and cracks in the teeth. Any of these methods and apparatuses may use minimum scattering coefficients and/or segmentation to form a volumetric model of the teeth. Conclusion Any inquiry concerning this communication or earlier communications from the examiner should be directed to MARTIN MUSHAMBO whose telephone number is (571)270-3390. The examiner can normally be reached Monday-Friday (8:00AM-5:00PM). 
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Alicia Harrington, can be reached at (571) 272-2330. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/MARTIN MUSHAMBO/
Primary Examiner, Art Unit 2615
11/30/2025
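The allowed independent claims above differ mainly in how the "third dataset" is generated from the cross-mounted models: the co-represented portions of both models, the volumetric model less the co-represented portions, or the surface data less the co-represented portions. A minimal sketch of that distinction, using sets of voxel coordinates as an illustrative stand-in for full 3D surface and volumetric density models (the function name, the set-based representation, and the `mode` parameter are assumptions for illustration, not the applicant's actual implementation):

```python
def generate_third_dataset(surface_segments, volumetric_segments, target_label, mode):
    """surface_segments / volumetric_segments: dicts mapping a segment label
    (e.g. a tooth number) to a set of 3D points, assumed already cross-mounted
    in a common 3D coordinate system."""
    surf = surface_segments[target_label]    # identified 3D surface model
    vol = volumetric_segments[target_label]  # identified 3D volumetric density model
    co = surf & vol                          # portions co-represented in both models
    if mode == "co_represented":   # claim 2 flavor: co-represented portions
        return co
    if mode == "vol_minus_co":     # claim 3 flavor: volumetric model less co-represented
        return vol - co
    if mode == "surf_minus_co":    # claim 4 flavor: surface data less co-represented
        return surf - co
    raise ValueError(f"unknown mode: {mode}")

# Toy example: target object "18" recognized and labeled in both scans.
surface = {"18": {(0, 0, 0), (1, 0, 0)}}
volumetric = {"18": {(1, 0, 0), (2, 0, 0)}}
print(generate_third_dataset(surface, volumetric, "18", "vol_minus_co"))  # {(2, 0, 0)}
```

The point of the sketch is only that the three allowed independent claims reduce to three different set operations over the same pair of label-matched, co-registered models.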

Prosecution Timeline

Jan 27, 2025
Application Filed
Nov 30, 2025
Non-Final Rejection — §DP
Feb 18, 2026
Interview Requested
Feb 27, 2026
Examiner Interview Summary
Feb 27, 2026
Applicant Interview (Telephonic)
Apr 01, 2026
Response Filed

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12602892
WALLPAPER DISPLAY METHOD AND APPARATUS, AND ELECTRONIC DEVICE
2y 5m to grant Granted Apr 14, 2026
Patent 12598282
IMAGE PROCESSING APPARATUS, IMAGE PROCESSING METHOD, AND STORAGE MEDIUM
2y 5m to grant Granted Apr 07, 2026
Patent 12586331
SYSTEM AND METHOD FOR CHANGING OVERALL STYLE OF PUBLIC AREA BASED ON VIRTUAL SCENE
2y 5m to grant Granted Mar 24, 2026
Patent 12579754
INFORMATION PROCESSING DEVICE AND INFORMATION PROCESSING METHOD
2y 5m to grant Granted Mar 17, 2026
Patent 12573146
PRODUCT PLACEMENT SYSTEMS AND METHODS FOR 3D PRODUCTIONS
2y 5m to grant Granted Mar 10, 2026
Study what changed to get past this examiner, based on the 5 most recent grants.


Prosecution Projections

1-2
Expected OA Rounds
85%
Grant Probability
99%
With Interview (+14.1%)
2y 5m
Median Time to Grant
Low
PTA Risk
Based on 816 resolved cases by this examiner. Grant probability derived from career allow rate.
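The headline projection figures can be reproduced from the raw counts shown in the report: 690 grants out of 816 resolved cases gives the ~85% career allow rate, and adding the +14.1% interview lift yields the ~99% "with interview" figure. The additive-lift combination is an assumption for illustration; the tool's exact model is not disclosed:

```python
# Reproduce the dashboard's headline percentages from the raw counts.
granted, resolved = 690, 816
career_allow_rate = granted / resolved               # ~0.846, shown as 85%
interview_lift = 0.141                               # +14.1% lift from interviewed cases
# Assumption: the lift combines additively, capped at 100%.
with_interview = min(career_allow_rate + interview_lift, 1.0)  # ~0.987, shown as 99%
print(round(career_allow_rate * 100), round(with_interview * 100))  # 85 99
```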
