DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Response to Amendment
Claim 1 is amended. Claims 1-19 are pending in this application.
Claim Objections
Claims 1-2 are objected to because of the following informalities: Claims 1 and 2 appear to be amended; however, the amendments are not shown as underlined, struck through, or enclosed in double brackets.
For the purpose of examination, the Examiner will examine the previous claim set (Amendment filed on 06/25/2025) together with the currently amended language ("being") in line 15 of claim 1.
Appropriate correction is required.
Claim Rejections - 35 USC § 102
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:
(a)(2) the claimed invention was described in a patent issued under section 151, or in an application for patent published or deemed published under section 122(b), in which the patent or application, as the case may be, names another inventor and was effectively filed before the effective filing date of the claimed invention.
Claims 1-3, 7-15, and 17-19 are rejected under 35 U.S.C. 102(a)(2) as being anticipated by Choi et al., US 2022/0246269.
Regarding claim 1, Choi discloses a method for predicting a tooth crown and implant feature (fig. 3; para 0059; a process of a single placement mode for automatically placing an implant structure (including a crown and a fixture) with respect to a single missing-tooth area), said method comprising the steps of:
receiving at least one of a volumetric or surface scan image, wherein the volumetric image is a three-dimensional voxel array of a maxillofacial anatomy of a patient and the surface scan image is a polygonal mesh of a maxillofacial anatomy of the patient (para 0013, 0044-0046; acquires teeth image data from a patient. The teeth image data required for placement of an implant structure includes a CT image, an oral model image, and the like; The oral model image may be obtained by scanning the inside of the patient's oral cavity with a 3D intra-oral scanner; The oral model images and CT image include an image obtained by imaging maxillary teeth under the maxillary teeth with the patient's mouth open, an image obtained by imaging mandibular teeth above the mandibular teeth with the patient's mouth open, and an image obtained by imaging a local area with the mouth closed, an oral radiograph, and the like);
segmenting at least one of the volumetric image or surface scan image into a set of distinct anatomical structures (fig. 3; step 1; para 0060-0062; the teeth image processing device automatically segments a tooth area; Subsequently, each tooth area is segmented by setting a boundary for each individual tooth) by assigning each voxel an identifier by structure and assigning at least one of a vertices, face, or points on the mesh an identifier by structure (para 0061-0062; subsequently, a tooth number (i.e., structure identifier) is designated to each segmented tooth area. For example, after a central point of each tooth area is selected, the tooth number is automatically set along a horizontal axis direction on the basis of the selected central point; Through the automatically set tooth number, an order and position of teeth in a row may be identified), wherein the distinct anatomical structures include at least one of a tooth, jaw, mandibular canal, maxillary sinus, fossae, and a missing tooth (figs. 2 and 4-5; para 0057, 0069, and 0072; the figures illustrate the anatomical structures of a tooth, jaw, mandibular canal, maxillary sinus, fossae, and a missing tooth); and
predicting a tooth crown and implant feature in place of the segmented missing tooth (figs. 3, 5, and 7; para 0062-0065 and 0072; the implant structure is placed with respect to the missing-tooth area; a placement position and size of the crown 310 (i.e., tooth crown) may be determined. A shape of the crown 310 is automatically determined according to a tooth number of a missing tooth; determines an initial position of the fixture 320 (i.e., implant feature), using the position of the crown 310 disposed on the CT image or the panoramic image. For example, the initial position of the fixture 320 is determined on the basis of an axis of the crown 310. That is, the initial position of the fixture 320 is determined to be positioned at the center of the crown 310 while having a central axis same as a central axis of the crown 310), wherein the predicted crown and implant feature is at least one of being generated or selected from a library (para 0040, 0045, and 0047; determining a position and size of a crown model in the oral model image of the patient, determining a position of an implant structure including a fixture in the CT image of the patient, designing a guide shape, and outputting a final guide; The oral model image may be obtained by scanning a plaster model generated by imitating a patient's oral cavity with a 3D scanner; The storage unit 12 (i.e., library) according to an example embodiment may store data of an oral model image and a CT image of an individual patient, and may provide, to the controller 14, an oral model image and a CT image of a specific patient from among the entire oral model images and CT images in response to a user's request at the time of dental treatment simulation).
Regarding claim 2, the method of claim 1, Choi further discloses wherein the features comprise at least one of location, orientation, dimension, or geometry of a crown and implant (fig. 3; para 0063-0065).
Regarding claim 3, the method of claim 1, Choi further discloses wherein the predicted crown and implant feature is generated based on a neural network output or rule-based (fig. 1; para 0048-0052; the controller 14 plans implant placement through control by a computer program and controls each component (i.e., rule-based) while performing a simulation on a teeth image according to an implant placement plan).
Regarding claim 7, the method of claim 1, Choi further discloses comprising the step of deriving a panoramic ribbon from the segmentation, wherein a slice from a region of interest (Rol) of the panoramic ribbon is extracted defining predicted targets for implant placement (fig. 3; para 0062-0064).
Regarding claim 8, the method of claim 7, Choi further discloses wherein the slice comprises anatomical measurements of at least one of a bone thickness or height (fig. 3; para 0060-0067).
Regarding claim 9, the method of claim 7, Choi further discloses wherein the slice comprises a distance from a first measurement line to a closest obstacle in implant direction that is either a mandibular canal, maxillary sinus, or a jaw bone edge (figs. 3-5; para 0060-0067).
Regarding claim 10, the method of claim 7, Choi further discloses wherein the slice comprises a vertical distance from an oral end of the first measurement line to a mandible bone edge (figs. 3-5; para 0060-0067).
Regarding claim 11, the method of claim 7, Choi further discloses wherein the slice comprises information related to a risk of collisions with a neural channel based on a minimal distance between the implant and anatomy structures of interest (figs. 3-5; para 0060-0067).
Regarding claim 12, the method of claim 1, Choi further discloses wherein the predicted tooth crown and implant features are comprised within an implant planning report, wherein the results relate to at least one of a location, orientation, dimension or geometry of a predicted crown, implant, and/or specific model of an implant (figs. 3-5; para 0060-0067).
Regarding claim 13, the method of claim 1, Choi further discloses wherein the predicted crown and implant feature is selected from a library of polygonal prototypes by finding a prototype with the closest position and geometry by selecting a set of points on the surface of either one of the predicted “missing tooth” and prototype, and running a point-set-matching algorithm for selecting the closest matched prototype (figs. 3-5; para 0060-0067).
Regarding claim 14, the method of claim 1, Choi further discloses comprising the step of normalizing an intensity value of the received volumetric image by eliminating the values lying outside a standard range to derive zero mean and unit standard deviation (figs. 3-5; para 0060-0067).
Regarding claim 15, the method of claim 1, Choi further discloses wherein the volumetric assignment is by finding a minimal bounding rectangle around the voxels belonging to a localized anatomical structure (fig. 3; para 0060).
Regarding claim 17, the method of claim 1, Choi further discloses wherein the surface scan assignment is by assigning each vertex and/or face of a mesh a distinct anatomical structure identifier (para 0061-0062).
Regarding claim 18, the method of claim 1, Choi further discloses wherein segmentation further comprises the step of assigning a voxel to a segmented tooth’s dental crown if the distance between this voxel and the tooth's highest point is within a predefined threshold (fig. 3; para 0060-0067).
Regarding claim 19, the method of claim 18, Choi further discloses wherein the pre-defined threshold of distance between the voxel and the tooth’s highest point is not greater than 6 mm for the lower (upper) jaw tooth (fig. 3; para 0066-0067).
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.
Claims 4-6 and 16 are rejected under 35 U.S.C. 103 as being unpatentable over Choi et al., US 2022/0246269 in view of Piche’ et al., US 2023/0277283.
Regarding claim 4, the method of claim 1, Choi further discloses wherein the missing tooth is predicted by: inputting one of a manually or a machine-produced segmentation of a radiological image or a surface scan, wherein the segmentation comprises of tooth segmentation, or tooth and anatomy segmentation; removing a random subset of segmented teeth from the input segmentation and replacing said teeth with background; and instructing a controller to predict one or more sites of missing teeth, and for said missing tooth sites, to predict a tooth segmentation, wherein the training target is the removed segmented tooth (fig. 3; para 0060-0062).
Choi discloses claim 4 as enumerated above, but Choi does not explicitly disclose a neural network as claimed.
However, Piche’ discloses a neural network (para 0029).
Therefore, taking the combined disclosures of Choi and Piche’ as a whole, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate a neural network as taught by Piche’ into the invention of Choi for the benefit of yielding a volumetric surface representing the tooth to be reconstructed in its spatial context (Piche’: para 0029).
Regarding claim 5, the method of claim 1, Choi further discloses wherein the crown shape and/or position on an intraoral scan is generated by a controller trained to reconstruct a removed tooth crown from the segmented surface scan (fig. 3; para 0063).
Choi discloses claim 5 as enumerated above, but Choi does not explicitly disclose a neural network as claimed.
However, Piche’ discloses a neural network (para 0029).
Therefore, taking the combined disclosures of Choi and Piche’ as a whole, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate a neural network as taught by Piche’ into the invention of Choi for the benefit of yielding a volumetric surface representing the tooth to be reconstructed in its spatial context (Piche’: para 0029).
Regarding claim 6, the method of claim 5, Choi in the combination further discloses wherein shape and/or position of the crown and implant is predicted based on the largest allowed shape within a predicted position segmented from non-implant areas (fig. 3; para 0062-0067).
Regarding claim 16, the method of claim 1, Choi does not explicitly disclose wherein the volumetric assignment is by defining a probability distribution over anatomical classes based on an output of a neural network probabilistic distribution for each of the anatomical structure as claimed.
However, Piche’ discloses the AI model is asked to specify to which class a tooth of a digital 3D representation of a mouth belongs. The output of the AI model may be a probability distribution over a plurality of classes. Each tooth of the mouth may be considered as a class (para 0032, 0042, and 0056).
Therefore, taking the combined disclosures of Choi and Piche’ as a whole, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate an AI model that specifies to which class a tooth of a digital 3D representation of a mouth belongs, where the output of the AI model may be a probability distribution over a plurality of classes and each tooth of the mouth may be considered as a class, as taught by Piche’, into the invention of Choi for the benefit of enabling the AI model to recognize and classify teeth at their position, missing teeth, and preparations (Piche’: para 0056).
Response to Arguments
Applicant's arguments filed 12/15/2025 have been fully considered but they are not persuasive.
Regarding independent claim 1, Applicant argues that Choi does not disclose “1) assigning each voxel an identifier by structure and assigning at least one of a vertices, face or points on the mesh an identifier by structure and 2) predicting the tooth crown and the implant feature generated or selected from a library in place of the segmented missing tooth” as claimed. Examiner respectfully disagrees. As stated in the rejection above, Choi discloses 1) that a tooth number (i.e., structure identifier) is designated to each segmented tooth area; for example, after a central point of each tooth area is selected, the tooth number is automatically set along a horizontal axis direction on the basis of the selected central point, and through the automatically set tooth number, an order and position of teeth in a row may be identified (para 0061-0062). Further, the Examiner takes the position that the rejection relies on the volumetric-image alternative of the claimed “OR” condition; therefore, the alternative limitation of assigning at least one of vertices, faces, or points on the mesh an identifier by structure (directed to the surface scan image) is not addressed. Choi further discloses 2) determining a position and size of a crown model in the oral model image of the patient, determining a position of an implant structure including a fixture in the CT image of the patient, designing a guide shape, and outputting a final guide; the oral model image may be obtained by scanning a plaster model generated by imitating a patient's oral cavity with a 3D scanner; and the storage unit 12 (i.e., library) may store data of an oral model image and a CT image of an individual patient, and may provide, to the controller 14, an oral model image and a CT image of a specific patient from among the entire oral model images and CT images in response to a user's request at the time of dental treatment simulation (para 0040, 0045, and 0047).
MPEP 2111 states that the USPTO must employ the “broadest reasonable interpretation” of the claims. Under the broadest reasonable interpretation, the Examiner interprets the claimed “assigning each voxel an identifier by structure and predicting the tooth crown and the implant feature generated or selected from a library in place of the segmented missing tooth,” in light of the specification, as reading on Choi's designation of a tooth number (i.e., structure identifier) to each segmented tooth area, and on the storage unit 12 (i.e., library), which may store data of an oral model image and a CT image of an individual patient and may provide, to the controller 14, an oral model image and a CT image of a specific patient from among the entire oral model images and CT images in response to a user's request at the time of dental treatment simulation.
Therefore, the claimed “assigning each voxel an identifier by structure and predicting the tooth crown and the implant feature generated or selected from a library in place of the segmented missing tooth” reads on the disclosure of Choi.
In view of the above arguments, the Examiner believes all rejections are proper and should be maintained.
Conclusion
Any inquiry concerning this communication or earlier communications from the examiner should be directed to VAN D HUYNH whose telephone number is (571) 270-1937. The examiner can normally be reached from 8 AM to 6 PM.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Stephen R Koziol can be reached at (408) 918-7630. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/VAN D HUYNH/Primary Examiner, Art Unit 2665